
[v5,00/27] RFC Support hot device unplug in amdgpu

Message ID 20210428151207.1212258-1-andrey.grodzovsky@amd.com (mailing list archive)

Message

Andrey Grodzovsky April 28, 2021, 3:11 p.m. UTC
Until now, extracting a card either by physical extraction (e.g. an eGPU with a
thunderbolt connection) or by emulation through sysfs (/sys/bus/pci/devices/device_id/remove)
would cause random crashes in user apps. The random crashes in apps were mostly due to
an app that had mapped a device-backed BO (buffer object) into its address space still
trying to access the BO while the backing device was gone.
To address this first problem, Christian suggested fixing the handling of mapped
memory in the clients when the device goes away by forcibly unmapping all buffers the
user processes have, i.e. by clearing their respective VMAs mapping the device BOs.
Then, when the VMAs try to fill in the page tables again, we check in the fault
handler if the device is removed and, if so, return an error. This generates a
SIGBUS to the application, which can then cleanly terminate. This was indeed done,
but it in turn created a problem of kernel OOPSes, where the OOPSes were due to the
fact that while the app was terminating because of the SIGBUS, it would trigger a
use-after-free in the driver by trying to access device structures that were already
released by the PCI remove sequence. This was handled by introducing a 'flush'
sequence during device removal, where we wait for the drm file reference to drop to 0,
meaning all user clients directly using this device have terminated.

v2:
Based on discussions on the mailing list with Daniel and Pekka [1] and on the document
produced by Pekka from those discussions [2], the whole approach of returning SIGBUS and
waiting for all user clients having CPU mappings of device BOs to die was dropped.
Instead, as the document suggests, the device structures are kept alive until the last
reference to the device is dropped by a user client, and in the meanwhile all existing
and new CPU mappings of BOs belonging to the device, directly or by dma-buf import, are
rerouted to a per-user-process dummy rw page. Also, I skipped the 'Requirements for KMS
UAPI' section of [2], since I am trying to get the minimal set of requirements that still
gives a useful solution to work, and that is the 'Requirements for Render and
Cross-Device UAPI' section; hence my test case is removing a secondary device, which is
render-only and is not involved in KMS.
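The rerouting can be sketched roughly as follows. This is illustrative kernel-side code
modeled on the TTM fault path, not the literal patch; the function names are assumptions:

```c
/* Sketch of the fault-path rerouting: take the normal TTM fault path
 * while the device is alive; once it is unplugged, back the VMA with
 * the dummy rw page instead of raising SIGBUS. */
static vm_fault_t amdgpu_gem_fault(struct vm_fault *vmf)
{
	struct ttm_buffer_object *bo = vmf->vma->vm_private_data;
	struct drm_device *ddev = bo->base.dev;
	vm_fault_t ret;
	int idx;

	if (drm_dev_enter(ddev, &idx)) {
		ret = ttm_bo_vm_fault(vmf);	/* device alive: normal path */
		drm_dev_exit(idx);
	} else {
		/* device gone: map the dummy page so the app keeps running */
		ret = ttm_bo_vm_dummy_page(vmf, vmf->vma->vm_page_prot);
	}
	return ret;
}
```

The key property is that the application never sees a fatal signal; reads and writes
simply land in a private scratch page once the device disappears.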

v3:
More updates following comments on v2, such as removing the loop to find the DRM file
when rerouting page faults to the dummy page, getting rid of an unnecessary sysfs
handling refactoring, and moving prevention of GPU recovery post device unplug from
amdgpu to the scheduler layer.
On top of that, added unplug support for IOMMU enabled systems.

v4:
Drop the last sysfs hack and use a sysfs default attribute.
Guard against write accesses after device removal to avoid modifying released memory.
Update dummy page handling to on-demand allocation and release through the DRM managed framework.
Add a return value to the scheduler job TO (timeout) handler (by Luben Tuikov) and use it in
amdgpu to prevent GPU recovery post device unplug.
Also rebase on top of drm-misc-next instead of amd-staging-drm-next.
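The on-demand allocation tied to the DRM managed framework can be sketched like this
(illustrative, not the literal patch; the helper names are made up):

```c
/* Sketch: allocate the dummy page lazily on first use and tie its
 * lifetime to the drm_device via the DRM managed (drmm) framework,
 * so it is freed automatically when the last device reference drops. */
static void free_dummy_page(struct drm_device *dev, void *res)
{
	__free_page((struct page *)res);
}

static struct page *get_dummy_page(struct drm_device *dev)
{
	struct page *page = alloc_page(GFP_KERNEL | __GFP_ZERO);

	if (!page)
		return NULL;
	/* on registration failure the action runs immediately, freeing the page */
	if (drmm_add_action_or_reset(dev, free_dummy_page, page))
		return NULL;
	return page;
}
```

Using a drmm release action avoids an explicit teardown path in the driver's fini code,
which is exactly the class of manual cleanup this series keeps removing.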

v5:
The most significant change in this series is the improved protection against the kernel
driver accessing MMIO ranges that were allocated for the device once the device is gone.
To do this, first a patch 'drm/amdgpu: Unmap all MMIO mappings' is introduced. This patch
unmaps all MMIO mapped into the kernel address space in the form of BARs and kernel BOs
with CPU-visible VRAM mappings. This helped discover multiple such access points, because
a page fault is immediately generated on access. Most of them were solved by moving HW
fini code into the pci_remove stage (patch 'drm/amdgpu: Add early fini callback'), and
for some that were harder to unwind, drm_dev_enter/exit scoping was used. In addition,
all the IOCTLs and all background work and timers are now protected with drm_dev_enter/exit
at their root, so that after drm_dev_unplug finishes none of them runs anymore and the
pci_remove thread is the only executing thread that might touch the HW. To prevent
deadlocks in this case against threads stuck on various HW or SW fences, the patches
'drm/amdgpu: Finalise device fences on device remove' and 'drm/amdgpu: Add rw_sem to
pushing job into sched queue' take care of force-signaling all such existing fences and
rejecting any newly added ones.
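The root-level guarding of IOCTLs and background work follows the usual
drm_dev_enter/exit pattern, roughly as below (the device struct and register name are
illustrative, not from the series):

```c
/* Sketch: bail out of queued work once the device is unplugged. While
 * a thread is inside the enter/exit section, drm_dev_unplug() cannot
 * complete, so the MMIO mapping stays valid for the duration of the
 * access. */
static void my_gpu_background_work(struct work_struct *work)
{
	struct my_gpu *gpu = container_of(work, struct my_gpu, work);
	int idx;

	if (!drm_dev_enter(&gpu->drm, &idx))
		return;		/* device gone: nothing to do */

	writel(0x1, gpu->mmio + MY_GPU_HEARTBEAT_REG);	/* safe under enter/exit */

	drm_dev_exit(idx);
}
```

Because drm_dev_unplug() synchronizes against all in-flight enter/exit sections before
returning, once it completes no guarded path can still be touching the hardware.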

With these patches I am able to gracefully remove a secondary card using the sysfs remove
hook while glxgears is running off of the secondary card (DRI_PRIME=1), without kernel
oopses or hangs, and keep working with the primary card or soft reset the device without
hangs or oopses.
Also, as per Daniel's comment, I added 3 tests to the IGT [4] core_hotunplug test suite:
remove device while commands are submitted, while a BO is exported, and while a fence is
exported (not pushed yet).
It is also now possible to plug the device back in after unplug.
Some users can already successfully use these patches with eGPU boxes [3].
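For reference, the emulated unplug and replug cycle is driven entirely from sysfs; the
PCI address below is a placeholder, substitute whatever lspci reports for the card under
test:

```shell
# Emulate hot-unplug of the card at the given (placeholder) PCI address
echo 1 | sudo tee /sys/bus/pci/devices/0000:03:00.0/remove

# ... card is gone; user clients fall back to the dummy-page mappings ...

# Re-probe the bus to "plug the device back"
echo 1 | sudo tee /sys/bus/pci/rescan
```

This mirrors what a physical thunderbolt disconnect/reconnect exercises, minus the
electrical side, which is why it makes a convenient regression test.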




TODOs for followup work:
Convert AMDGPU code to use devm (for hw stuff) and drmm (for sw stuff and allocations) (Daniel)
Add support for 'Requirements for KMS UAPI' section of [2] - unplugging primary, display connected card.

[1] - Discussions during v4 of the patchset https://lists.freedesktop.org/archives/amd-gfx/2021-January/058595.html
[2] - drm/doc: device hot-unplug for userspace https://www.spinics.net/lists/dri-devel/msg259755.html
[3] - Related gitlab ticket https://gitlab.freedesktop.org/drm/amd/-/issues/1081
[4] - https://gitlab.freedesktop.org/agrodzov/igt-gpu-tools/-/commits/master

Andrey Grodzovsky (27):
  drm/ttm: Remap all page faults to per process dummy page.
  drm/ttm: Expose ttm_tt_unpopulate for driver use
  drm/amdgpu: Split amdgpu_device_fini into early and late
  drm/amdkfd: Split kfd suspend from device exit
  drm/amdgpu: Add early fini callback
  drm/amdgpu: Handle IOMMU enabled case.
  drm/amdgpu: Remap all page faults to per process dummy page.
  PCI: add support for dev_groups to struct pci_device_driver
  drm/amdgpu: Move some sysfs attrs creation to default_attr
  drm/amdgpu: Guard against write accesses after device removal
  drm/sched: Make timeout timer rearm conditional.
  drm/amdgpu: Prevent any job recoveries after device is unplugged.
  drm/amdgpu: When finalizing the fence driver, stop scheduler first.
  drm/amdgpu: Fix hang on device removal.
  drm/scheduler: Fix hang when sched_entity released
  drm/amdgpu: Unmap all MMIO mappings
  drm/amdgpu: Add rw_sem to pushing job into sched queue
  drm/sched: Expose drm_sched_entity_kill_jobs
  drm/amdgpu: Finalise device fences on device remove.
  drm: Scope all DRM IOCTLs with drm_dev_enter/exit
  drm/amdgpu: Add support for hot-unplug feature at DRM level.
  drm/amd/display: Scope all DM queued work with drm_dev_enter/exit
  drm/amd/powerplay: Scope all PM queued work with drm_dev_enter/exit
  drm/amdkfd: Scope all KFD queued work with drm_dev_enter/exit
  drm/amdgpu: Scope all amdgpu queued work with drm_dev_enter/exit
  drm/amd/display: Remove superfluous drm_mode_config_cleanup
  drm/amdgpu: Verify DMA operations from device are done

 drivers/gpu/drm/amd/amdgpu/amdgpu.h           |  18 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c    |  13 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h    |   2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c  |  17 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c        |  13 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c    | 353 ++++++++++++++----
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c       |  34 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c     |  34 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c      |   3 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gart.h      |   1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c       |   9 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c   |  25 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c        | 228 +++++------
 drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c       |  61 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_irq.h       |   3 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c       |  33 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_jpeg.c      |  28 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c       |  12 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.c    |  41 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.h    |   7 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c       | 115 +++---
 drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h       |   3 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c       |  56 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c      |  70 ++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h      |  52 +--
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c       |  21 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c       |  74 ++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c       |  45 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c       |  83 ++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c        |   7 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c  |  14 +-
 drivers/gpu/drm/amd/amdgpu/cik_ih.c           |   3 +-
 drivers/gpu/drm/amd/amdgpu/cz_ih.c            |   3 +-
 drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c         |  10 +-
 drivers/gpu/drm/amd/amdgpu/iceland_ih.c       |   3 +-
 drivers/gpu/drm/amd/amdgpu/navi10_ih.c        |   5 +-
 drivers/gpu/drm/amd/amdgpu/psp_v11_0.c        |  44 +--
 drivers/gpu/drm/amd/amdgpu/psp_v12_0.c        |   8 +-
 drivers/gpu/drm/amd/amdgpu/psp_v3_1.c         |   8 +-
 drivers/gpu/drm/amd/amdgpu/si_ih.c            |   3 +-
 drivers/gpu/drm/amd/amdgpu/tonga_ih.c         |   3 +-
 drivers/gpu/drm/amd/amdgpu/vce_v4_0.c         |  26 +-
 drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c         |  22 +-
 drivers/gpu/drm/amd/amdgpu/vega10_ih.c        |   5 +-
 drivers/gpu/drm/amd/amdgpu/vega20_ih.c        |   2 +-
 drivers/gpu/drm/amd/amdkfd/kfd_device.c       |   3 +-
 drivers/gpu/drm/amd/amdkfd/kfd_interrupt.c    |  14 +-
 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |  13 +-
 .../amd/display/amdgpu_dm/amdgpu_dm_hdcp.c    | 124 +++---
 .../drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c |  24 +-
 drivers/gpu/drm/amd/include/amd_shared.h      |   2 +
 drivers/gpu/drm/amd/pm/amdgpu_dpm.c           |  44 ++-
 .../drm/amd/pm/powerplay/smumgr/smu7_smumgr.c |   2 +
 drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c     |  26 +-
 drivers/gpu/drm/drm_ioctl.c                   |  15 +-
 drivers/gpu/drm/scheduler/sched_entity.c      |   6 +-
 drivers/gpu/drm/scheduler/sched_main.c        |  35 +-
 drivers/gpu/drm/ttm/ttm_bo_vm.c               |  79 +++-
 drivers/gpu/drm/ttm/ttm_tt.c                  |   1 +
 drivers/pci/pci-driver.c                      |   1 +
 include/drm/drm_drv.h                         |   6 +
 include/drm/gpu_scheduler.h                   |   1 +
 include/drm/ttm/ttm_bo_api.h                  |   2 +
 include/linux/pci.h                           |   3 +
 64 files changed, 1388 insertions(+), 633 deletions(-)

Comments

Bjorn Helgaas April 28, 2021, 5:07 p.m. UTC | #1
On Wed, Apr 28, 2021 at 11:11:40AM -0400, Andrey Grodzovsky wrote:
> Until now extracting a card either by physical extraction (e.g. eGPU with 
> thunderbolt connection or by emulation through  syfs -> /sys/bus/pci/devices/device_id/remove) 
> would cause random crashes in user apps. The random crashes in apps were 
> mostly due to the app having mapped a device backed BO into its address 
> space was still trying to access the BO while the backing device was gone.
> To answer this first problem Christian suggested to fix the handling of mapped 
> memory in the clients when the device goes away by forcibly unmap all buffers the 
> user processes has by clearing their respective VMAs mapping the device BOs. 
> Then when the VMAs try to fill in the page tables again we check in the fault 
> handlerif the device is removed and if so, return an error. This will generate a 
> SIGBUS to the application which can then cleanly terminate.This indeed was done 
> but this in turn created a problem of kernel OOPs were the OOPSes were due to the 
> fact that while the app was terminating because of the SIGBUSit would trigger use 
> after free in the driver by calling to accesses device structures that were already 
> released from the pci remove sequence.This was handled by introducing a 'flush' 
> sequence during device removal were we wait for drm file reference to drop to 0 
> meaning all user clients directly using this device terminated.

If DRM includes cover letters in merges, maybe fix the below.  If they
also include the v2, v3, etc below, also consider picking a line
width and sticking to it.  It seems to be creeping wider every rev.

BO?
s/syfs/sysfs/
s/forcibly unmap/forcibly unmapping/
s/handlerif/handler if/
s/processes has/processes have/
s/terminate.This/terminate. This/
s/were the/where the/
s/SIGBUSit/SIGBUS it/
s/to accesses/to access/
s/sequence.This/sequence. This/
s/were we/where we/

> v2:
> Based on discussions in the mailing list with Daniel and Pekka [1] and based on the document 
> produced by Pekka from those discussions [2] the whole approach with returning SIGBUS and 
> waiting for all user clients having CPU mapping of device BOs to die was dropped. 
> Instead as per the document suggestion the device structures are kept alive until 
> the last reference to the device is dropped by user client and in the meanwhile all existing and new CPU mappings of the BOs 
> belonging to the device directly or by dma-buf import are rerouted to per user 
> process dummy rw page.Also, I skipped the 'Requirements for KMS UAPI' section of [2] 
> since i am trying to get the minimal set of requirements that still give useful solution 
> to work and this is the'Requirements for Render and Cross-Device UAPI' section and so my 
> test case is removing a secondary device, which is render only and is not involved 
> in KMS.
> 
> v3:
> More updates following comments from v2 such as removing loop to find DRM file when rerouting 
> page faults to dummy page,getting rid of unnecessary sysfs handling refactoring and moving 
> prevention of GPU recovery post device unplug from amdgpu to scheduler layer. 
> On top of that added unplug support for the IOMMU enabled system.
> 
> v4:
> Drop last sysfs hack and use sysfs default attribute.
> Guard against write accesses after device removal to avoid modifying released memory.
> Update dummy pages handling to on demand allocation and release through drm managed framework.
> Add return value to scheduler job TO handler (by Luben Tuikov) and use this in amdgpu for prevention 
> of GPU recovery post device unplug
> Also rebase on top of drm-misc-mext instead of amd-staging-drm-next
> 
> v5:
> The most significant in this series is the improved protection from kernel driver accessing MMIO ranges that were allocated
> for the device once the device is gone. To do this, first a patch 'drm/amdgpu: Unmap all MMIO mappings' is introduced.
> This patch unamps all MMIO mapped into the kernel address space in the form of BARs and kernel BOs with CPU visible VRAM mappings.
> This way it helped to discover multiple such access points because a page fault would be immediately generated on access. Most of them
> were solved by moving HW fini code into pci_remove stage (patch drm/amdgpu: Add early fini callback) and for some who 
> were harder to unwind drm_dev_enter/exit scoping was used. In addition all the IOCTLs and all background work and timers 
> are now protected with drm_dev_enter/exit at their root in an attempt that after drm_dev_unplug is finished none of them 
> run anymore and the pci_remove thread is the only thread executing which might touch the HW. To prevent deadlocks in such 
> case against threads stuck on various HW or SW fences patches 'drm/amdgpu: Finalise device fences on device remove'  
> and drm/amdgpu: Add rw_sem to pushing job into sched queue' take care of force signaling all such existing fences 
> and rejecting any newly added ones.
> 
> With these patches I am able to gracefully remove the secondary card using sysfs remove hook while glxgears is running off of secondary 
> card (DRI_PRIME=1) without kernel oopses or hangs and keep working with the primary card or soft reset the device without hangs or oopses.
> Also as per Daniel's comment I added 3 tests to IGT [4] to core_hotunplug test suite - remove device while commands are submitted, 
> exported BO and exported fence (not pushed yet).
> Also now it's possible to plug back the device after unplug 
> Also some users now can successfully use those patches with eGPU boxes[3].
> 
> 
> 
> 
> TODOs for followup work:
> Convert AMDGPU code to use devm (for hw stuff) and drmm (for sw stuff and allocations) (Daniel)
> Add support for 'Requirements for KMS UAPI' section of [2] - unplugging primary, display connected card.
> 
> [1] - Discussions during v4 of the patchset https://lists.freedesktop.org/archives/amd-gfx/2021-January/058595.html
> [2] - drm/doc: device hot-unplug for userspace https://www.spinics.net/lists/dri-devel/msg259755.html
> [3] - Related gitlab ticket https://gitlab.freedesktop.org/drm/amd/-/issues/1081
> [4] - https://gitlab.freedesktop.org/agrodzov/igt-gpu-tools/-/commits/master
> 
> Andrey Grodzovsky (27):
>   drm/ttm: Remap all page faults to per process dummy page.
>   drm/ttm: Expose ttm_tt_unpopulate for driver use
>   drm/amdgpu: Split amdgpu_device_fini into early and late
>   drm/amdkfd: Split kfd suspend from devie exit
>   drm/amdgpu: Add early fini callback
>   drm/amdgpu: Handle IOMMU enabled case.
>   drm/amdgpu: Remap all page faults to per process dummy page.
>   PCI: add support for dev_groups to struct pci_device_driver
>   dmr/amdgpu: Move some sysfs attrs creation to default_attr
>   drm/amdgpu: Guard against write accesses after device removal
>   drm/sched: Make timeout timer rearm conditional.
>   drm/amdgpu: Prevent any job recoveries after device is unplugged.
>   drm/amdgpu: When filizing the fence driver. stop scheduler first.
>   drm/amdgpu: Fix hang on device removal.
>   drm/scheduler: Fix hang when sched_entity released
>   drm/amdgpu: Unmap all MMIO mappings
>   drm/amdgpu: Add rw_sem to pushing job into sched queue
>   drm/sched: Expose drm_sched_entity_kill_jobs
>   drm/amdgpu: Finilise device fences on device remove.
>   drm: Scope all DRM IOCTLs  with drm_dev_enter/exit
>   drm/amdgpu: Add support for hot-unplug feature at DRM level.
>   drm/amd/display: Scope all DM queued work with drm_dev_enter/exit
>   drm/amd/powerplay: Scope all PM queued work with drm_dev_enter/exit
>   drm/amdkfd: Scope all KFD queued work with drm_dev_enter/exit
>   drm/amdgpu: Scope all amdgpu queued work with drm_dev_enter/exit
>   drm/amd/display: Remove superflous drm_mode_config_cleanup
>   drm/amdgpu: Verify DMA opearations from device are done
> 
>  drivers/gpu/drm/amd/amdgpu/amdgpu.h           |  18 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c    |  13 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h    |   2 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c  |  17 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c        |  13 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c    | 353 ++++++++++++++----
>  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c       |  34 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c     |  34 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c      |   3 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_gart.h      |   1 +
>  drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c       |   9 +
>  drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c   |  25 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c        | 228 +++++------
>  drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c       |  61 ++-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_irq.h       |   3 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_job.c       |  33 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_jpeg.c      |  28 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c       |  12 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_object.c    |  41 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_object.h    |   7 +
>  drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c       | 115 +++---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h       |   3 +
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c       |  56 ++-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c      |  70 ++++
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h      |  52 +--
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c       |  21 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c       |  74 ++--
>  drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c       |  45 ++-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c       |  83 ++--
>  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c        |   7 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c  |  14 +-
>  drivers/gpu/drm/amd/amdgpu/cik_ih.c           |   3 +-
>  drivers/gpu/drm/amd/amdgpu/cz_ih.c            |   3 +-
>  drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c         |  10 +-
>  drivers/gpu/drm/amd/amdgpu/iceland_ih.c       |   3 +-
>  drivers/gpu/drm/amd/amdgpu/navi10_ih.c        |   5 +-
>  drivers/gpu/drm/amd/amdgpu/psp_v11_0.c        |  44 +--
>  drivers/gpu/drm/amd/amdgpu/psp_v12_0.c        |   8 +-
>  drivers/gpu/drm/amd/amdgpu/psp_v3_1.c         |   8 +-
>  drivers/gpu/drm/amd/amdgpu/si_ih.c            |   3 +-
>  drivers/gpu/drm/amd/amdgpu/tonga_ih.c         |   3 +-
>  drivers/gpu/drm/amd/amdgpu/vce_v4_0.c         |  26 +-
>  drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c         |  22 +-
>  drivers/gpu/drm/amd/amdgpu/vega10_ih.c        |   5 +-
>  drivers/gpu/drm/amd/amdgpu/vega20_ih.c        |   2 +-
>  drivers/gpu/drm/amd/amdkfd/kfd_device.c       |   3 +-
>  drivers/gpu/drm/amd/amdkfd/kfd_interrupt.c    |  14 +-
>  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |  13 +-
>  .../amd/display/amdgpu_dm/amdgpu_dm_hdcp.c    | 124 +++---
>  .../drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c |  24 +-
>  drivers/gpu/drm/amd/include/amd_shared.h      |   2 +
>  drivers/gpu/drm/amd/pm/amdgpu_dpm.c           |  44 ++-
>  .../drm/amd/pm/powerplay/smumgr/smu7_smumgr.c |   2 +
>  drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c     |  26 +-
>  drivers/gpu/drm/drm_ioctl.c                   |  15 +-
>  drivers/gpu/drm/scheduler/sched_entity.c      |   6 +-
>  drivers/gpu/drm/scheduler/sched_main.c        |  35 +-
>  drivers/gpu/drm/ttm/ttm_bo_vm.c               |  79 +++-
>  drivers/gpu/drm/ttm/ttm_tt.c                  |   1 +
>  drivers/pci/pci-driver.c                      |   1 +
>  include/drm/drm_drv.h                         |   6 +
>  include/drm/gpu_scheduler.h                   |   1 +
>  include/drm/ttm/ttm_bo_api.h                  |   2 +
>  include/linux/pci.h                           |   3 +
>  64 files changed, 1388 insertions(+), 633 deletions(-)
> 
> -- 
> 2.25.1
>