[v4,00/14] RFC Support hot device unplug in amdgpu

Message ID: 1611003683-3534-1-git-send-email-andrey.grodzovsky@amd.com

Message

Andrey Grodzovsky Jan. 18, 2021, 9:01 p.m. UTC
Until now, extracting a card, either by physical removal (e.g. an eGPU with a 
Thunderbolt connection) or by emulation through sysfs 
(/sys/bus/pci/devices/device_id/remove), would cause random crashes in user 
apps. The crashes were mostly due to an app that had mapped a device-backed BO 
into its address space still trying to access the BO while the backing device 
was gone. To address this first problem, Christian suggested fixing the 
handling of mapped memory in the clients when the device goes away by forcibly 
unmapping all buffers the user processes hold, i.e. clearing their respective 
VMAs that map the device BOs. When the VMAs then try to fill in the page 
tables again, we check in the fault handler whether the device has been 
removed and, if so, return an error. This generates a SIGBUS to the 
application, which can then terminate cleanly. This was indeed done, but it in 
turn created a problem of kernel OOPSes: while the app was terminating because 
of the SIGBUS, it would trigger a use-after-free in the driver by accessing 
device structures that had already been released in the PCI remove sequence. 
That was handled by introducing a 'flush' sequence during device removal, 
where we wait for the drm file reference count to drop to 0, meaning all user 
clients directly using this device have terminated.
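
A minimal sketch of that original fault-handler idea, for illustration only: 
drm_dev_enter()/drm_dev_exit() are the real DRM unplug guards, while the 
surrounding handler and the __bo_fault_insert_pages() helper are made up here.

    static vm_fault_t bo_vm_fault(struct vm_fault *vmf)
    {
            struct ttm_buffer_object *bo = vmf->vma->vm_private_data;
            vm_fault_t ret;
            int idx;

            /* Fails once drm_dev_unplug() was called for this device. */
            if (!drm_dev_enter(bo->base.dev, &idx))
                    return VM_FAULT_SIGBUS; /* v1: kill the client cleanly */

            ret = __bo_fault_insert_pages(vmf); /* hypothetical normal fault path */

            drm_dev_exit(idx);
            return ret;
    }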

v2:
Based on discussions on the mailing list with Daniel and Pekka [1], and on the 
document Pekka produced from those discussions [2], the whole approach of 
returning SIGBUS and waiting for all user clients with CPU mappings of device 
BOs to die was dropped. Instead, as the document suggests, the device 
structures are kept alive until the last reference to the device is dropped by 
a user client, and in the meantime all existing and new CPU mappings of BOs 
belonging to the device, whether directly or via dma-buf import, are rerouted 
to a per-user-process dummy rw page. Also, I skipped the 'Requirements for KMS 
UAPI' section of [2], since I am trying to get a minimal set of requirements 
that still gives a useful solution to work, and that is the 'Requirements for 
Render and Cross-Device UAPI' section; accordingly, my test case is removing a 
secondary device, which is render-only and not involved in KMS.
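
Relative to the sketch above, the v2 approach changes only the unplugged 
branch: instead of VM_FAULT_SIGBUS, the faulting address is backed by the 
dummy page. vmf_insert_pfn() is the real helper; dummy_page stands for the 
assumed per-drm_file allocation.

    if (!drm_dev_enter(bo->base.dev, &idx))
            /* Device gone: keep the app running by mapping the per user
             * process dummy rw page instead of raising SIGBUS. */
            return vmf_insert_pfn(vmf->vma, vmf->address,
                                  page_to_pfn(dummy_page));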

v3:
More updates following comments on v2, such as removing the loop to find the 
DRM file when rerouting page faults to the dummy page, getting rid of 
unnecessary sysfs handling refactoring, and moving the prevention of GPU 
recovery after device unplug from amdgpu to the scheduler layer. 
On top of that, added unplug support for IOMMU-enabled systems.

v4:
Drop the last sysfs hack and use the sysfs default attribute.
Guard against write accesses after device removal to avoid modifying released memory.
Update dummy page handling to on-demand allocation, with release through the drm managed framework.
Add a return value to the scheduler job timeout handler (by Luben Tuikov) and use it in amdgpu to prevent GPU recovery after device unplug.
Also rebase on top of drm-misc-next instead of amd-staging-drm-next.
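
Two of these points, sketched: on-demand dummy page allocation tied to the 
drm managed framework, and a timeout handler returning a status. Only 
drmm_add_action_or_reset() and the drm_gpu_sched_stat values from Luben's 
patch are real interfaces; the surrounding helpers are illustrative.

    static void dummy_page_release(struct drm_device *dev, void *data)
    {
            __free_page((struct page *)data);
    }

    static struct page *get_dummy_page(struct drm_device *dev)
    {
            struct page *page = alloc_page(GFP_KERNEL | __GFP_ZERO);

            /* Released only when the last drm_device reference drops,
             * i.e. after every user mapping of it is gone. */
            if (page && drmm_add_action_or_reset(dev, dummy_page_release, page))
                    page = NULL; /* on failure the action already freed it */
            return page;
    }

    static enum drm_gpu_sched_stat sched_job_timedout(struct drm_sched_job *s_job)
    {
            int idx;

            if (!drm_dev_enter(job_to_drm_dev(s_job), &idx)) /* illustrative helper */
                    return DRM_GPU_SCHED_STAT_ENODEV; /* no rearm, no recovery */

            /* ... normal hang handling / GPU reset ... */
            drm_dev_exit(idx);
            return DRM_GPU_SCHED_STAT_NOMINAL;
    }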

With these patches I am able to gracefully remove the secondary card using the 
sysfs remove hook while glxgears is running off the secondary card 
(DRI_PRIME=1), without kernel oopses or hangs, and can keep working with the 
primary card or soft reset the device without hangs or oopses.

TODOs for followup work:
Convert AMDGPU code to use devm (for hw stuff) and drmm (for sw stuff and allocations) (Daniel).
Support plugging the secondary device back in after unplug - currently still hitting a HW error on re-plug.
Add support for the 'Requirements for KMS UAPI' section of [2] - unplugging the primary, display-connected card.

[1] - Discussions during v3 of the patchset https://www.spinics.net/lists/amd-gfx/msg55576.html
[2] - drm/doc: device hot-unplug for userspace https://www.spinics.net/lists/dri-devel/msg259755.html
[3] - Related gitlab ticket https://gitlab.freedesktop.org/drm/amd/-/issues/1081

Andrey Grodzovsky (13):
  drm/ttm: Remap all page faults to per process dummy page.
  drm: Unmap the entire device address space on device unplug
  drm/ttm: Expose ttm_tt_unpopulate for driver use
  drm/sched: Cancel and flush all outstanding jobs before finish.
  drm/amdgpu: Split amdgpu_device_fini into early and late
  drm/amdgpu: Add early fini callback
  drm/amdgpu: Register IOMMU topology notifier per device.
  drm/amdgpu: Fix a bunch of sdma code crash post device unplug
  drm/amdgpu: Remap all page faults to per process dummy page.
  drm/amdgpu: Move some sysfs attrs creation to default_attr
  drm/amdgpu: Guard against write accesses after device removal
  drm/sched: Make timeout timer rearm conditional.
  drm/amdgpu: Prevent any job recoveries after device is unplugged.

Luben Tuikov (1):
  drm/scheduler: Job timeout handler returns status

 drivers/gpu/drm/amd/amdgpu/amdgpu.h               |  11 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c      |  17 +--
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c        | 149 ++++++++++++++++++++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c           |  20 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c         |  15 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c          |   2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gart.h          |   1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c           |   9 ++
 drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c       |  25 ++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c           |  26 ++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_irq.h           |   3 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c           |  19 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c           |  12 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.c        |  10 ++
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.h        |   2 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c           |  53 +++++---
 drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h           |   3 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c           |   1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c          |  70 ++++++++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h          |  52 +-------
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c           |  21 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c            |   8 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c      |  14 +-
 drivers/gpu/drm/amd/amdgpu/cik_ih.c               |   2 +-
 drivers/gpu/drm/amd/amdgpu/cz_ih.c                |   2 +-
 drivers/gpu/drm/amd/amdgpu/iceland_ih.c           |   2 +-
 drivers/gpu/drm/amd/amdgpu/navi10_ih.c            |   2 +-
 drivers/gpu/drm/amd/amdgpu/psp_v11_0.c            |  16 +--
 drivers/gpu/drm/amd/amdgpu/psp_v12_0.c            |   8 +-
 drivers/gpu/drm/amd/amdgpu/psp_v3_1.c             |   8 +-
 drivers/gpu/drm/amd/amdgpu/si_ih.c                |   2 +-
 drivers/gpu/drm/amd/amdgpu/tonga_ih.c             |   2 +-
 drivers/gpu/drm/amd/amdgpu/vega10_ih.c            |   2 +-
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |  12 +-
 drivers/gpu/drm/amd/include/amd_shared.h          |   2 +
 drivers/gpu/drm/drm_drv.c                         |   3 +
 drivers/gpu/drm/etnaviv/etnaviv_sched.c           |  10 +-
 drivers/gpu/drm/lima/lima_sched.c                 |   4 +-
 drivers/gpu/drm/panfrost/panfrost_job.c           |   9 +-
 drivers/gpu/drm/scheduler/sched_main.c            |  18 ++-
 drivers/gpu/drm/ttm/ttm_bo_vm.c                   |  82 +++++++++++-
 drivers/gpu/drm/ttm/ttm_tt.c                      |   1 +
 drivers/gpu/drm/v3d/v3d_sched.c                   |  32 ++---
 include/drm/gpu_scheduler.h                       |  17 ++-
 include/drm/ttm/ttm_bo_api.h                      |   2 +
 45 files changed, 583 insertions(+), 198 deletions(-)

Comments

Daniel Vetter Jan. 19, 2021, 2:16 p.m. UTC | #1
On Mon, Jan 18, 2021 at 04:01:09PM -0500, Andrey Grodzovsky wrote:
> [cover letter snipped]

btw have you tried this out with some of the igts we have? core_hotunplug
is the one I'm thinking of. Might be worth extending this for amdgpu-specific
stuff (like running some batches on it while hotunplugging).

Since there's so many corner cases we need to test here (shared dma-buf,
shared dma_fence) I think it would make sense to have a shared testcase
across drivers. Only specific thing would be some hooks to keep the gpu
busy in some fashion while we yank the driver. But just to get it started
you can throw in entirely amdgpu specific subtests and just share some of
the test code.
-Daniel
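
A subtest along the lines Daniel describes might look roughly like this (a 
sketch only: igt_main, igt_subtest, igt_fixture, igt_assert and 
drm_open_driver() are stock igt helpers, while keep_gpu_busy() and the PCI 
address are made-up placeholders):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include "igt.h"

    /* Unplug a PCI device by writing to its sysfs remove hook. */
    static void pci_remove(const char *bdf)
    {
            char path[128];
            int fd;

            snprintf(path, sizeof(path), "/sys/bus/pci/devices/%s/remove", bdf);
            fd = open(path, O_WRONLY);
            igt_assert(fd >= 0);
            igt_assert(write(fd, "1", 1) == 1);
            close(fd);
    }

    igt_main
    {
            int fd = -1;

            igt_fixture
                    fd = drm_open_driver(DRIVER_AMDGPU);

            igt_subtest("unplug-while-busy") {
                    keep_gpu_busy(fd);          /* hypothetical: spin batches */
                    pci_remove("0000:03:00.0"); /* example secondary-card BDF */
                    /* Subsequent ioctls should now fail cleanly - no oops. */
            }

            igt_fixture
                    close(fd);
    }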

Andrey Grodzovsky Jan. 19, 2021, 5:31 p.m. UTC | #2
On 1/19/21 9:16 AM, Daniel Vetter wrote:
> On Mon, Jan 18, 2021 at 04:01:09PM -0500, Andrey Grodzovsky wrote:
>> [cover letter snipped]
> btw have you tried this out with some of the igts we have? core_hotunplug
> is the one I'm thinking of. Might be worth extending this for amdgpu-specific
> stuff (like running some batches on it while hotunplugging).

No, I mostly used just running glxgears while testing, which already covers
the exported/imported dma-buf case, plus a few manually hacked tests in the
libdrm amdgpu test suite.


>
> Since there's so many corner cases we need to test here (shared dma-buf,
> shared dma_fence) I think it would make sense to have a shared testcase
> across drivers.


I'm not too familiar with IGT - is there an easy way to set up shared
dma-buf and fence use cases there, or do you mean I need to add them now?


> Only specific thing would be some hooks to keep the gpu
> busy in some fashion while we yank the driver.


Do you mean like starting X and some active rendering on top (like glxgears)
automatically from within IGT?


> But just to get it started
> you can throw in entirely amdgpu specific subtests and just share some of
> the test code.
> -Daniel


In general, I wasn't aware of this test suite, and it looks like it does what
I test, among other things. I will definitely try to run with it, although the
rescan part will not work, since plugging the device back in is on my TODO
list and not in the scope of this patchset, so I will probably comment the
re-scan section out while testing.

Andrey


Daniel Vetter Jan. 19, 2021, 6:08 p.m. UTC | #3
On Tue, Jan 19, 2021 at 6:31 PM Andrey Grodzovsky
<Andrey.Grodzovsky@amd.com> wrote:
>
>
> On 1/19/21 9:16 AM, Daniel Vetter wrote:
> > On Mon, Jan 18, 2021 at 04:01:09PM -0500, Andrey Grodzovsky wrote:
> >> [cover letter snipped]
> > btw have you tried this out with some of the igts we have? core_hotunplug
> > is the one I'm thinking of. Might be worth extending this for amdgpu-specific
> > stuff (like running some batches on it while hotunplugging).
>
> No, I mostly used just running glxgears while testing, which already covers
> the exported/imported dma-buf case, plus a few manually hacked tests in the
> libdrm amdgpu test suite.
>
>
> >
> > Since there's so many corner cases we need to test here (shared dma-buf,
> > shared dma_fence) I think it would make sense to have a shared testcase
> > across drivers.
>
>
> I'm not too familiar with IGT - is there an easy way to set up shared
> dma-buf and fence use cases there, or do you mean I need to add them now?

We do have test infrastructure for all of that, but the hotunplug test
doesn't have that yet I think.

> > Only specific thing would be some hooks to keep the gpu
> > busy in some fashion while we yank the driver.
>
>
> Do you mean like starting X and some active rendering on top (like glxgears)
> automatically from within IGT?

Nope, igt is meant to be bare metal testing so you don't have to drag
the entire winsys around (which in a wayland world, is not really good
for driver testing anyway, since everything is different). We use this
for our pre-merge ci for drm/i915.

> > But just to get it started
> > you can throw in entirely amdgpu specific subtests and just share some of
> > the test code.
> > -Daniel
>
>
> In general, I wasn't aware of this test suite, and it looks like it does what
> I test, among other things. I will definitely try to run with it, although the
> rescan part will not work, since plugging the device back in is on my TODO
> list and not in the scope of this patchset, so I will probably comment the
> re-scan section out while testing.

amd gem has been using libdrm-amd thus far iirc, but for things like
this I think it'd be worth at least considering switching. The display
team has already started to use some of the tests and contribute stuff
(I think the VRR testcase is from amd).
-Daniel

Andrey Grodzovsky Jan. 19, 2021, 6:18 p.m. UTC | #4
On 1/19/21 1:08 PM, Daniel Vetter wrote:
> On Tue, Jan 19, 2021 at 6:31 PM Andrey Grodzovsky
> <Andrey.Grodzovsky@amd.com> wrote:
>>
>> On 1/19/21 9:16 AM, Daniel Vetter wrote:
>>> On Mon, Jan 18, 2021 at 04:01:09PM -0500, Andrey Grodzovsky wrote:
>>>> [cover letter snipped]
>>> btw have you tried this out with some of the igts we have? core_hotunplug
>>> is the one I'm thinking of. Might be worth extending this for amdgpu-specific
>>> stuff (like running some batches on it while hotunplugging).
>> No, I mostly used just running glxgears while testing, which already covers
>> the exported/imported dma-buf case, plus a few manually hacked tests in the
>> libdrm amdgpu test suite.
>>
>>
>>> Since there's so many corner cases we need to test here (shared dma-buf,
>>> shared dma_fence) I think it would make sense to have a shared testcase
>>> across drivers.
>>
>> I'm not too familiar with IGT - is there an easy way to set up shared
>> dma-buf and fence use cases there, or do you mean I need to add them now?
> We do have test infrastructure for all of that, but the hotunplug test
> doesn't have that yet I think.
>
>>> Only specific thing would be some hooks to keep the gpu
>>> busy in some fashion while we yank the driver.
>>
>> Do you mean like starting X and some active rendering on top (like glxgears)
>> automatically from within IGT?
> Nope, igt is meant to be bare metal testing so you don't have to drag
> the entire winsys around (which in a wayland world, is not really good
> for driver testing anyway, since everything is different). We use this
> for our pre-merge ci for drm/i915.


So I keep it busy via X/glxgears, which is a manual operation. What you suggest,
then, is some client within IGT which opens the device and starts submitting
jobs (much like what the libdrm amdgpu tests already do)? And this part is the
amdgpu-specific code I just need to port from libdrm to here?

Andrey


Daniel Vetter Jan. 20, 2021, 9:05 a.m. UTC | #5
On Tue, Jan 19, 2021 at 01:18:15PM -0500, Andrey Grodzovsky wrote:
> 
> On 1/19/21 1:08 PM, Daniel Vetter wrote:
> > On Tue, Jan 19, 2021 at 6:31 PM Andrey Grodzovsky
> > <Andrey.Grodzovsky@amd.com> wrote:
> > > 
> > > On 1/19/21 9:16 AM, Daniel Vetter wrote:
> > > > On Mon, Jan 18, 2021 at 04:01:09PM -0500, Andrey Grodzovsky wrote:
> > > > > [cover letter snipped]
> > > > btw have you tried this out with some of the igts we have? core_hotunplug
> > > > is the one I'm thinking of. Might be worth extending this with amdgpu-specific
> > > > stuff (like running some batches on it while hotunplugging).
> > > No, I mostly just ran glxgears while testing, which already covers the
> > > exported/imported dma-buf case, plus a few manually hacked tests in the
> > > libdrm amdgpu test suite.
> > > 
> > > 
> > > > Since there are so many corner cases we need to test here (shared dma-buf,
> > > > shared dma_fence), I think it would make sense to have a shared testcase
> > > > across drivers.
> > > 
> > > I'm not too familiar with IGT; is there an easy way to set up shared dma-buf
> > > and fence use cases there, or do you mean I need to add them now?
> > We do have test infrastructure for all of that, but the hotunplug test
> > doesn't have that yet, I think.
> > 
> > > > The only driver-specific thing would be some hooks to keep the gpu
> > > > busy in some fashion while we yank the driver.
> > > 
> > > Do you mean like starting X and some active rendering on top (like glxgears)
> > > automatically from within IGT?
> > Nope, igt is meant for bare-metal testing, so you don't have to drag
> > the entire winsys around (which, in a wayland world, is not really good
> > for driver testing anyway, since everything is different). We use this
> > for our pre-merge CI for drm/i915.
> 
> 
> So I keep it busy via X/glxgears, which is a manual operation. What you suggest
> then is some client within IGT which opens the device and starts submitting jobs
> (which is much like what the libdrm amdgpu tests already do)? And this
> part is the amdgpu-specific code I just need to port from libdrm to here?

Yup. For i915 tests we have an entire library already for small workloads,
including some that just spin forever (useful for reset testing and could
also come in handy for unload testing).
-Daniel
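
For reference, a keep-busy client of the kind discussed above could look
roughly like the libdrm amdgpu basic tests: open a render node, create a
context, fill a small IB with NOP packets and submit it in a loop until
submission fails. This is only a sketch (the render node path is an
assumption, and the 16-dword NOP IB is borrowed from the libdrm test
suite), not code from this series:

#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>
#include <amdgpu.h>
#include <amdgpu_drm.h>

#define GFX_NOP 0xffff1000 /* PM4 NOP, as used by the libdrm amdgpu tests */

int main(void)
{
	amdgpu_device_handle dev;
	amdgpu_context_handle ctx;
	amdgpu_bo_handle bo;
	amdgpu_va_handle va;
	amdgpu_bo_list_handle list;
	struct amdgpu_bo_alloc_request req = {
		.alloc_size = 4096,
		.phys_alignment = 4096,
		.preferred_heap = AMDGPU_GEM_DOMAIN_GTT,
	};
	struct amdgpu_cs_ib_info ib = {};
	struct amdgpu_cs_request cs = {};
	uint32_t major, minor, *ptr;
	uint64_t addr;
	int i, fd;

	/* Assumed path of the secondary card's render node. */
	fd = open("/dev/dri/renderD129", O_RDWR);
	if (fd < 0 || amdgpu_device_initialize(fd, &major, &minor, &dev))
		return 1;

	if (amdgpu_cs_ctx_create(dev, &ctx) ||
	    amdgpu_bo_alloc(dev, &req, &bo) ||
	    amdgpu_bo_cpu_map(bo, (void **)&ptr) ||
	    amdgpu_va_range_alloc(dev, amdgpu_gpu_va_range_general, 4096,
				  4096, 0, &addr, &va, 0) ||
	    amdgpu_bo_va_op(bo, 0, 4096, addr, 0, AMDGPU_VA_OP_MAP) ||
	    amdgpu_bo_list_create(dev, 1, &bo, NULL, &list))
		return 1;

	for (i = 0; i < 16; i++)	/* fill the IB with NOPs */
		ptr[i] = GFX_NOP;

	ib.ib_mc_address = addr;
	ib.size = 16;
	cs.ip_type = AMDGPU_HW_IP_GFX;
	cs.resources = list;
	cs.number_of_ibs = 1;
	cs.ibs = &ib;

	/* Keep the GPU busy until submission fails, e.g. because the
	 * device was yanked underneath us (cleanup omitted). */
	while (amdgpu_cs_submit(ctx, 0, &cs, 1) == 0)
		;

	return 0;
}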

> 
> Andrey
> 
> 
> > 
> > > > But just to get it started,
> > > > you can throw in entirely amdgpu-specific subtests and just share some of
> > > > the test code.
> > > > -Daniel
> > > 
> > > In general, I wasn't aware of this test suite, and it looks like it does what
> > > I test, among other stuff.
> > > I will definitely try to run with it, although the rescan part will not work,
> > > as plugging the device back is on my TODO list and not part of the scope for
> > > this patchset, so I will probably comment the re-scan section out while
> > > testing.
> > amd gem has been using libdrm-amd thus far iirc, but for things like
> > this I think it'd be worth at least considering switching. The display
> > team has already started to use some of the tests and contribute stuff
> > (I think the VRR testcase is from amd).
> > -Daniel
> > 
> > > Andrey
> > > 
> > > 
> > > > > Andrey Grodzovsky (13):
> > > > >     drm/ttm: Remap all page faults to per process dummy page.
> > > > >     drm: Unmap the entire device address space on device unplug
> > > > >     drm/ttm: Expose ttm_tt_unpopulate for driver use
> > > > >     drm/sched: Cancel and flush all outstanding jobs before finish.
> > > > >     drm/amdgpu: Split amdgpu_device_fini into early and late
> > > > >     drm/amdgpu: Add early fini callback
> > > > >     drm/amdgpu: Register IOMMU topology notifier per device.
> > > > >     drm/amdgpu: Fix a bunch of sdma code crash post device unplug
> > > > >     drm/amdgpu: Remap all page faults to per process dummy page.
> > > > >     drm/amdgpu: Move some sysfs attrs creation to default_attr
> > > > >     drm/amdgpu: Guard against write accesses after device removal
> > > > >     drm/sched: Make timeout timer rearm conditional.
> > > > >     drm/amdgpu: Prevent any job recoveries after device is unplugged.
> > > > > 
> > > > > Luben Tuikov (1):
> > > > >     drm/scheduler: Job timeout handler returns status
> > > > > 
> > > > >    drivers/gpu/drm/amd/amdgpu/amdgpu.h               |  11 +-
> > > > >    drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c      |  17 +--
> > > > >    drivers/gpu/drm/amd/amdgpu/amdgpu_device.c        | 149 ++++++++++++++++++++--
> > > > >    drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c           |  20 ++-
> > > > >    drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c         |  15 ++-
> > > > >    drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c          |   2 +-
> > > > >    drivers/gpu/drm/amd/amdgpu/amdgpu_gart.h          |   1 +
> > > > >    drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c           |   9 ++
> > > > >    drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c       |  25 ++--
> > > > >    drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c           |  26 ++--
> > > > >    drivers/gpu/drm/amd/amdgpu/amdgpu_irq.h           |   3 +-
> > > > >    drivers/gpu/drm/amd/amdgpu/amdgpu_job.c           |  19 ++-
> > > > >    drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c           |  12 +-
> > > > >    drivers/gpu/drm/amd/amdgpu/amdgpu_object.c        |  10 ++
> > > > >    drivers/gpu/drm/amd/amdgpu/amdgpu_object.h        |   2 +
> > > > >    drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c           |  53 +++++---
> > > > >    drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h           |   3 +
> > > > >    drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c           |   1 +
> > > > >    drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c          |  70 ++++++++++
> > > > >    drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h          |  52 +-------
> > > > >    drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c           |  21 ++-
> > > > >    drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c            |   8 +-
> > > > >    drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c      |  14 +-
> > > > >    drivers/gpu/drm/amd/amdgpu/cik_ih.c               |   2 +-
> > > > >    drivers/gpu/drm/amd/amdgpu/cz_ih.c                |   2 +-
> > > > >    drivers/gpu/drm/amd/amdgpu/iceland_ih.c           |   2 +-
> > > > >    drivers/gpu/drm/amd/amdgpu/navi10_ih.c            |   2 +-
> > > > >    drivers/gpu/drm/amd/amdgpu/psp_v11_0.c            |  16 +--
> > > > >    drivers/gpu/drm/amd/amdgpu/psp_v12_0.c            |   8 +-
> > > > >    drivers/gpu/drm/amd/amdgpu/psp_v3_1.c             |   8 +-
> > > > >    drivers/gpu/drm/amd/amdgpu/si_ih.c                |   2 +-
> > > > >    drivers/gpu/drm/amd/amdgpu/tonga_ih.c             |   2 +-
> > > > >    drivers/gpu/drm/amd/amdgpu/vega10_ih.c            |   2 +-
> > > > >    drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |  12 +-
> > > > >    drivers/gpu/drm/amd/include/amd_shared.h          |   2 +
> > > > >    drivers/gpu/drm/drm_drv.c                         |   3 +
> > > > >    drivers/gpu/drm/etnaviv/etnaviv_sched.c           |  10 +-
> > > > >    drivers/gpu/drm/lima/lima_sched.c                 |   4 +-
> > > > >    drivers/gpu/drm/panfrost/panfrost_job.c           |   9 +-
> > > > >    drivers/gpu/drm/scheduler/sched_main.c            |  18 ++-
> > > > >    drivers/gpu/drm/ttm/ttm_bo_vm.c                   |  82 +++++++++++-
> > > > >    drivers/gpu/drm/ttm/ttm_tt.c                      |   1 +
> > > > >    drivers/gpu/drm/v3d/v3d_sched.c                   |  32 ++---
> > > > >    include/drm/gpu_scheduler.h                       |  17 ++-
> > > > >    include/drm/ttm/ttm_bo_api.h                      |   2 +
> > > > >    45 files changed, 583 insertions(+), 198 deletions(-)
> > > > > 
> > > > > --
> > > > > 2.7.4
> > > > > 
> > 
> >
Andrey Grodzovsky Jan. 20, 2021, 2:19 p.m. UTC | #6
On 1/20/21 4:05 AM, Daniel Vetter wrote:
> On Tue, Jan 19, 2021 at 01:18:15PM -0500, Andrey Grodzovsky wrote:
>> On 1/19/21 1:08 PM, Daniel Vetter wrote:
>>> On Tue, Jan 19, 2021 at 6:31 PM Andrey Grodzovsky
>>> <Andrey.Grodzovsky@amd.com> wrote:
>>>> On 1/19/21 9:16 AM, Daniel Vetter wrote:
>>>>> On Mon, Jan 18, 2021 at 04:01:09PM -0500, Andrey Grodzovsky wrote:
>>>>>> Until now extracting a card either by physical extraction (e.g. eGPU with
>>>>>> thunderbolt connection or by emulation through  syfs -> /sys/bus/pci/devices/device_id/remove)
>>>>>> would cause random crashes in user apps. The random crashes in apps were
>>>>>> mostly due to the app having mapped a device backed BO into its address
>>>>>> space was still trying to access the BO while the backing device was gone.
>>>>>> To answer this first problem Christian suggested to fix the handling of mapped
>>>>>> memory in the clients when the device goes away by forcibly unmap all buffers the
>>>>>> user processes has by clearing their respective VMAs mapping the device BOs.
>>>>>> Then when the VMAs try to fill in the page tables again we check in the fault
>>>>>> handlerif the device is removed and if so, return an error. This will generate a
>>>>>> SIGBUS to the application which can then cleanly terminate.This indeed was done
>>>>>> but this in turn created a problem of kernel OOPs were the OOPSes were due to the
>>>>>> fact that while the app was terminating because of the SIGBUSit would trigger use
>>>>>> after free in the driver by calling to accesses device structures that were already
>>>>>> released from the pci remove sequence.This was handled by introducing a 'flush'
>>>>>> sequence during device removal were we wait for drm file reference to drop to 0
>>>>>> meaning all user clients directly using this device terminated.
>>>>>>
>>>>>> v2:
>>>>>> Based on discussions in the mailing list with Daniel and Pekka [1] and based on the document
>>>>>> produced by Pekka from those discussions [2] the whole approach with returning SIGBUS and
>>>>>> waiting for all user clients having CPU mapping of device BOs to die was dropped.
>>>>>> Instead as per the document suggestion the device structures are kept alive until
>>>>>> the last reference to the device is dropped by user client and in the meanwhile all existing and new CPU mappings of the BOs
>>>>>> belonging to the device directly or by dma-buf import are rerouted to per user
>>>>>> process dummy rw page.Also, I skipped the 'Requirements for KMS UAPI' section of [2]
>>>>>> since i am trying to get the minimal set of requirements that still give useful solution
>>>>>> to work and this is the'Requirements for Render and Cross-Device UAPI' section and so my
>>>>>> test case is removing a secondary device, which is render only and is not involved
>>>>>> in KMS.
>>>>>>
>>>>>> v3:
>>>>>> More updates following comments from v2 such as removing loop to find DRM file when rerouting
>>>>>> page faults to dummy page,getting rid of unnecessary sysfs handling refactoring and moving
>>>>>> prevention of GPU recovery post device unplug from amdgpu to scheduler layer.
>>>>>> On top of that added unplug support for the IOMMU enabled system.
>>>>>>
>>>>>> v4:
>>>>>> Drop last sysfs hack and use sysfs default attribute.
>>>>>> Guard against write accesses after device removal to avoid modifying released memory.
>>>>>> Update dummy pages handling to on demand allocation and release through drm managed framework.
>>>>>> Add return value to scheduler job TO handler (by Luben Tuikov) and use this in amdgpu for prevention
>>>>>> of GPU recovery post device unplug
>>>>>> Also rebase on top of drm-misc-mext instead of amd-staging-drm-next
>>>>>>
>>>>>> With these patches I am able to gracefully remove the secondary card using sysfs remove hook while glxgears
>>>>>> is running off of secondary card (DRI_PRIME=1) without kernel oopses or hangs and keep working
>>>>>> with the primary card or soft reset the device without hangs or oopses
>>>>>>
>>>>>> TODOs for followup work:
>>>>>> Convert AMDGPU code to use devm (for hw stuff) and drmm (for sw stuff and allocations) (Daniel)
>>>>>> Support plugging the secondary device back after unplug - currently still experiencing HW error on plugging back.
>>>>>> Add support for 'Requirements for KMS UAPI' section of [2] - unplugging primary, display connected card.
>>>>>>
>>>>>> [1] - Discussions during v3 of the patchset https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.spinics.net%2Flists%2Famd-gfx%2Fmsg55576.html&amp;data=04%7C01%7CAndrey.Grodzovsky%40amd.com%7Cbe51719dbdac41f5176b08d8bd2279ec%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637467303085005502%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&amp;sdata=T4JLiSl7m4R%2FhcfcAxomY%2FMJ8QiTHaJ%2FJaqNZVT%2FDsk%3D&amp;reserved=0
>>>>>> [2] - drm/doc: device hot-unplug for userspace https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.spinics.net%2Flists%2Fdri-devel%2Fmsg259755.html&amp;data=04%7C01%7CAndrey.Grodzovsky%40amd.com%7Cbe51719dbdac41f5176b08d8bd2279ec%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637467303085005502%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&amp;sdata=qitlHw6tqm4eGRstKccgh8zIPgILbS%2FJUa5yZGmSQcU%3D&amp;reserved=0
>>>>>> [3] - Related gitlab ticket https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgitlab.freedesktop.org%2Fdrm%2Famd%2F-%2Fissues%2F1081&amp;data=04%7C01%7CAndrey.Grodzovsky%40amd.com%7Cbe51719dbdac41f5176b08d8bd2279ec%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637467303085005502%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&amp;sdata=UzOXP6bHYY6f7MCs4ZbSSvfY0DJ%2FEVPeIqedAi%2BZGG8%3D&amp;reserved=0
>>>>> btw have you tried this out with some of the igts we have? core_hotunplug
>>>>> is the one I'm thinking of. Might be worth to extend this for amdgpu
>>>>> specific stuff (like run some batches on it while hotunplugging).
>>>> No, I mostly used just running glxgears while testing which covers already
>>>> exported/imported dma-buf case and a few manually hacked tests in libdrm amdgpu
>>>> test suite
>>>>
>>>>
>>>>> Since there's so many corner cases we need to test here (shared dma-buf,
>>>>> shared dma_fence) I think it would make sense to have a shared testcase
>>>>> across drivers.
>>>> Not familiar with IGT too much, is there an easy way to setup shared dma bufs
>>>> and fences
>>>> use cases there or you mean I need to add them now ?
>>> We do have test infrastructure for all of that, but the hotunplug test
>>> doesn't have that yet I think.
>>>
>>>>> Only specific thing would be some hooks to keep the gpu
>>>>> busy in some fashion while we yank the driver.
>>>> Do you mean like staring X and some active rendering on top (like glxgears)
>>>> automatically from within IGT ?
>>> Nope, igt is meant to be bare metal testing so you don't have to drag
>>> the entire winsys around (which in a wayland world, is not really good
>>> for driver testing anyway, since everything is different). We use this
>>> for our pre-merge ci for drm/i915.
>>
>> So I keep it busy via X/glxgears, which is a manual operation. What you suggest
>> then is some client within IGT which opens the device and starts submitting jobs
>> (which is much like what the libdrm amdgpu tests already do)? And this
>> part is the amdgpu-specific code I just need to port from libdrm to here?
> > Yup. For i915 tests we have an entire library already for small workloads,
> > including some that just spin forever (useful for reset testing and could
> > also come in handy for unload testing).
> > -Daniel


Does it mean I would have to drag in the entire infrastructure code from
within libdrm amdgpu that allows for command submission through our
IOCTLs?

Andrey

Daniel Vetter Jan. 20, 2021, 3:59 p.m. UTC | #7
On Wed, Jan 20, 2021 at 3:20 PM Andrey Grodzovsky
<Andrey.Grodzovsky@amd.com> wrote:
>
> [...]
>
>
> Does it mean I would have to drag in the entire infrastructure code from
> within libdrm amdgpu that allows for command submission through our
> IOCTLs?

No, it's perfectly fine to use libdrm in igt tests, we do that too. I
just mean we have some additional helpers to submit specific workloads
for intel gpus, like rendercopy to move data with the 3d engine (just
using the copy engines isn't good enough sometimes for testing), or
the special hanging batchbuffers we use for reset testing, or in
general for having precise control over race conditions and things
like that.

One thing that was somewhat annoying for i915, but shouldn't be a
problem for amdgpu, is that igt builds on top of libdrm-intel, and
libdrm-intel doesn't build on arm, so we carry stub functions for it.
Shouldn't be a problem for you.
-Daniel
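
To make the shape of this concrete: an amdgpu-specific hotunplug subtest
could be wired into igt roughly as below. A sketch only, not code from
igt or this series; submit_nops_until_failure() and unplug_device() are
hypothetical placeholders for the keep-busy loop sketched earlier in the
thread and for a sysfs-based remove.

#include "igt.h"

/* Hypothetical helpers: a CS loop as sketched earlier in the thread and
 * a sysfs-based unplug (echo 1 > /sys/bus/pci/devices/<bdf>/remove). */
static void submit_nops_until_failure(int fd);
static void unplug_device(void);

igt_main
{
	int fd = -1;

	igt_fixture
		fd = drm_open_driver(DRIVER_AMDGPU);

	igt_describe("Unplug the device while a forked helper keeps "
		     "submitting work to it.");
	igt_subtest("hotunplug-with-cs") {
		struct igt_helper_process keeper = {};

		/* Keep the gpu busy from a separate process. */
		igt_fork_helper(&keeper)
			submit_nops_until_failure(fd);

		/* Yank the device underneath it, then reap the helper. */
		unplug_device();
		igt_stop_helper(&keeper);
	}

	igt_fixture
		close(fd);
}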


Andrey Grodzovsky Feb. 8, 2021, 5:59 a.m. UTC | #8
On 1/20/21 10:59 AM, Daniel Vetter wrote:
> On Wed, Jan 20, 2021 at 3:20 PM Andrey Grodzovsky
> <Andrey.Grodzovsky@amd.com> wrote:
>> [...]
>>
>> Does it mean I would have to drag in the entire infrastructure code from
>> within libdrm amdgpu that allows for command submission through our
>> IOCTLs?
> No, it's perfectly fine to use libdrm in igt tests, we do that too. I
> just mean we have some additional helpers to submit specific workloads
> for intel gpus, like rendercopy to move data with the 3d engine (just
> using the copy engines isn't good enough sometimes for testing), or
> the special hanging batchbuffers we use for reset testing, or in
> general for having precise control over race conditions and things
> like that.
>
> One thing that was somewhat annoying for i915, but shouldn't be a
> problem for amdgpu, is that igt builds on top of libdrm-intel, and
> libdrm-intel doesn't build on arm, so we carry stub functions for it.
> Shouldn't be a problem for you.
> -Daniel


Tested with the igt hotunplug test. Passed unbind-rebind, unplug-rescan,
hotunbind-rebind and hotunplug-rescan when disabling the rescan part, as I
don't support plug-back for now. Also added command submission for amdgpu.
Attached a draft of submitting a workload while unbinding the driver or
simulating detach. Caught 2 issues with unplug if command submission is in
flight during unplug: an unsignaled fence causing a hang in amdgpu_cs_sync,
and hitting a BUG_ON in gfx_v9_0_ring_emit_patch_cond_exec (which is
expected, I guess).
I guess glxgears submits commands at a much slower rate, so this was missed.
Is that what you meant for this test?
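
For illustration, the unplug-while-submitting sequence that triggered the
two issues above boils down to something like the sketch below; the PCI
address 0000:03:00.0 is an example placeholder, and submit_loop() stands
in for the NOP submission loop sketched earlier in the thread.

#include <fcntl.h>
#include <pthread.h>
#include <unistd.h>

static void *submit_loop(void *arg)
{
	/* Hypothetical: loop amdgpu CS submissions until they fail. */
	return NULL;
}

int main(void)
{
	pthread_t thread;
	int fd;

	pthread_create(&thread, NULL, submit_loop, NULL);
	usleep(100 * 1000);	/* let a few submissions get in flight */

	/* Emulate hot unplug via sysfs; triggers the pci remove sequence. */
	fd = open("/sys/bus/pci/devices/0000:03:00.0/remove", O_WRONLY);
	if (fd >= 0) {
		if (write(fd, "1", 1) != 1)
			return 1;
		close(fd);
	}

	pthread_join(thread, NULL);
	return 0;
}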

Andrey


Daniel Vetter Feb. 8, 2021, 7:27 a.m. UTC | #9
On Mon, Feb 8, 2021 at 6:59 AM Andrey Grodzovsky
<Andrey.Grodzovsky@amd.com> wrote:
>
>
> On 1/20/21 10:59 AM, Daniel Vetter wrote:
> > On Wed, Jan 20, 2021 at 3:20 PM Andrey Grodzovsky
> > <Andrey.Grodzovsky@amd.com> wrote:
> >>
> >> On 1/20/21 4:05 AM, Daniel Vetter wrote:
> >>> On Tue, Jan 19, 2021 at 01:18:15PM -0500, Andrey Grodzovsky wrote:
> >>>> On 1/19/21 1:08 PM, Daniel Vetter wrote:
> >>>>> On Tue, Jan 19, 2021 at 6:31 PM Andrey Grodzovsky
> >>>>> <Andrey.Grodzovsky@amd.com> wrote:
> >>>>>> On 1/19/21 9:16 AM, Daniel Vetter wrote:
> >>>>>>> On Mon, Jan 18, 2021 at 04:01:09PM -0500, Andrey Grodzovsky wrote:
> >>>>>>>> Until now extracting a card either by physical extraction (e.g. eGPU with
> >>>>>>>> thunderbolt connection or by emulation through  syfs -> /sys/bus/pci/devices/device_id/remove)
> >>>>>>>> would cause random crashes in user apps. The random crashes in apps were
> >>>>>>>> mostly due to the app having mapped a device backed BO into its address
> >>>>>>>> space was still trying to access the BO while the backing device was gone.
> >>>>>>>> To answer this first problem Christian suggested to fix the handling of mapped
> >>>>>>>> memory in the clients when the device goes away by forcibly unmap all buffers the
> >>>>>>>> user processes has by clearing their respective VMAs mapping the device BOs.
> >>>>>>>> Then when the VMAs try to fill in the page tables again we check in the fault
> >>>>>>>> handlerif the device is removed and if so, return an error. This will generate a
> >>>>>>>> SIGBUS to the application which can then cleanly terminate.This indeed was done
> >>>>>>>> but this in turn created a problem of kernel OOPs were the OOPSes were due to the
> >>>>>>>> fact that while the app was terminating because of the SIGBUSit would trigger use
> >>>>>>>> after free in the driver by calling to accesses device structures that were already
> >>>>>>>> released from the pci remove sequence.This was handled by introducing a 'flush'
> >>>>>>>> sequence during device removal were we wait for drm file reference to drop to 0
> >>>>>>>> meaning all user clients directly using this device terminated.
> >>>>>>>>
> >>>>>>>> v2:
> >>>>>>>> Based on discussions in the mailing list with Daniel and Pekka [1] and based on the document
> >>>>>>>> produced by Pekka from those discussions [2] the whole approach with returning SIGBUS and
> >>>>>>>> waiting for all user clients having CPU mapping of device BOs to die was dropped.
> >>>>>>>> Instead as per the document suggestion the device structures are kept alive until
> >>>>>>>> the last reference to the device is dropped by user client and in the meanwhile all existing and new CPU mappings of the BOs
> >>>>>>>> belonging to the device directly or by dma-buf import are rerouted to per user
> >>>>>>>> process dummy rw page.Also, I skipped the 'Requirements for KMS UAPI' section of [2]
> >>>>>>>> since i am trying to get the minimal set of requirements that still give useful solution
> >>>>>>>> to work and this is the'Requirements for Render and Cross-Device UAPI' section and so my
> >>>>>>>> test case is removing a secondary device, which is render only and is not involved
> >>>>>>>> in KMS.
> >>>>>>>>
> >>>>>>>> v3:
> >>>>>>>> More updates following comments from v2 such as removing loop to find DRM file when rerouting
> >>>>>>>> page faults to dummy page,getting rid of unnecessary sysfs handling refactoring and moving
> >>>>>>>> prevention of GPU recovery post device unplug from amdgpu to scheduler layer.
> >>>>>>>> On top of that added unplug support for the IOMMU enabled system.
> >>>>>>>>
> >>>>>>>> v4:
> >>>>>>>> Drop last sysfs hack and use sysfs default attribute.
> >>>>>>>> Guard against write accesses after device removal to avoid modifying released memory.
> >>>>>>>> Update dummy pages handling to on demand allocation and release through drm managed framework.
> >>>>>>>> Add return value to scheduler job TO handler (by Luben Tuikov) and use this in amdgpu for prevention
> >>>>>>>> of GPU recovery post device unplug
> >>>>>>>> Also rebase on top of drm-misc-mext instead of amd-staging-drm-next
> >>>>>>>>
> >>>>>>>> With these patches I am able to gracefully remove the secondary card using sysfs remove hook while glxgears
> >>>>>>>> is running off of secondary card (DRI_PRIME=1) without kernel oopses or hangs and keep working
> >>>>>>>> with the primary card or soft reset the device without hangs or oopses
> >>>>>>>>
> >>>>>>>> TODOs for followup work:
> >>>>>>>> Convert AMDGPU code to use devm (for hw stuff) and drmm (for sw stuff and allocations) (Daniel)
> >>>>>>>> Support plugging the secondary device back after unplug - currently still experiencing HW error on plugging back.
> >>>>>>>> Add support for 'Requirements for KMS UAPI' section of [2] - unplugging primary, display connected card.
> >>>>>>>>
> >>>>>>>> [1] - Discussions during v3 of the patchset https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.spinics.net%2Flists%2Famd-gfx%2Fmsg55576.html&amp;data=04%7C01%7CAndrey.Grodzovsky%40amd.com%7Cf3fc3c7b55df40e165f408d8bd5c7364%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637467552072067767%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&amp;sdata=fjgP9YubHCrILFxWmpVGSmurTJHkWw%2Bv4okyjSNsPxE%3D&amp;reserved=0
> >>>>>>>> [2] - drm/doc: device hot-unplug for userspace https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.spinics.net%2Flists%2Fdri-devel%2Fmsg259755.html&amp;data=04%7C01%7CAndrey.Grodzovsky%40amd.com%7Cf3fc3c7b55df40e165f408d8bd5c7364%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637467552072067767%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&amp;sdata=11PYyAhOjLDEiNNho8WaMB%2FLkA5AuxK6g9XpbNiPIec%3D&amp;reserved=0
> >>>>>>>> [3] - Related gitlab ticket https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgitlab.freedesktop.org%2Fdrm%2Famd%2F-%2Fissues%2F1081&amp;data=04%7C01%7CAndrey.Grodzovsky%40amd.com%7Cf3fc3c7b55df40e165f408d8bd5c7364%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637467552072077759%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&amp;sdata=q%2F%2FXDKm9LOgw9mq4Ts4JoHR8Ysd8KmoM0NGLD98MsFw%3D&amp;reserved=0
> >>>>>>> btw have you tried this out with some of the igts we have? core_hotunplug
> >>>>>>> is the one I'm thinking of. Might be worth to extend this for amdgpu
> >>>>>>> specific stuff (like run some batches on it while hotunplugging).
> >>>>>> No, I mostly used just running glxgears while testing which covers already
> >>>>>> exported/imported dma-buf case and a few manually hacked tests in libdrm amdgpu
> >>>>>> test suite
> >>>>>>
> >>>>>>
> >>>>>>> Since there's so many corner cases we need to test here (shared dma-buf,
> >>>>>>> shared dma_fence) I think it would make sense to have a shared testcase
> >>>>>>> across drivers.
> >>>>>> Not familiar with IGT too much, is there an easy way to setup shared dma bufs
> >>>>>> and fences
> >>>>>> use cases there or you mean I need to add them now ?
> >>>>> We do have test infrastructure for all of that, but the hotunplug test
> >>>>> doesn't have that yet I think.
> >>>>>
> >>>>>>> Only specific thing would be some hooks to keep the gpu
> >>>>>>> busy in some fashion while we yank the driver.
> >>>>>> Do you mean like starting X and some active rendering on top (like glxgears)
> >>>>>> automatically from within IGT ?
> >>>>> Nope, igt is meant to be bare metal testing so you don't have to drag
> >>>>> the entire winsys around (which in a wayland world, is not really good
> >>>>> for driver testing anyway, since everything is different). We use this
> >>>>> for our pre-merge ci for drm/i915.
> >>>> So I keep it busy via X/glxgears, which is a manual operation. What you suggest
> >>>> then is some client within IGT which opens the device and starts submitting jobs
> >>>> (which is much like what the libdrm amdgpu tests already do) ? And this
> >>>> part is the amdgpu specific code I just need to port from libdrm to here ?
> >>> Yup. For i915 tests we have an entire library already for small workloads,
> >>> including some that just spin forever (useful for reset testing and could
> >>> also come handy for unload testing).
> >>> -Daniel
> >>
> >> Does it mean I would have to drag in the entire infrastructure code from
> >> within libdrm amdgpu code that allows for command submissions through
> >> our IOCTLs ?
> > No it's perfectly fine to use libdrm in igt tests, we do that too. I
> > just mean we have some additional helpers to submit specific workloads
> > for intel gpu, like rendercpy to move data with the 3d engine (just
> > using copy engines only isn't good enough sometimes for testing), or
> > the special hanging batchbuffers we use for reset testing, or in
> > general for having precise control over race conditions and things
> > like that.
> >
> > One thing that was somewhat annoying for i915 but shouldn't be a
> > problem for amdgpu is that igt builds on intel. So we have stub
> > functions for libdrm-intel, since libdrm-intel doesn't build on arm.
> > Shouldn't be a problem for you.
> > -Daniel
>
>
> Tested with the igt hot-unplug test. Passed unbind_rebind, unplug-rescan,
> hot-unbind-rebind and hotunplug-rescan after disabling the rescan part, as I
> don't support plug-back for now. Also added command submission for amdgpu.
> Attached a draft of submitting a workload while unbinding the driver or
> simulating detach. Caught 2 issues with unplug if a command submission is in
> flight during unplug - an unsignaled fence causing a hang in amdgpu_cs_sync,
> and hitting a BUG_ON in gfx_v9_0_ring_emit_patch_cond_exec (which is expected,
> I guess).
> I guess glxgears' command submissions come at a much slower rate, so this was
> missed. Is that what you meant for this test ?

Yup. Would be good if you can submit this one for inclusion.
-Daniel
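
To make the shape of such a test concrete, here is a minimal standalone
sketch of the flow described above. It is not the actual IGT subtest: the
render node path and PCI address are made-up placeholders, and the
submission is stubbed with a drmGetVersion() call where a real test would
push command buffers through the amdgpu CS ioctl.

    /* Sketch: keep the device busy from a worker thread while the PCI
     * device is removed through sysfs, then check nothing oopses. */
    #include <fcntl.h>
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <unistd.h>
    #include <xf86drm.h>

    static atomic_bool stop;

    /* Stand-in for a real command submission: any driver ioctl exercises
     * the fd across the unplug. A real test would build and submit
     * command buffers (e.g. via libdrm's amdgpu_cs_submit()). */
    static void submit_nop(int drm_fd)
    {
            drmVersionPtr v = drmGetVersion(drm_fd);

            if (v)
                    drmFreeVersion(v);
    }

    static void *busy_thread(void *arg)
    {
            int drm_fd = *(int *)arg;

            while (!atomic_load(&stop))
                    submit_nop(drm_fd);
            return NULL;
    }

    int main(void)
    {
            /* Secondary render node and its PCI address: placeholders. */
            int drm_fd = open("/dev/dri/renderD129", O_RDWR);
            int sysfs = open("/sys/bus/pci/devices/0000:03:00.0/remove", O_WRONLY);
            pthread_t thr;

            pthread_create(&thr, NULL, busy_thread, &drm_fd);
            usleep(100 * 1000);     /* let submissions get in flight */
            write(sysfs, "1", 1);   /* remove the device mid-submission */
            sleep(1);               /* post-unplug calls must fail, not oops */
            atomic_store(&stop, true);
            pthread_join(thr, NULL);
            close(sysfs);
            close(drm_fd);
            return 0;
    }

The interesting assertions in a real test would be that submissions
started before the removal either complete or fail with a plain error, and
that dmesg stays free of oopses - exactly the two failure modes (the
amdgpu_cs_sync hang and the gfx_v9_0_ring_emit_patch_cond_exec BUG_ON)
reported above.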

>
> Andrey
>
>
> >
> >
> >> Andrey
> >>
> >>>> Andrey
> >>>>
> >>>>
> >>>>>>> But just to get it started
> >>>>>>> you can throw in entirely amdgpu specific subtests and just share some of
> >>>>>>> the test code.
> >>>>>>> -Daniel
> >>>>>> In general, I wasn't aware of this test suite, and it looks like it does what
> >>>>>> I test, among other stuff.
> >>>>>> I will definitely try to run with it, although the rescan part will not work,
> >>>>>> as plugging the device back is in my TODO list and not part of the scope for
> >>>>>> this patchset, so I will probably comment the re-scan section out while testing.
> >>>>> amd gem has been using libdrm-amd thus far iirc, but for things like
> >>>>> this I think it'd be worth to at least consider switching. Display
> >>>>> team has already started to use some of the test and contribute stuff
> >>>>> (I think the VRR testcase is from amd).
> >>>>> -Daniel
> >>>>>
> >>>>>> Andrey
> >>>>>>
> >>>>>>
> >>>>>>>> Andrey Grodzovsky (13):
> >>>>>>>>       drm/ttm: Remap all page faults to per process dummy page.
> >>>>>>>>       drm: Unmap the entire device address space on device unplug
> >>>>>>>>       drm/ttm: Expose ttm_tt_unpopulate for driver use
> >>>>>>>>       drm/sched: Cancel and flush all outstanding jobs before finish.
> >>>>>>>>       drm/amdgpu: Split amdgpu_device_fini into early and late
> >>>>>>>>       drm/amdgpu: Add early fini callback
> >>>>>>>>       drm/amdgpu: Register IOMMU topology notifier per device.
> >>>>>>>>       drm/amdgpu: Fix a bunch of sdma code crash post device unplug
> >>>>>>>>       drm/amdgpu: Remap all page faults to per process dummy page.
> >>>>>>>>       drm/amdgpu: Move some sysfs attrs creation to default_attr
> >>>>>>>>       drm/amdgpu: Guard against write accesses after device removal
> >>>>>>>>       drm/sched: Make timeout timer rearm conditional.
> >>>>>>>>       drm/amdgpu: Prevent any job recoveries after device is unplugged.
> >>>>>>>>
> >>>>>>>> Luben Tuikov (1):
> >>>>>>>>       drm/scheduler: Job timeout handler returns status
> >>>>>>>>
> >>>>>>>>      drivers/gpu/drm/amd/amdgpu/amdgpu.h               |  11 +-
> >>>>>>>>      drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c      |  17 +--
> >>>>>>>>      drivers/gpu/drm/amd/amdgpu/amdgpu_device.c        | 149 ++++++++++++++++++++--
> >>>>>>>>      drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c           |  20 ++-
> >>>>>>>>      drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c         |  15 ++-
> >>>>>>>>      drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c          |   2 +-
> >>>>>>>>      drivers/gpu/drm/amd/amdgpu/amdgpu_gart.h          |   1 +
> >>>>>>>>      drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c           |   9 ++
> >>>>>>>>      drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c       |  25 ++--
> >>>>>>>>      drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c           |  26 ++--
> >>>>>>>>      drivers/gpu/drm/amd/amdgpu/amdgpu_irq.h           |   3 +-
> >>>>>>>>      drivers/gpu/drm/amd/amdgpu/amdgpu_job.c           |  19 ++-
> >>>>>>>>      drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c           |  12 +-
> >>>>>>>>      drivers/gpu/drm/amd/amdgpu/amdgpu_object.c        |  10 ++
> >>>>>>>>      drivers/gpu/drm/amd/amdgpu/amdgpu_object.h        |   2 +
> >>>>>>>>      drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c           |  53 +++++---
> >>>>>>>>      drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h           |   3 +
> >>>>>>>>      drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c           |   1 +
> >>>>>>>>      drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c          |  70 ++++++++++
> >>>>>>>>      drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h          |  52 +-------
> >>>>>>>>      drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c           |  21 ++-
> >>>>>>>>      drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c            |   8 +-
> >>>>>>>>      drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c      |  14 +-
> >>>>>>>>      drivers/gpu/drm/amd/amdgpu/cik_ih.c               |   2 +-
> >>>>>>>>      drivers/gpu/drm/amd/amdgpu/cz_ih.c                |   2 +-
> >>>>>>>>      drivers/gpu/drm/amd/amdgpu/iceland_ih.c           |   2 +-
> >>>>>>>>      drivers/gpu/drm/amd/amdgpu/navi10_ih.c            |   2 +-
> >>>>>>>>      drivers/gpu/drm/amd/amdgpu/psp_v11_0.c            |  16 +--
> >>>>>>>>      drivers/gpu/drm/amd/amdgpu/psp_v12_0.c            |   8 +-
> >>>>>>>>      drivers/gpu/drm/amd/amdgpu/psp_v3_1.c             |   8 +-
> >>>>>>>>      drivers/gpu/drm/amd/amdgpu/si_ih.c                |   2 +-
> >>>>>>>>      drivers/gpu/drm/amd/amdgpu/tonga_ih.c             |   2 +-
> >>>>>>>>      drivers/gpu/drm/amd/amdgpu/vega10_ih.c            |   2 +-
> >>>>>>>>      drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |  12 +-
> >>>>>>>>      drivers/gpu/drm/amd/include/amd_shared.h          |   2 +
> >>>>>>>>      drivers/gpu/drm/drm_drv.c                         |   3 +
> >>>>>>>>      drivers/gpu/drm/etnaviv/etnaviv_sched.c           |  10 +-
> >>>>>>>>      drivers/gpu/drm/lima/lima_sched.c                 |   4 +-
> >>>>>>>>      drivers/gpu/drm/panfrost/panfrost_job.c           |   9 +-
> >>>>>>>>      drivers/gpu/drm/scheduler/sched_main.c            |  18 ++-
> >>>>>>>>      drivers/gpu/drm/ttm/ttm_bo_vm.c                   |  82 +++++++++++-
> >>>>>>>>      drivers/gpu/drm/ttm/ttm_tt.c                      |   1 +
> >>>>>>>>      drivers/gpu/drm/v3d/v3d_sched.c                   |  32 ++---
> >>>>>>>>      include/drm/gpu_scheduler.h                       |  17 ++-
> >>>>>>>>      include/drm/ttm/ttm_bo_api.h                      |   2 +
> >>>>>>>>      45 files changed, 583 insertions(+), 198 deletions(-)
> >>>>>>>>
> >>>>>>>> --
> >>>>>>>> 2.7.4
> >>>>>>>>
> >
> >
Andrey Grodzovsky Feb. 9, 2021, 4:01 a.m. UTC | #10
On 2/8/21 2:27 AM, Daniel Vetter wrote:
> On Mon, Feb 8, 2021 at 6:59 AM Andrey Grodzovsky
> <Andrey.Grodzovsky@amd.com> wrote:
>>
>> On 1/20/21 10:59 AM, Daniel Vetter wrote:
>>> On Wed, Jan 20, 2021 at 3:20 PM Andrey Grodzovsky
>>> <Andrey.Grodzovsky@amd.com> wrote:
>>>> On 1/20/21 4:05 AM, Daniel Vetter wrote:
>>>>> On Tue, Jan 19, 2021 at 01:18:15PM -0500, Andrey Grodzovsky wrote:
>>>>>> On 1/19/21 1:08 PM, Daniel Vetter wrote:
>>>>>>> On Tue, Jan 19, 2021 at 6:31 PM Andrey Grodzovsky
>>>>>>> <Andrey.Grodzovsky@amd.com> wrote:
>>>>>>>> On 1/19/21 9:16 AM, Daniel Vetter wrote:
>>>>>>>>> On Mon, Jan 18, 2021 at 04:01:09PM -0500, Andrey Grodzovsky wrote:
>>> [snip - cover letter and earlier discussion quoted in full above]
>> Tested with the igt hot-unplug test. Passed unbind_rebind, unplug-rescan,
>> hot-unbind-rebind and hotunplug-rescan after disabling the rescan part, as I
>> don't support plug-back for now. Also added command submission for amdgpu.
>> Attached a draft of submitting a workload while unbinding the driver or
>> simulating detach. Caught 2 issues with unplug if a command submission is in
>> flight during unplug - an unsignaled fence causing a hang in amdgpu_cs_sync,
>> and hitting a BUG_ON in gfx_v9_0_ring_emit_patch_cond_exec (which is expected,
>> I guess).
>> I guess glxgears' command submissions come at a much slower rate, so this was
>> missed. Is that what you meant for this test ?
> Yup. Would be good if you can submit this one for inclusion.
> -Daniel


Will do, together with the exported dma-buf test once I write it.

P.S. How am I supposed to do the exported fence test? Exporting a fence from
device A, importing it into device B, unplugging device A and then signaling
the fence from device B - isn't that supposed to call a fence cb which was
registered by the exporter, which by now is dead, and hence cause a
'use after free' ?

Andrey

> [snip - remainder of quoted thread]
Daniel Vetter Feb. 9, 2021, 9:50 a.m. UTC | #11
On Mon, Feb 08, 2021 at 11:01:14PM -0500, Andrey Grodzovsky wrote:
> 
> On 2/8/21 2:27 AM, Daniel Vetter wrote:
> > On Mon, Feb 8, 2021 at 6:59 AM Andrey Grodzovsky
> > <Andrey.Grodzovsky@amd.com> wrote:
> > > 
> > > On 1/20/21 10:59 AM, Daniel Vetter wrote:
> > > > On Wed, Jan 20, 2021 at 3:20 PM Andrey Grodzovsky
> > > > <Andrey.Grodzovsky@amd.com> wrote:
> > > > > On 1/20/21 4:05 AM, Daniel Vetter wrote:
> > > > > > On Tue, Jan 19, 2021 at 01:18:15PM -0500, Andrey Grodzovsky wrote:
> > > > > > > On 1/19/21 1:08 PM, Daniel Vetter wrote:
> > > > > > > > On Tue, Jan 19, 2021 at 6:31 PM Andrey Grodzovsky
> > > > > > > > <Andrey.Grodzovsky@amd.com> wrote:
> > > > > > > > > On 1/19/21 9:16 AM, Daniel Vetter wrote:
> > > > > > > > > > On Mon, Jan 18, 2021 at 04:01:09PM -0500, Andrey Grodzovsky wrote:
> > > [snip - cover letter and earlier discussion quoted in full above]
> > Yup. Would be good if you can submit this one for inclusion.
> > -Daniel
> 
> 
> Will do, together with the exported dma-buf test once I write it.
> 
> P.S. How am I supposed to do the exported fence test? Exporting a fence from
> device A, importing it into device B, unplugging device A and then signaling
> the fence from device B - isn't that supposed to call a fence cb which was
> registered by the exporter, which by now is dead, and hence cause a
> 'use after free' ?

Yeah, in the end we'd need 2 hw devices for testing full fence
functionality. A useful intermediate step would be to just export the
fence (either as a sync_file, which I think amdgpu doesn't support because
there's no android egl support in mesa, or a drm_syncobj, which you can get
as a standalone fd too iirc) and then just use the fence a bit from
userspace (like wait on it or get its status) after the device is
unplugged.

I think this should cover most of the cross-driver issues that fences
bring in, and the other problems we can worry about once we spot them.
-Daniel
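
As a concrete illustration of that intermediate step, here is a minimal
sketch (the render node path is a placeholder, and the command submission
that would actually attach a fence to the syncobj, as well as the sysfs
removal itself, are elided):

    /* Sketch: export a drm_syncobj as a standalone fd, then keep
     * poking at it after the device is unplugged. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <xf86drm.h>

    int main(void)
    {
            int drm_fd = open("/dev/dri/renderD129", O_RDWR); /* placeholder */
            uint32_t handle;
            int sync_fd, ret;

            drmSyncobjCreate(drm_fd, 0, &handle);
            drmSyncobjHandleToFD(drm_fd, handle, &sync_fd); /* standalone fd */

            /* ... attach a fence to the syncobj via a command
             * submission, then remove the device through sysfs ... */

            /* Post-unplug, using the fence must fail or time out
             * cleanly rather than touch freed driver structures. */
            ret = drmSyncobjWait(drm_fd, &handle, 1,
                                 0 /* timeout_nsec */, 0 /* flags */,
                                 NULL);
            printf("post-unplug syncobj wait: %d\n", ret);

            close(sync_fd);
            close(drm_fd);
            return 0;
    }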

> [snip - remainder of quoted thread]
Andrey Grodzovsky Feb. 9, 2021, 3:34 p.m. UTC | #12
On 2/9/21 4:50 AM, Daniel Vetter wrote:
> On Mon, Feb 08, 2021 at 11:01:14PM -0500, Andrey Grodzovsky wrote:
>> On 2/8/21 2:27 AM, Daniel Vetter wrote:
>>> On Mon, Feb 8, 2021 at 6:59 AM Andrey Grodzovsky
>>> <Andrey.Grodzovsky@amd.com> wrote:
>>>> On 1/20/21 10:59 AM, Daniel Vetter wrote:
>>>>> On Wed, Jan 20, 2021 at 3:20 PM Andrey Grodzovsky
>>>>> <Andrey.Grodzovsky@amd.com> wrote:
>>>>>> On 1/20/21 4:05 AM, Daniel Vetter wrote:
>>>>>>> On Tue, Jan 19, 2021 at 01:18:15PM -0500, Andrey Grodzovsky wrote:
>>>>>>>> On 1/19/21 1:08 PM, Daniel Vetter wrote:
>>>>>>>>> On Tue, Jan 19, 2021 at 6:31 PM Andrey Grodzovsky
>>>>>>>>> <Andrey.Grodzovsky@amd.com> wrote:
>>>>>>>>>> On 1/19/21 9:16 AM, Daniel Vetter wrote:
>>>>>>>>>>> On Mon, Jan 18, 2021 at 04:01:09PM -0500, Andrey Grodzovsky wrote:
>>>> [snip - cover letter and earlier discussion quoted in full above]
>>> Yup. Would be good if you can submit this one for inclusion.
>>> -Daniel
>>
>> Will do, together with the exported dma-buf test once I write it.
>>
>> P.S. How am I supposed to do the exported fence test? Exporting a fence from
>> device A, importing it into device B, unplugging device A and then signaling
>> the fence from device B - isn't that supposed to call a fence cb which was
>> registered by the exporter, which by now is dead, and hence cause a
>> 'use after free' ?
> Yeah, in the end we'd need 2 hw devices for testing full fence
> functionality. A useful intermediate step would be to just export the
> fence (either as a sync_file, which I think amdgpu doesn't support because
> there's no android egl support in mesa, or a drm_syncobj, which you can get
> as a standalone fd too iirc) and then just use the fence a bit from
> userspace (like wait on it or get its status) after the device is
> unplugged.
>
> I think this should cover most of the cross-driver issues that fences
> bring in, and the other problems we can worry about once we spot them.
> -Daniel


OK, will write up all the tests and submit a merge request for all of them
together to the IGT gitlab.

Andrey


> [snip - remainder of quoted thread]
>>>>>>>>>>>>        drivers/gpu/drm/v3d/v3d_sched.c                   |  32 ++---
>>>>>>>>>>>>        include/drm/gpu_scheduler.h                       |  17 ++-
>>>>>>>>>>>>        include/drm/ttm/ttm_bo_api.h                      |   2 +
>>>>>>>>>>>>        45 files changed, 583 insertions(+), 198 deletions(-)
>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>> 2.7.4
>>>>>>>>>>>>
>>>
Andrey Grodzovsky Feb. 18, 2021, 8:03 p.m. UTC | #13
Looked a bit into it. I want to export a sync object to an FD and import from
that FD, such that I will wait on the imported sync object handle from one
thread while signaling the exported sync object handle from another (post
device unplug).

My problem is how to create a sync object with a non-signaled 'fake' fence.
I only see an API that creates it with an already signaled fence (or none) -
https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/drm_syncobj.c#L56

P.S. I expect the kernel to crash since, unlike with dma-bufs, we don't hold a
drm device reference here on export.
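
One idea (rough, untested sketch) that sidesteps the 'fake' fence question:
wait with DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT on an empty syncobj, so no
fence needs to be attached up front. The render node paths and the manual
unplug pause are assumptions:

#include <fcntl.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <xf86drm.h>

static int fd_b;		/* importer device, stays plugged in */
static uint32_t hb;		/* imported syncobj handle */

static void *waiter(void *arg)
{
	/* WAIT_FOR_SUBMIT blocks until a fence shows up, so an empty
	 * (fence-less) syncobj is enough here. */
	int ret = drmSyncobjWait(fd_b, &hb, 1, INT64_MAX,
				 DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT, NULL);
	printf("wait returned %d\n", ret);
	return NULL;
}

int main(void)
{
	int fd_a = open("/dev/dri/renderD128", O_RDWR);	/* exporter, to be unplugged */
	uint32_t ha;
	int sobj_fd;
	pthread_t t;

	fd_b = open("/dev/dri/renderD129", O_RDWR);

	drmSyncobjCreate(fd_a, 0, &ha);			/* created without a fence */
	drmSyncobjHandleToFD(fd_a, ha, &sobj_fd);	/* export as standalone fd */
	drmSyncobjFDToHandle(fd_b, sobj_fd, &hb);	/* import on device B */

	/* The syncobj is refcounted: dropping the original handle (and later
	 * unplugging device A) must not free it behind device B's back. */
	drmSyncobjDestroy(fd_a, ha);

	pthread_create(&t, NULL, waiter, NULL);
	printf("unplug device A now, then press enter to signal from B\n");
	getchar();

	drmSyncobjSignal(fd_b, &hb, 1);	/* attaches an already-signaled stub fence */
	pthread_join(t, NULL);
	return 0;
}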

Andrey

On 2/9/21 4:50 AM, Daniel Vetter wrote:
> Yeah in the end we'd need 2 hw devices for testing full fence
> functionality. A useful intermediate step would be to just export the
> fence (either as sync_file, which I think amdgpu doesn't support because
> no android egl support in mesa) or drm_syncobj (which you can do as
> standalone fd too iirc), and then just using the fence a bit from
> userspace (like wait on it or get its status) after the device is
> unplugged.
Daniel Vetter Feb. 19, 2021, 10:24 a.m. UTC | #14
On Thu, Feb 18, 2021 at 9:03 PM Andrey Grodzovsky
<Andrey.Grodzovsky@amd.com> wrote:
>
> Looked a bit into it, I want to export sync_object to FD and import  from that FD
> such that I will wait on the imported sync object handle from one thread while
> signaling the exported sync object handle from another (post device unplug) ?
>
> My problem is how to create a sync object with a non signaled 'fake' fence ?
> I only see API that creates it with already signaled fence (or none) -
> https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/drm_syncobj.c#L56
>
> P.S I expect the kernel to crash since unlike with dma_bufs we don't hold
> drm device reference here on export.

Well, maybe there's no crash. I think if you go through all the dma_fences
that you have and force-complete them, then external callers won't go into
the driver anymore. There are still pointers potentially pointing at your
device struct and all that, but it should work. Still needs some audit ofc.
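
Rough kernel-side sketch of that force-complete idea, modeled on what
amdgpu_fence_driver_fini already does per ring (field names from memory, so
treat the details as assumptions):

static void amdgpu_force_complete_all(struct amdgpu_device *adev)
{
	int i;

	for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
		struct amdgpu_ring *ring = adev->rings[i];

		if (!ring || !ring->fence_drv.initialized)
			continue;

		/* Writes the latest emitted seqno back as completed and
		 * processes it, signaling every outstanding fence on the
		 * ring, so external waiters never re-enter the driver. */
		amdgpu_fence_driver_force_completion(ring);
	}
}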

Wrt how you get such a free-standing fence, that's amdgpu specific. Roughly
- submit cs
- get the fence for that (either sync_file, but I don't think amdgpu
supports that, or maybe through drm_syncobj)
- hotunplug
- wait on that fence somehow (drm_syncobj has direct uapi for this,
same for sync_file I think)
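
In (untested) code, assuming libdrm-amdgpu's FENCE_TO_HANDLE path and a
hypothetical submit_nop_ib() helper standing in for the real context/BO/IB
setup plus amdgpu_cs_submit() call:

#include <amdgpu.h>
#include <amdgpu_drm.h>
#include <stdint.h>
#include <stdio.h>

/* hypothetical: submits a NOP IB and returns the CS sequence number */
uint64_t submit_nop_ib(amdgpu_device_handle dev, amdgpu_context_handle ctx);

int export_cs_fence(amdgpu_device_handle dev, amdgpu_context_handle ctx)
{
	struct amdgpu_cs_fence fence = {
		.context = ctx,
		.ip_type = AMDGPU_HW_IP_GFX,
		.ip_instance = 0,
		.ring = 0,
		.fence = submit_nop_ib(dev, ctx),
	};
	uint32_t sobj_fd;
	int r;

	/* Turn the CS fence into a standalone syncobj fd that outlives the
	 * submission - and, for the test, the device itself. */
	r = amdgpu_cs_fence_to_handle(dev, &fence,
				      AMDGPU_FENCE_TO_HANDLE_GET_SYNCOBJ_FD,
				      &sobj_fd);
	if (r)
		return r;

	printf("fence exported as syncobj fd %u\n", sobj_fd);
	/* ... hotunplug here, then wait on it via drmSyncobjWait() through a
	 * handle imported into a still-present device. */
	return 0;
}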

Cheers, Daniel

>
> Andrey
>
> On 2/9/21 4:50 AM, Daniel Vetter wrote:
> > Yeah in the end we'd need 2 hw devices for testing full fence
> > functionality. A useful intermediate step would be to just export the
> > fence (either as sync_file, which I think amdgpu doesn't support because
> > no android egl support in mesa) or drm_syncobj (which you can do as
> > standalone fd too iirc), and then just using the fence a bit from
> > userspace (like wait on it or get its status) after the device is
> > unplugged.
Andrey Grodzovsky Feb. 24, 2021, 4:30 p.m. UTC | #15
On 2021-02-19 5:24 a.m., Daniel Vetter wrote:
> On Thu, Feb 18, 2021 at 9:03 PM Andrey Grodzovsky
> <Andrey.Grodzovsky@amd.com> wrote:
>> Looked a bit into it, I want to export sync_object to FD and import  from that FD
>> such that I will wait on the imported sync object handle from one thread while
>> signaling the exported sync object handle from another (post device unplug) ?
>>
>> My problem is how to create a sync object with a non signaled 'fake' fence ?
>> I only see API that creates it with already signaled fence (or none) -
>> https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/drm_syncobj.c#L56
>>
>> P.S I expect the kernel to crash since unlike with dma_bufs we don't hold
>> drm device reference here on export.
> Well maybe there's no crash. I think if you go through all your
> dma_fence that you have and force-complete them, then I think external
> callers wont go into the driver anymore. But there's still pointers
> potentially pointing at your device struct and all that, but should
> work. Still needs some audit ofc.
>
> Wrt how you get such a free-standing fence, that's amdgpu specific. Roughly
> - submit cs
> - get the fence for that (either sync_file, but I don't think amdgpu
> supports that, or maybe through drm_syncobj)
> - hotunplug
> - wait on that fence somehow (drm_syncobj has direct uapi for this,
> same for sync_file I think)
>
> Cheers, Daniel


Indeed, it worked fine; I did it with 2 devices. Since the syncobj is
refcounted, even after I destroyed the original syncobj and unplugged the
device, the exported syncobj and the fence inside didn't go anywhere.

See my 3 tests in my branch on GitLab
https://gitlab.freedesktop.org/agrodzov/igt-gpu-tools/-/commits/master
and let me know if I should go ahead and do a merge request (into which
target project/branch?) or whether you have more comments.

Andrey


>
>> Andrey
>>
>> On 2/9/21 4:50 AM, Daniel Vetter wrote:
>>> Yeah in the end we'd need 2 hw devices for testing full fence
>>> functionality. A useful intermediate step would be to just export the
>>> fence (either as sync_file, which I think amdgpu doesn't support because
>>> no android egl support in mesa) or drm_syncobj (which you can do as
>>> standalone fd too iirc), and then just using the fence a bit from
>>> userspace (like wait on it or get its status) after the device is
>>> unplugged.
>
>
Daniel Vetter Feb. 25, 2021, 10:25 a.m. UTC | #16
On Wed, Feb 24, 2021 at 11:30:50AM -0500, Andrey Grodzovsky wrote:
> 
> On 2021-02-19 5:24 a.m., Daniel Vetter wrote:
> > On Thu, Feb 18, 2021 at 9:03 PM Andrey Grodzovsky
> > <Andrey.Grodzovsky@amd.com> wrote:
> > > Looked a bit into it, I want to export sync_object to FD and import  from that FD
> > > such that I will wait on the imported sync object handle from one thread while
> > > signaling the exported sync object handle from another (post device unplug) ?
> > > 
> > > My problem is how to create a sync object with a non signaled 'fake' fence ?
> > > I only see API that creates it with already signaled fence (or none) -
> > > https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/drm_syncobj.c#L56
> > > 
> > > P.S I expect the kernel to crash since unlike with dma_bufs we don't hold
> > > drm device reference here on export.
> > Well maybe there's no crash. I think if you go through all your
> > dma_fence that you have and force-complete them, then I think external
> > callers wont go into the driver anymore. But there's still pointers
> > potentially pointing at your device struct and all that, but should
> > work. Still needs some audit ofc.
> > 
> > Wrt how you get such a free-standing fence, that's amdgpu specific. Roughly
> > - submit cs
> > - get the fence for that (either sync_file, but I don't think amdgpu
> > supports that, or maybe through drm_syncobj)
> > - hotunplug
> > - wait on that fence somehow (drm_syncobj has direct uapi for this,
> > same for sync_file I think)
> > 
> > Cheers, Daniel
> 
> 
> Indeed worked fine, did with 2 devices. Since syncobj is refcounted, even
> after I
> destroyed the original syncobj and unplugged the device, the exported
> syncobj and the
> fence inside didn't go anywhere.
> 
> See my 3 tests in my branch on Gitlab
> https://gitlab.freedesktop.org/agrodzov/igt-gpu-tools/-/commits/master
> and let me know if I should go ahead and do a merge request (into which
> target project/branch ?) or you
> have more comments.

igt still works with patch submission rather than merge requests.
-Daniel

> 
> Andrey
> 
> 
> > 
> > > Andrey
> > > 
> > > On 2/9/21 4:50 AM, Daniel Vetter wrote:
> > > > Yeah in the end we'd need 2 hw devices for testing full fence
> > > > functionality. A useful intermediate step would be to just export the
> > > > fence (either as sync_file, which I think amdgpu doesn't support because
> > > > no android egl support in mesa) or drm_syncobj (which you can do as
> > > > standalone fd too iirc), and then just using the fence a bit from
> > > > userspace (like wait on it or get its status) after the device is
> > > > unplugged.
> > 
> >
Andrey Grodzovsky Feb. 25, 2021, 4:12 p.m. UTC | #17
On 2021-02-25 5:25 a.m., Daniel Vetter wrote:
> On Wed, Feb 24, 2021 at 11:30:50AM -0500, Andrey Grodzovsky wrote:
>> On 2021-02-19 5:24 a.m., Daniel Vetter wrote:
>>> On Thu, Feb 18, 2021 at 9:03 PM Andrey Grodzovsky
>>> <Andrey.Grodzovsky@amd.com> wrote:
>>>> Looked a bit into it, I want to export sync_object to FD and import  from that FD
>>>> such that I will wait on the imported sync object handle from one thread while
>>>> signaling the exported sync object handle from another (post device unplug) ?
>>>>
>>>> My problem is how to create a sync object with a non signaled 'fake' fence ?
>>>> I only see API that creates it with already signaled fence (or none) -
>>>> https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/drm_syncobj.c#L56
>>>>
>>>> P.S I expect the kernel to crash since unlike with dma_bufs we don't hold
>>>> drm device reference here on export.
>>> Well maybe there's no crash. I think if you go through all your
>>> dma_fence that you have and force-complete them, then I think external
>>> callers wont go into the driver anymore. But there's still pointers
>>> potentially pointing at your device struct and all that, but should
>>> work. Still needs some audit ofc.
>>>
>>> Wrt how you get such a free-standing fence, that's amdgpu specific. Roughly
>>> - submit cs
>>> - get the fence for that (either sync_file, but I don't think amdgpu
>>> supports that, or maybe through drm_syncobj)
>>> - hotunplug
>>> - wait on that fence somehow (drm_syncobj has direct uapi for this,
>>> same for sync_file I think)
>>>
>>> Cheers, Daniel
>>
>> Indeed worked fine, did with 2 devices. Since syncobj is refcounted, even
>> after I
>> destroyed the original syncobj and unplugged the device, the exported
>> syncobj and the
>> fence inside didn't go anywhere.
>>
>> See my 3 tests in my branch on Gitlab
>> https://gitlab.freedesktop.org/agrodzov/igt-gpu-tools/-/commits/master
>> and let me know if I should go ahead and do a merge request (into which
>> target project/branch ?) or you
>> have more comments.
> igt still works with patch submission.
> -Daniel


I see. I need to divert to other work for a while; I will get to it once I
am back on device unplug.

Andrey


>
>> Andrey
>>
>>
>>>> Andrey
>>>>
>>>> On 2/9/21 4:50 AM, Daniel Vetter wrote:
>>>>> Yeah in the end we'd need 2 hw devices for testing full fence
>>>>> functionality. A useful intermediate step would be to just export the
>>>>> fence (either as sync_file, which I think amdgpu doesn't support because
>>>>> no android egl support in mesa) or drm_syncobj (which you can do as
>>>>> standalone fd too iirc), and then just using the fence a bit from
>>>>> userspace (like wait on it or get its status) after the device is
>>>>> unplugged.
>>>