
[02/25] dma-fence: prime lockdep annotations

Message ID 20200707201229.472834-3-daniel.vetter@ffwll.ch
State New, archived
Series dma-fence annotations, round 3

Commit Message

Daniel Vetter July 7, 2020, 8:12 p.m. UTC
Two in one go:
- it is allowed to call dma_fence_wait() while holding a
  dma_resv_lock(). This is fundamental to how eviction works with ttm,
  so required.

- it is allowed to call dma_fence_wait() from memory reclaim contexts,
  specifically from shrinker callbacks (which i915 does), and from mmu
  notifier callbacks (which amdgpu does, and which i915 sometimes also
  does, and probably always should, but that's kinda a debate). Also
  for stuff like HMM we really need to be able to do this, or things
  get real dicey.

The consequence is that any critical path necessary to get to
dma_fence_signal() for a fence must never a) call dma_resv_lock() nor
b) allocate memory with GFP_KERNEL. Also, by implication of
dma_resv_lock(), no userspace faulting is allowed. These are some
supremely obnoxious limitations, which is why we need to sprinkle the
right annotations over all relevant paths.
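
To make the rules above concrete, here is a minimal sketch (not part of
this patch, all foo_* names are made up) of what they mean for a driver:
waiting on fences under the reservation lock is fine, while the path that
signals a fence must stay out of dma_resv_lock() and GFP_KERNEL. The
begin/end signalling annotations are the ones introduced by this series.

#include <linux/dma-fence.h>
#include <linux/dma-resv.h>

/* Hypothetical driver structures, for illustration only. */
struct foo_bo {
	struct dma_resv *resv;
	struct dma_fence *last_use;
};

struct foo_job {
	struct dma_fence hw_fence;
};

/* Allowed: dma_fence_wait() while holding dma_resv_lock(), as in ttm eviction. */
static int foo_evict_bo(struct foo_bo *bo)
{
	long lret;
	int ret;

	ret = dma_resv_lock(bo->resv, NULL);
	if (ret)
		return ret;

	lret = dma_fence_wait(bo->last_use, true);

	dma_resv_unlock(bo->resv);
	return lret < 0 ? lret : 0;
}

/*
 * The flip side: everything between the annotations is fence-signalling
 * critical path and must not take dma_resv_lock(), allocate with
 * GFP_KERNEL, or fault in userspace memory.
 */
static void foo_job_done_irq(struct foo_job *job)
{
	bool cookie = dma_fence_begin_signalling();

	dma_fence_signal(&job->hw_fence);

	dma_fence_end_signalling(cookie);
}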

The one big locking context we're leaving out here is mmu notifiers,
added in

commit 23b68395c7c78a764e8963fc15a7cfd318bf187f
Author: Daniel Vetter <daniel.vetter@ffwll.ch>
Date:   Mon Aug 26 22:14:21 2019 +0200

    mm/mmu_notifiers: add a lockdep map for invalidate_range_start/end

that one covers a lot of other callsites, and it's also allowed to
wait on dma-fences from mmu notifiers. But there are no ready-made
functions exposed to prime this, so I've left it out for now.

v2: Also track against mmu notifier context.

v3: kerneldoc to spec the cross-driver contract. Note that currently
i915 throws in a hard-coded 10s timeout on foreign fences (not sure
why that was done, but it's there), which is why that rule is worded
with SHOULD instead of MUST.

Also, some of the mmu_notifier/shrinker rules might surprise SoC
drivers; I haven't fully audited them all. A full audit is infeasible
anyway; we'll need to run them with lockdep and dma-fence annotations
and see what goes boom.

v4: A spelling fix from Mika

v5: #ifdef for CONFIG_MMU_NOTIFIER. Reported by 0day. Unfortunately
this means lockdep enforcement is slightly inconsistent; it won't spot
GFP_NOIO and GFP_NOFS allocations in the wrong spot if
CONFIG_MMU_NOTIFIER is disabled in the kernel config. Oh well.

v5: Note that only drivers/gpu has a reasonable (or at least
historical) excuse to use dma_fence_wait() from shrinker and mmu
notifier callbacks. Everyone else should either have a better memory
manager model, or better hardware. This reflects discussions with
Jason Gunthorpe.

Cc: Jason Gunthorpe <jgg@mellanox.com>
Cc: Felix Kuehling <Felix.Kuehling@amd.com>
Cc: kernel test robot <lkp@intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@intel.com> (v4)
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Cc: Thomas Hellstrom <thomas.hellstrom@intel.com>
Cc: linux-media@vger.kernel.org
Cc: linaro-mm-sig@lists.linaro.org
Cc: linux-rdma@vger.kernel.org
Cc: amd-gfx@lists.freedesktop.org
Cc: intel-gfx@lists.freedesktop.org
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Christian König <christian.koenig@amd.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
---
 Documentation/driver-api/dma-buf.rst |  6 ++++
 drivers/dma-buf/dma-fence.c          | 46 ++++++++++++++++++++++++++++
 drivers/dma-buf/dma-resv.c           |  8 +++++
 include/linux/dma-fence.h            |  1 +
 4 files changed, 61 insertions(+)

Comments

Daniel Vetter July 9, 2020, 8:09 a.m. UTC | #1
Hi Jason,

Below is the paragraph I've added after our discussions around dma-fences
outside of drivers/gpu. Good enough for an ack on this, or want something
changed?

Thanks, Daniel

> + * Note that only GPU drivers have a reasonable excuse for both requiring
> + * &mmu_interval_notifier and &shrinker callbacks at the same time as having to
> + * track asynchronous compute work using &dma_fence. No driver outside of
> + * drivers/gpu should ever call dma_fence_wait() in such contexts.


Jason Gunthorpe July 10, 2020, 12:43 p.m. UTC | #2
On Thu, Jul 09, 2020 at 10:09:11AM +0200, Daniel Vetter wrote:
> Hi Jason,
> 
> Below the paragraph I've added after our discussions around dma-fences
> outside of drivers/gpu. Good enough for an ack on this, or want something
> changed?
> 
> Thanks, Daniel
> 
> > + * Note that only GPU drivers have a reasonable excuse for both requiring
> > + * &mmu_interval_notifier and &shrinker callbacks at the same time as having to
> > + * track asynchronous compute work using &dma_fence. No driver outside of
> > + * drivers/gpu should ever call dma_fence_wait() in such contexts.

I was hoping we'd get to 'no driver outside GPU should even use
dma_fence()'

Is that not reasonable?

With your annotations, once anything uses dma_fence it has to assume
the worst case, right?

Jason
Christian König July 10, 2020, 12:48 p.m. UTC | #3
On 10.07.20 14:43, Jason Gunthorpe wrote:
> On Thu, Jul 09, 2020 at 10:09:11AM +0200, Daniel Vetter wrote:
>> Hi Jason,
>>
>> Below the paragraph I've added after our discussions around dma-fences
>> outside of drivers/gpu. Good enough for an ack on this, or want something
>> changed?
>>
>> Thanks, Daniel
>>
>>> + * Note that only GPU drivers have a reasonable excuse for both requiring
>>> + * &mmu_interval_notifier and &shrinker callbacks at the same time as having to
>>> + * track asynchronous compute work using &dma_fence. No driver outside of
>>> + * drivers/gpu should ever call dma_fence_wait() in such contexts.
> I was hoping we'd get to 'no driver outside GPU should even use
> dma_fence()'

My last status was that V4L could come to use dma_fences as well.

I'm not 100% sure, but wouldn't MMU notifier + dma_fence be a valid use 
case for things like custom FPGA interfaces as well?

> Is that not reasonable?
>
> When your annotations once anything uses dma_fence it has to assume
> the worst cases, right?

Well a defensive approach is usually the best idea, yes.

Christian.

>
> Jason
Jason Gunthorpe July 10, 2020, 12:54 p.m. UTC | #4
On Fri, Jul 10, 2020 at 02:48:16PM +0200, Christian König wrote:
> Am 10.07.20 um 14:43 schrieb Jason Gunthorpe:
> > On Thu, Jul 09, 2020 at 10:09:11AM +0200, Daniel Vetter wrote:
> > > Hi Jason,
> > > 
> > > Below the paragraph I've added after our discussions around dma-fences
> > > outside of drivers/gpu. Good enough for an ack on this, or want something
> > > changed?
> > > 
> > > Thanks, Daniel
> > > 
> > > > + * Note that only GPU drivers have a reasonable excuse for both requiring
> > > > + * &mmu_interval_notifier and &shrinker callbacks at the same time as having to
> > > > + * track asynchronous compute work using &dma_fence. No driver outside of
> > > > + * drivers/gpu should ever call dma_fence_wait() in such contexts.
> > I was hoping we'd get to 'no driver outside GPU should even use
> > dma_fence()'
> 
> My last status was that V4L could come use dma_fences as well.

I'm sure lots of places *could* use it, but I think I understood that
it is a bad idea unless you have to fit into the DRM uAPI?

You are better to do something contained in the single driver where
locking can be analyzed.

> I'm not 100% sure, but wouldn't MMU notifier + dma_fence be a valid use case
> for things like custom FPGA interfaces as well?

I don't think we should expand the list of drivers that use this
technique. 

Drivers that can't suspend should pin memory, not use blocked
notifiers to create pinned memory.

Jason
Christian König July 10, 2020, 1:01 p.m. UTC | #5
On 10.07.20 14:54, Jason Gunthorpe wrote:
> On Fri, Jul 10, 2020 at 02:48:16PM +0200, Christian König wrote:
>> Am 10.07.20 um 14:43 schrieb Jason Gunthorpe:
>>> On Thu, Jul 09, 2020 at 10:09:11AM +0200, Daniel Vetter wrote:
>>>> Hi Jason,
>>>>
>>>> Below the paragraph I've added after our discussions around dma-fences
>>>> outside of drivers/gpu. Good enough for an ack on this, or want something
>>>> changed?
>>>>
>>>> Thanks, Daniel
>>>>
>>>>> + * Note that only GPU drivers have a reasonable excuse for both requiring
>>>>> + * &mmu_interval_notifier and &shrinker callbacks at the same time as having to
>>>>> + * track asynchronous compute work using &dma_fence. No driver outside of
>>>>> + * drivers/gpu should ever call dma_fence_wait() in such contexts.
>>> I was hoping we'd get to 'no driver outside GPU should even use
>>> dma_fence()'
>> My last status was that V4L could come use dma_fences as well.
> I'm sure lots of places *could* use it, but I think I understood that
> it is a bad idea unless you have to fit into the DRM uAPI?

It would be a bit questionable if you use the container objects we came 
up with in the DRM subsystem outside of it.

But using the dma_fence itself makes sense for everything which could do 
async DMA in general.
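
As a rough sketch of what that general use looks like (hedged, all foo_*
names hypothetical, not taken from any existing driver), a driver doing
async DMA would embed and signal a fence roughly like this:

#include <linux/dma-fence.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

/* Hypothetical example: one dma_fence per async DMA operation. */
struct foo_dma_fence {
	struct dma_fence base;
	spinlock_t lock;
};

static const char *foo_fence_get_driver_name(struct dma_fence *f)
{
	return "foo";
}

static const char *foo_fence_get_timeline_name(struct dma_fence *f)
{
	return "foo-dma";
}

static const struct dma_fence_ops foo_fence_ops = {
	.get_driver_name = foo_fence_get_driver_name,
	.get_timeline_name = foo_fence_get_timeline_name,
};

/* context would typically come from dma_fence_context_alloc() at init time */
static struct foo_dma_fence *foo_dma_fence_create(u64 context, u64 seqno)
{
	struct foo_dma_fence *f = kzalloc(sizeof(*f), GFP_KERNEL);

	if (!f)
		return NULL;
	spin_lock_init(&f->lock);
	dma_fence_init(&f->base, &foo_fence_ops, &f->lock, context, seqno);
	return f;
}

/* Called from the completion interrupt once the DMA is done. */
static void foo_dma_done(struct foo_dma_fence *f)
{
	dma_fence_signal(&f->base);
	dma_fence_put(&f->base);
}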

> You are better to do something contained in the single driver where
> locking can be analyzed.
>
>> I'm not 100% sure, but wouldn't MMU notifier + dma_fence be a valid use case
>> for things like custom FPGA interfaces as well?
> I don't think we should expand the list of drivers that use this
> technique.
> Drivers that can't suspend should pin memory, not use blocked
> notifiers to created pinned memory.

Agreed totally, it's a complete pain to maintain even for the GPU drivers.

Unfortunately that doesn't stop users from requesting it. So I'm
pretty sure we are going to see more of this.

Christian.

>
> Jason
Jason Gunthorpe July 10, 2020, 1:48 p.m. UTC | #6
On Fri, Jul 10, 2020 at 03:01:10PM +0200, Christian König wrote:
> Am 10.07.20 um 14:54 schrieb Jason Gunthorpe:
> > On Fri, Jul 10, 2020 at 02:48:16PM +0200, Christian König wrote:
> > > Am 10.07.20 um 14:43 schrieb Jason Gunthorpe:
> > > > On Thu, Jul 09, 2020 at 10:09:11AM +0200, Daniel Vetter wrote:
> > > > > Hi Jason,
> > > > > 
> > > > > Below the paragraph I've added after our discussions around dma-fences
> > > > > outside of drivers/gpu. Good enough for an ack on this, or want something
> > > > > changed?
> > > > > 
> > > > > Thanks, Daniel
> > > > > 
> > > > > > + * Note that only GPU drivers have a reasonable excuse for both requiring
> > > > > > + * &mmu_interval_notifier and &shrinker callbacks at the same time as having to
> > > > > > + * track asynchronous compute work using &dma_fence. No driver outside of
> > > > > > + * drivers/gpu should ever call dma_fence_wait() in such contexts.
> > > > I was hoping we'd get to 'no driver outside GPU should even use
> > > > dma_fence()'
> > > My last status was that V4L could come use dma_fences as well.
> > I'm sure lots of places *could* use it, but I think I understood that
> > it is a bad idea unless you have to fit into the DRM uAPI?
> 
> It would be a bit questionable if you use the container objects we came up
> with in the DRM subsystem outside of it.
> 
> But using the dma_fence itself makes sense for everything which could do
> async DMA in general.

dma_fence only possibly makes some sense if you intend to expose the
completion outside a single driver. 

The preferred kernel design pattern for this is to connect things with
a function callback.

So the actual use case of dma_fence is quite narrow and tightly linked
to DRM.

I don't think we should spread this beyond DRM, I can't see a reason.
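
For contrast, a minimal sketch of the plain-callback pattern referred to
here (all bar_* names hypothetical), where completion never leaves the
driver that issued the DMA:

/* Hypothetical single-driver completion: no cross-driver fence involved. */
struct bar_dma_request {
	void (*done)(struct bar_dma_request *req, int status);
	void *context;	/* private data for the submitter */
};

/*
 * The driver invokes the callback when its DMA finishes; the dependency
 * stays inside one driver and its locking can be audited locally.
 */
static void bar_dma_complete(struct bar_dma_request *req, int status)
{
	req->done(req, status);
}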

> > You are better to do something contained in the single driver where
> > locking can be analyzed.
> > 
> > > I'm not 100% sure, but wouldn't MMU notifier + dma_fence be a valid use case
> > > for things like custom FPGA interfaces as well?
> > I don't think we should expand the list of drivers that use this
> > technique.
> > Drivers that can't suspend should pin memory, not use blocked
> > notifiers to created pinned memory.
> 
> Agreed totally, it's a complete pain to maintain even for the GPU drivers.
> 
> Unfortunately that doesn't change users from requesting it. So I'm pretty
> sure we are going to see more of this.

Kernel maintainers need to say no.

The proper way to do DMA on non-faulting hardware is page pinning.

Trying to improve performance of limited HW by using sketchy
techniques at the cost of general system stability should be a NAK.

Jason
Daniel Vetter July 10, 2020, 2:02 p.m. UTC | #7
On Fri, Jul 10, 2020 at 3:48 PM Jason Gunthorpe <jgg@mellanox.com> wrote:
>
> On Fri, Jul 10, 2020 at 03:01:10PM +0200, Christian König wrote:
> > Am 10.07.20 um 14:54 schrieb Jason Gunthorpe:
> > > On Fri, Jul 10, 2020 at 02:48:16PM +0200, Christian König wrote:
> > > > Am 10.07.20 um 14:43 schrieb Jason Gunthorpe:
> > > > > On Thu, Jul 09, 2020 at 10:09:11AM +0200, Daniel Vetter wrote:
> > > > > > Hi Jason,
> > > > > >
> > > > > > Below the paragraph I've added after our discussions around dma-fences
> > > > > > outside of drivers/gpu. Good enough for an ack on this, or want something
> > > > > > changed?
> > > > > >
> > > > > > Thanks, Daniel
> > > > > >
> > > > > > > + * Note that only GPU drivers have a reasonable excuse for both requiring
> > > > > > > + * &mmu_interval_notifier and &shrinker callbacks at the same time as having to
> > > > > > > + * track asynchronous compute work using &dma_fence. No driver outside of
> > > > > > > + * drivers/gpu should ever call dma_fence_wait() in such contexts.
> > > > > I was hoping we'd get to 'no driver outside GPU should even use
> > > > > dma_fence()'
> > > > My last status was that V4L could come use dma_fences as well.
> > > I'm sure lots of places *could* use it, but I think I understood that
> > > it is a bad idea unless you have to fit into the DRM uAPI?
> >
> > It would be a bit questionable if you use the container objects we came up
> > with in the DRM subsystem outside of it.
> >
> > But using the dma_fence itself makes sense for everything which could do
> > async DMA in general.
>
> dma_fence only possibly makes some sense if you intend to expose the
> completion outside a single driver.
>
> The prefered kernel design pattern for this is to connect things with
> a function callback.
>
> So the actual use case of dma_fence is quite narrow and tightly linked
> to DRM.
>
> I don't think we should spread this beyond DRM, I can't see a reason.

Yeah, v4l has a legit reason to use dma_fence; android wants that
there. There have even been patches proposed years ago, but they never
landed because android is using some vendor hack horror show for
camera drivers right now.

But there is an effort going on to fix that (under the libcamera
heading), and I expect that once we have that, it'll want dma_fence
support. So outright excluding everyone from dma_fence is a bit too
much. They definitely shouldn't be used though for entirely
independent stuff.

> > > You are better to do something contained in the single driver where
> > > locking can be analyzed.
> > >
> > > > I'm not 100% sure, but wouldn't MMU notifier + dma_fence be a valid use case
> > > > for things like custom FPGA interfaces as well?
> > > I don't think we should expand the list of drivers that use this
> > > technique.
> > > Drivers that can't suspend should pin memory, not use blocked
> > > notifiers to created pinned memory.
> >
> > Agreed totally, it's a complete pain to maintain even for the GPU drivers.
> >
> > Unfortunately that doesn't change users from requesting it. So I'm pretty
> > sure we are going to see more of this.
>
> Kernel maintainers need to say no.
>
> The proper way to do DMA on no-faulting hardware is page pinning.
>
> Trying to improve performance of limited HW by using sketchy
> techniques at the cost of general system stability should be a NAK.

Well that's pretty much gpu drivers, all the horrors for a bit more speed :-)

On the text itself, should I upgrade to "must not" instead of "should
not"? Or is more needed?
-Daniel
Jason Gunthorpe July 10, 2020, 2:23 p.m. UTC | #8
On Fri, Jul 10, 2020 at 04:02:35PM +0200, Daniel Vetter wrote:

> > dma_fence only possibly makes some sense if you intend to expose the
> > completion outside a single driver.
> >
> > The prefered kernel design pattern for this is to connect things with
> > a function callback.
> >
> > So the actual use case of dma_fence is quite narrow and tightly linked
> > to DRM.
> >
> > I don't think we should spread this beyond DRM, I can't see a reason.
> 
> Yeah v4l has a legit reason to use dma_fence, android wants that
> there. 

'legit' in the sense that V4L is supposed to trigger stuff in DRM when
V4L DMA completes? I would still see that as part of DRM.

Or is it building a parallel DRM-like DMA completion graph?

> > Trying to improve performance of limited HW by using sketchy
> > techniques at the cost of general system stability should be a NAK.
>
> Well that's pretty much gpu drivers, all the horrors for a bit more speed :-)
> 
> On the text itself, should I upgrade to "must not" instead of "should
> not"? Or more needed?

Fundamentally having some unknowable graph of dependencies where parts
of the graph can be placed in critical regions like notifiers is a
complete maintenance nightmare.

I think building systems like this should be discouraged :\

Jason
Daniel Vetter July 10, 2020, 8:02 p.m. UTC | #9
On Fri, Jul 10, 2020 at 4:23 PM Jason Gunthorpe <jgg@mellanox.com> wrote:
>
> On Fri, Jul 10, 2020 at 04:02:35PM +0200, Daniel Vetter wrote:
>
> > > dma_fence only possibly makes some sense if you intend to expose the
> > > completion outside a single driver.
> > >
> > > The prefered kernel design pattern for this is to connect things with
> > > a function callback.
> > >
> > > So the actual use case of dma_fence is quite narrow and tightly linked
> > > to DRM.
> > >
> > > I don't think we should spread this beyond DRM, I can't see a reason.
> >
> > Yeah v4l has a legit reason to use dma_fence, android wants that
> > there.
>
> 'legit' in the sense the v4l is supposed to trigger stuff in DRM when
> V4L DMA completes? I would still see that as part of DRM

Yes, and also the other way around. But thus far it hasn't landed.
-Daniel

> Or is it building a parallel DRM like DMA completion graph?
>
> > > Trying to improve performance of limited HW by using sketchy
> > > techniques at the cost of general system stability should be a NAK.
> >
> > Well that's pretty much gpu drivers, all the horrors for a bit more speed :-)
> >
> > On the text itself, should I upgrade to "must not" instead of "should
> > not"? Or more needed?
>
> Fundamentally having some unknowable graph of dependencies where parts
> of the graph can be placed in critical regions like notifiers is a
> complete maintenance nightmare.
>
> I think building systems like this should be discouraged :\

Patch

diff --git a/Documentation/driver-api/dma-buf.rst b/Documentation/driver-api/dma-buf.rst
index 05d856131140..f8f6decde359 100644
--- a/Documentation/driver-api/dma-buf.rst
+++ b/Documentation/driver-api/dma-buf.rst
@@ -133,6 +133,12 @@  DMA Fences
 .. kernel-doc:: drivers/dma-buf/dma-fence.c
    :doc: DMA fences overview
 
+DMA Fence Cross-Driver Contract
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. kernel-doc:: drivers/dma-buf/dma-fence.c
+   :doc: fence cross-driver contract
+
 DMA Fence Signalling Annotations
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
index 0005bc002529..af1d8ea926b3 100644
--- a/drivers/dma-buf/dma-fence.c
+++ b/drivers/dma-buf/dma-fence.c
@@ -64,6 +64,52 @@  static atomic64_t dma_fence_context_counter = ATOMIC64_INIT(1);
  *   &dma_buf.resv pointer.
  */
 
+/**
+ * DOC: fence cross-driver contract
+ *
+ * Since &dma_fence provides a cross-driver contract, all drivers must follow the
+ * same rules:
+ *
+ * * Fences must complete in a reasonable time. Fences which represent kernels
+ *   and shaders submitted by userspace, which could run forever, must be backed
+ *   up by timeout and gpu hang recovery code. Minimally that code must prevent
+ *   further command submission and force complete all in-flight fences, e.g.
+ *   when the driver or hardware do not support gpu reset, or if the gpu reset
+ *   failed for some reason. Ideally the driver supports gpu recovery which only
+ *   affects the offending userspace context, and no other userspace
+ *   submissions.
+ *
+ * * Drivers may have different ideas of what completion within a reasonable
+ *   time means. Some hang recovery code uses a fixed timeout, others a mix
+ *   between observing forward progress and increasingly strict timeouts.
+ *   Drivers should not try to second guess timeout handling of fences from
+ *   other drivers.
+ *
+ * * To ensure there are no deadlocks of dma_fence_wait() against other locks,
+ *   drivers should annotate all code required to reach dma_fence_signal(),
+ *   which completes the fences, with dma_fence_begin_signalling() and
+ *   dma_fence_end_signalling().
+ *
+ * * Drivers are allowed to call dma_fence_wait() while holding dma_resv_lock().
+ *   This means any code required for fence completion cannot acquire a
+ *   &dma_resv lock. Note that this also pulls in the entire established
+ *   locking hierarchy around dma_resv_lock() and dma_resv_unlock().
+ *
+ * * Drivers are allowed to call dma_fence_wait() from their &shrinker
+ *   callbacks. This means any code required for fence completion cannot
+ *   allocate memory with GFP_KERNEL.
+ *
+ * * Drivers are allowed to call dma_fence_wait() from their &mmu_notifier
+ *   respectively &mmu_interval_notifier callbacks. This means any code required
+ *   for fence completion cannot allocate memory with GFP_NOFS or GFP_NOIO.
+ *   Only GFP_ATOMIC is permissible, which might fail.
+ *
+ * Note that only GPU drivers have a reasonable excuse for both requiring
+ * &mmu_interval_notifier and &shrinker callbacks at the same time as having to
+ * track asynchronous compute work using &dma_fence. No driver outside of
+ * drivers/gpu should ever call dma_fence_wait() in such contexts.
+ */
+
 static const char *dma_fence_stub_get_name(struct dma_fence *fence)
 {
         return "stub";
diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
index e7d7197d48ce..0e6675ec1d11 100644
--- a/drivers/dma-buf/dma-resv.c
+++ b/drivers/dma-buf/dma-resv.c
@@ -36,6 +36,7 @@ 
 #include <linux/export.h>
 #include <linux/mm.h>
 #include <linux/sched/mm.h>
+#include <linux/mmu_notifier.h>
 
 /**
  * DOC: Reservation Object Overview
@@ -116,6 +117,13 @@  static int __init dma_resv_lockdep(void)
 	if (ret == -EDEADLK)
 		dma_resv_lock_slow(&obj, &ctx);
 	fs_reclaim_acquire(GFP_KERNEL);
+#ifdef CONFIG_MMU_NOTIFIER
+	lock_map_acquire(&__mmu_notifier_invalidate_range_start_map);
+	__dma_fence_might_wait();
+	lock_map_release(&__mmu_notifier_invalidate_range_start_map);
+#else
+	__dma_fence_might_wait();
+#endif
 	fs_reclaim_release(GFP_KERNEL);
 	ww_mutex_unlock(&obj.lock);
 	ww_acquire_fini(&ctx);
diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h
index 3f288f7db2ef..09e23adb351d 100644
--- a/include/linux/dma-fence.h
+++ b/include/linux/dma-fence.h
@@ -360,6 +360,7 @@  dma_fence_get_rcu_safe(struct dma_fence __rcu **fencep)
 #ifdef CONFIG_LOCKDEP
 bool dma_fence_begin_signalling(void);
 void dma_fence_end_signalling(bool cookie);
+void __dma_fence_might_wait(void);
 #else
 static inline bool dma_fence_begin_signalling(void)
 {