
[1/4] dma-buf: add caching of sg_table

Message ID 20180718104741.2524-1-christian.koenig@amd.com (mailing list archive)
State New, archived

Commit Message

Christian König July 18, 2018, 10:47 a.m. UTC
To allow a smooth transition from pinning buffer objects to dynamic
invalidation, we first start to cache the sg_table for an attachment
unless the driver explicitly says not to do so.

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/dma-buf/dma-buf.c | 24 ++++++++++++++++++++++++
 include/linux/dma-buf.h   | 11 +++++++++++
 2 files changed, 35 insertions(+)

Comments

Christian König July 18, 2018, 1:24 p.m. UTC | #1
Hi Daniel,

Am 18.07.2018 um 14:07 schrieb Patchwork:
> == Series Details ==
>
> Series: series starting with [1/4] dma-buf: add caching of sg_table
> URL   : https://patchwork.freedesktop.org/series/46778/
> State : failure
> [SNIP]

it looks like I'm a step further in understanding the problems that 
come with this change.

I've more or less audited all use cases and think that only i915 is left 
with the following lock inversion: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_9705/fi-cfl-8700k/igt@gem_mmap_gtt@basic-small-bo-tiledx.html

Now my question is: what is &obj->mm.lock used for, and why do you guys 
call dma_buf_map_attachment() while holding it?

Thanks in advance,
Christian.
Daniel Vetter Aug. 7, 2018, 1:21 p.m. UTC | #2
On Wed, Jul 18, 2018 at 03:24:26PM +0200, Christian König wrote:
> Hi Daniel,
> 
> Am 18.07.2018 um 14:07 schrieb Patchwork:
> > == Series Details ==
> > 
> > Series: series starting with [1/4] dma-buf: add caching of sg_table
> > URL   : https://patchwork.freedesktop.org/series/46778/
> > State : failure
> > [SNIP]
> 
> it looks like I'm a step further understanding the problems which come with
> this change.
> 
> I've more or less audited all use cases and think that only i915 is left
> with the following lock inversion: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_9705/fi-cfl-8700k/igt@gem_mmap_gtt@basic-small-bo-tiledx.html
> 
> Now my question is what is &obj->mm.lock used for and why do you guys call
> dma_buf_map_attachment() while holding it?

obj->mm.lock is the lock to protect getting at the backing storage.
i915_gem_object_get_pages and _put_pages are the relevant functions.

Existing paths want to pin the backing storage while holding the
reservation lock. And your new path needs to do the inverse since
dma_buf_map_attachment now also requires the reservation lock. And that is
obviously called from within the dma-buf importer version of get_pages.

I think there are two solutions:

- Merge obj->mm.lock and the reservation lock. Probably the cleaner
  solution, but likely more work.

- Make sure the obj->mm.lock always nests within the reservation lock, and
  grab the reservation lock anywhere it's not yet grabbed. Then you can
  use the dma_buf_map_attachment_locked variant in
  i915_gem_object_get_pages_dmabuf to avoid the locking inversion. This
  would essentially make the obj->mm.lock fully redundant.

Either way is going to be quite a bit of work. I expect that you need to
replace all the cases of dma_buf_map_attachment in i915 with
dma_buf_map_attachment_locked, and adjust the entire callchain to the new
locking scheme.

The real trouble here imo is that i915 CI is just the canary; I expect a
bunch of other drivers will also end up with an inverted locking hierarchy
if dma_buf_map_attachment needs the reservation lock. And there's no
convenient CI for them, and code audit won't really cut it (at least I'm
too stupid to keep the locking hierarchy of an entire driver in my head).
-Daniel
Tvrtko Ursulin Aug. 7, 2018, 3:13 p.m. UTC | #3
On 07/08/2018 14:21, Daniel Vetter wrote:
> On Wed, Jul 18, 2018 at 03:24:26PM +0200, Christian König wrote:
>> Hi Daniel,
>>
>> Am 18.07.2018 um 14:07 schrieb Patchwork:
>>> == Series Details ==
>>>
>>> Series: series starting with [1/4] dma-buf: add caching of sg_table
>>> URL   : https://patchwork.freedesktop.org/series/46778/
>>> State : failure
>>> [SNIP]
>>
>> it looks like I'm a step further understanding the problems which come with
>> this change.
>>
>> I've more or less audited all use cases and think that only i915 is left
>> with the following lock inversion: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_9705/fi-cfl-8700k/igt@gem_mmap_gtt@basic-small-bo-tiledx.html
>>
>> Now my question is what is &obj->mm.lock used for and why do you guys call
>> dma_buf_map_attachment() while holding it?
> 
> obj->mm.lock is the lock to protect getting at the backing storage.
> i915_gem_object_get_pages and _put_pages are the relevant functions.
> 
> Existing paths want to pin the backing storage while holding the
> reservation lock. And your new path needs to do the inverse since
> dma_buf_map_attachment now also requires the reservation lock. And that is
> obviously called from within the dma-buf importer version of get_pages.
> 
> I think there's 2 solutions:
> 
> - Merge obj->mm.lock and the reservation lock. Probably the cleaner
>    solution, but likely more work.
> 
> - Make sure the obj->mm.lock always nests within the reservation lock, and
>    grab the reservation lock anywhere it's not yet grabbed. Then you can
>    use the dma_buf_map_attachment_locked variant in
>    i915_gem_object_get_pages_dmabuf to avoid the locking inversion. This
>    would essentially make the obj->mm.lock fully redundant.
> 
> Either way is going to be quite a bit of work. I expect that you need to
> replace all the cases of dma_buf_map_attachment in i915 with
> dma_buf_map_attachment_locked, and adjust the entire callchain to the new
> locking scheme.
> 
> The real trouble here imo is that i915 CI is just the canary, I expect a
> bunch of other drivers will also look at an inverted locking hierarchy if
> dma_buf_map_attachment needs the reservation lock. And there's no
> convenient CI for them, and code audit won't cut it really (at least I'm
> too stupid to keep the locking hierarchy of an entire driver in my head).

We chatted about this on #intel-gfx and concluded that either solution 
boils down to replacing obj->mm.lock with the reservation lock. And 
that is problematic for i915, both because of a general direction 
towards more fine-grained locking, and because the reservation lock 
needs to be avoided in the shrinker path (we take obj->mm.lock when 
dropping the backing store there).

I proposed that maybe we could re-jig how we use obj->mm.lock a bit, to 
ensure the backing store vfunc (get_pages) is not called under it 
(although I haven't thought it through fully, it may be possible without 
too-significant drawbacks), but Chris also has some patches which may 
work around this in a different way. So I'll wait to see those first.

On whether or not the reservation lock is the right lock to use from 
dma-buf for this purpose I'll leave the other guys to comment - I am 
not fully into the details of the dma-buf design.

Regards,

Tvrtko
Daniel Vetter Aug. 14, 2018, 9:42 a.m. UTC | #4
On Tue, Aug 07, 2018 at 04:13:53PM +0100, Tvrtko Ursulin wrote:
> 
> On 07/08/2018 14:21, Daniel Vetter wrote:
> > On Wed, Jul 18, 2018 at 03:24:26PM +0200, Christian König wrote:
> > > Hi Daniel,
> > > 
> > > Am 18.07.2018 um 14:07 schrieb Patchwork:
> > > > == Series Details ==
> > > > 
> > > > Series: series starting with [1/4] dma-buf: add caching of sg_table
> > > > URL   : https://patchwork.freedesktop.org/series/46778/
> > > > State : failure
> > > > [SNIP]
> > > 
> > > it looks like I'm a step further understanding the problems which come with
> > > this change.
> > > 
> > > I've more or less audited all use cases and think that only i915 is left
> > > with the following lock inversion: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_9705/fi-cfl-8700k/igt@gem_mmap_gtt@basic-small-bo-tiledx.html
> > > 
> > > Now my question is what is &obj->mm.lock used for and why do you guys call
> > > dma_buf_map_attachment() while holding it?
> > 
> > obj->mm.lock is the lock to protect getting at the backing storage.
> > i915_gem_object_get_pages and _put_pages are the relevant functions.
> > 
> > Existing paths want to pin the backing storage while holding the
> > reservation lock. And your new path needs to do the inverse since
> > dma_buf_map_attachment now also requires the reservation lock. And that is
> > obviously called from within the dma-buf importer version of get_pages.
> > 
> > I think there's 2 solutions:
> > 
> > - Merge obj->mm.lock and the reservation lock. Probably the cleaner
> >    solution, but likely more work.
> > 
> > - Make sure the obj->mm.lock always nests within the reservation lock, and
> >    grab the reservation lock anywhere it's not yet grabbed. Then you can
> >    use the dma_buf_map_attachment_locked variant in
> >    i915_gem_object_get_pages_dmabuf to avoid the locking inversion. This
> >    would essentially make the obj->mm.lock fully redundant.
> > 
> > Either way is going to be quite a bit of work. I expect that you need to
> > replace all the cases of dma_buf_map_attachment in i915 with
> > dma_buf_map_attachment_locked, and adjust the entire callchain to the new
> > locking scheme.
> > 
> > The real trouble here imo is that i915 CI is just the canary, I expect a
> > bunch of other drivers will also look at an inverted locking hierarchy if
> > dma_buf_map_attachment needs the reservation lock. And there's no
> > convenient CI for them, and code audit won't cut it really (at least I'm
> > too stupid to keep the locking hierarchy of an entire driver in my head).
> 
> We chatted about this on #intel-gfx and concluded that either solution
> derives to replacing the obj->mm.lock with the reservation lock. And that is
> problematic for i915, both from the reason of a general direction towards
> more fine-grained locking, and also issue that reservation lock needs to be
> avoided under the shrinker path (we lock obj->mm.lock when dropping the
> backing store there).

So the way this works for other drivers (well, atm there are only TTM
ones) is that they try-lock the reservation lock when trying to evict
stuff from the shrinker. It would be good to elaborate on why we can't
use that approach ..

> I proposed that maybe we could re-jig how we use obj->mm.lock a bit, to
> ensure backing store vfunc (get_pages) is not called under it (although I
> haven't thought it fully through it may be possible without too significant
> drawbacks), but Chris also has some patches which may work around this in a
> different way. So I'll wait to see those first.

.. or is that Chris' work?

Honest question since I'm entirely out of the loop here on the i915 side.

> On whether or not reservation lock is the right lock to use from dma-buf for
> this purpose I'll leave other guys to comment - I am not fully into the
> details of dma-buf design.

If we want to do dynamic cross-device buffer management, which is
Christian's goal here, then we must somehow harmonize the locking schemes
used for buffer management, at least for backing storage handling. That's
the entire challenge of this undertaking, and the big trouble is that we
don't have a surplus of people who understand more than their own driver's
buffer management code and locking scheme. That's the information I'm
trying to help extract from all the involved parties here.

Cheers, Daniel

Patch

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index 13884474d158..0bea5eecf554 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -574,6 +574,20 @@  struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf,
 	list_add(&attach->node, &dmabuf->attachments);
 
 	mutex_unlock(&dmabuf->lock);
+
+	if (!dmabuf->ops->no_sgt_cache) {
+		struct sg_table *sgt;
+
+		sgt = dmabuf->ops->map_dma_buf(attach, DMA_BIDIRECTIONAL);
+		if (!sgt)
+			sgt = ERR_PTR(-ENOMEM);
+		if (IS_ERR(sgt)) {
+			dma_buf_detach(dmabuf, attach);
+			return ERR_CAST(sgt);
+		}
+		attach->sgt = sgt;
+	}
+
 	return attach;
 
 err_attach:
@@ -596,6 +610,10 @@  void dma_buf_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attach)
 	if (WARN_ON(!dmabuf || !attach))
 		return;
 
+	if (attach->sgt)
+		dmabuf->ops->unmap_dma_buf(attach, attach->sgt,
+					   DMA_BIDIRECTIONAL);
+
 	mutex_lock(&dmabuf->lock);
 	list_del(&attach->node);
 	if (dmabuf->ops->detach)
@@ -631,6 +649,9 @@  struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach,
 	if (WARN_ON(!attach || !attach->dmabuf))
 		return ERR_PTR(-EINVAL);
 
+	if (attach->sgt)
+		return attach->sgt;
+
 	sg_table = attach->dmabuf->ops->map_dma_buf(attach, direction);
 	if (!sg_table)
 		sg_table = ERR_PTR(-ENOMEM);
@@ -658,6 +679,9 @@  void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
 	if (WARN_ON(!attach || !attach->dmabuf || !sg_table))
 		return;
 
+	if (attach->sgt == sg_table)
+		return;
+
 	attach->dmabuf->ops->unmap_dma_buf(attach, sg_table,
 						direction);
 }
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index 58725f890b5b..6534a6769e17 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -51,6 +51,16 @@  struct dma_buf_attachment;
  * @vunmap: [optional] unmaps a vmap from the buffer
  */
 struct dma_buf_ops {
+	/**
+	 * @no_sgt_cache:
+	 *
+	 * Flag controlling the caching of the sg_table in the DMA-buf helpers.
+	 * If not set the sg_table is created during device attaching, if set
+	 * the sg_table is created dynamically when dma_buf_map_attachment() is
+	 * called.
+	 */
+	bool no_sgt_cache;
+
 	/**
 	 * @attach:
 	 *
@@ -323,6 +333,7 @@  struct dma_buf_attachment {
 	struct device *dev;
 	struct list_head node;
 	void *priv;
+	struct sg_table *sgt;
 };
 
 /**