
intel: Track known prime buffers for re-use

Message ID 1385127354-28573-1-git-send-email-keithp@keithp.com (mailing list archive)
State New, archived

Commit Message

Keith Packard Nov. 22, 2013, 1:35 p.m. UTC
If the application sends us a file descriptor pointing at a prime
buffer that we've already got, we have to re-use the same bo_gem
structure or chaos will result.

Track the set of all known prime objects and look to see if the kernel
has returned one of those for a new file descriptor.

Signed-off-by: Keith Packard <keithp@keithp.com>
---

This one took a while to find -- multiple bo_gem structs pointing at
the same gem handle would either cause the object to be destroyed
before we were done using it, or we'd end up sending the same
gem_handle for multiple buffers.

This looks a lot like the named object handling stuff, as one would
expect.
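[A minimal illustration of the failure mode, added for reference -- a hypothetical caller; bufmgr, prime_fd and size are assumed to come from the usual DRI3/compositor path and are not part of the patch:]

    /* uses <intel_bufmgr.h>; bufmgr, prime_fd and size declared elsewhere */
    drm_intel_bo *a = drm_intel_bo_gem_create_from_prime(bufmgr, prime_fd, size);
    drm_intel_bo *b = drm_intel_bo_gem_create_from_prime(bufmgr, prime_fd, size);

    /* Without this patch: a != b, but both wrap the same gem handle, so
     * unreferencing one can destroy the object the other still uses, and
     * putting both in one execbuf sends the kernel a duplicate handle.
     * With this patch: b == a, just with an extra reference taken. */
    drm_intel_bo_unreference(b);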

 intel/intel_bufmgr_gem.c | 22 +++++++++++++++++++++-
 1 file changed, 21 insertions(+), 1 deletion(-)

Comments

Daniel Vetter Nov. 25, 2013, 3:39 p.m. UTC | #1
On Fri, Nov 22, 2013 at 05:35:54AM -0800, Keith Packard wrote:
> If the application sends us a file descriptor pointing at a prime
> buffer that we've already got, we have to re-use the same bo_gem
> structure or chaos will result.
> 
> Track the set of all known prime objects and look to see if the kernel
> has returned one of those for a new file descriptor.
> 
> Signed-off-by: Keith Packard <keithp@keithp.com>
> ---
> 
> This one took a while to find -- multiple bo_gem structs pointing at
> the same gem handle would either cause the object to be destroyed
> before we were done using it, or we'd end up sending the same
> gem_handle for multiple buffers.
> 
> This looks a lot like the named object handling stuff, as one would
> expect.

Yeah, it unfortunately took a few rounds of kernel fixes and other
haggling to get the semantics right on this one. The kernel atm promises
to userspace (minus one bug in a racy corner case no one should care
about, still need to fix that one) that it'll return the same gem handle
if userspace already has one for the underlying object.

We need that to make sure userspace doesn't submit the same bo in execbuf
multiple times and then upset the kernel - we'll reject such batches as
userspace bugs.

I guess I should have reviewed userspace when doing the relevant kernel
fixes, shame on me. Two questions below.
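
[A minimal sketch of the handle-reuse semantics described above, assuming an already-open DRM fd and a valid dma-buf fd; error handling elided:]

    #include <assert.h>
    #include <stdint.h>
    #include <xf86drm.h>

    uint32_t h1, h2;
    /* Importing fds that refer to the same underlying dma-buf... */
    drmPrimeFDToHandle(drm_fd, prime_fd, &h1);
    drmPrimeFDToHandle(drm_fd, prime_fd, &h2);
    /* ...yields the same gem handle (modulo the racy corner case noted
     * above), so the bufmgr must map that handle back to a single bo_gem. */
    assert(h1 == h2);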

> 
>  intel/intel_bufmgr_gem.c | 22 +++++++++++++++++++++-
>  1 file changed, 21 insertions(+), 1 deletion(-)
> 
> diff --git a/intel/intel_bufmgr_gem.c b/intel/intel_bufmgr_gem.c
> index df6fcec..2897bb2 100644
> --- a/intel/intel_bufmgr_gem.c
> +++ b/intel/intel_bufmgr_gem.c
> @@ -112,6 +112,7 @@ typedef struct _drm_intel_bufmgr_gem {
>  
>  	drmMMListHead named;
>  	drmMMListHead vma_cache;
> +        drmMMListHead prime;
>  	int vma_count, vma_open, vma_max;
>  
>  	uint64_t gtt_size;
> @@ -2451,8 +2452,25 @@ drm_intel_bo_gem_create_from_prime(drm_intel_bufmgr *bufmgr, int prime_fd, int s
>  	uint32_t handle;
>  	drm_intel_bo_gem *bo_gem;
>  	struct drm_i915_gem_get_tiling get_tiling;
> +	drmMMListHead *list;
>  
>  	ret = drmPrimeFDToHandle(bufmgr_gem->fd, prime_fd, &handle);
> +
> +	/*
> +	 * See if the kernel has already returned this buffer to us. Just as
> +	 * for named buffers, we must not create two bo's pointing at the same
> +	 * kernel object
> +	 */
> +	for (list = bufmgr_gem->prime.next;
> +	     list != &bufmgr_gem->prime;
> +	     list = list->next) {
> +		bo_gem = DRMLISTENTRY(drm_intel_bo_gem, list, name_list);
> +		if (bo_gem->gem_handle == handle) {
> +			drm_intel_gem_bo_reference(&bo_gem->bo);
> +			return &bo_gem->bo;
> +		}
> +	}
> +
>  	if (ret) {
>  	  fprintf(stderr,"ret is %d %d\n", ret, errno);
>  		return NULL;
> @@ -2487,8 +2505,8 @@ drm_intel_bo_gem_create_from_prime(drm_intel_bufmgr *bufmgr, int prime_fd, int s
>  	bo_gem->has_error = false;
>  	bo_gem->reusable = false;
>  
> -	DRMINITLISTHEAD(&bo_gem->name_list);
>  	DRMINITLISTHEAD(&bo_gem->vma_list);
> +	DRMLISTADDTAIL(&bo_gem->name_list, &bufmgr_gem->prime);

Won't this result in us having fun when a buffer is both imported from a
prime buffer and then also used with legacy flink? Or is this something
the X server won't support?

The second one is about exporting: With flink names we also add the name
to the lookup list in drm_intel_gem_bo_flink. I think we should do the
same for exported prime buffers just as a precaution - the kernel will
return the (existing) gem name also for a prime buffer that has been
exported by yourself. I guess that would imply insane userspace, but
better safe than sorry.
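
[Something along these lines, patterned on drm_intel_gem_bo_flink -- a sketch only; the DRMLISTEMPTY membership test and the error handling are illustrative assumptions, not part of the posted patch, and Keith's reply below folds the prime list back into the named list:]

    int
    drm_intel_bo_gem_export_to_prime(drm_intel_bo *bo, int *prime_fd)
    {
        drm_intel_bufmgr_gem *bufmgr_gem = (drm_intel_bufmgr_gem *) bo->bufmgr;
        drm_intel_bo_gem *bo_gem = (drm_intel_bo_gem *) bo;

        if (drmPrimeHandleToFD(bufmgr_gem->fd, bo_gem->gem_handle,
                               DRM_CLOEXEC, prime_fd))
            return -errno;

        /* Make a later FD-to-handle import of this buffer find the
         * existing bo_gem instead of creating a second wrapper. */
        if (DRMLISTEMPTY(&bo_gem->name_list))
            DRMLISTADDTAIL(&bo_gem->name_list, &bufmgr_gem->prime);

        return 0;
    }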

Cheers, Daniel

>  
>  	VG_CLEAR(get_tiling);
>  	get_tiling.handle = bo_gem->gem_handle;
> @@ -3301,5 +3319,7 @@ drm_intel_bufmgr_gem_init(int fd, int batch_size)
>  	DRMINITLISTHEAD(&bufmgr_gem->vma_cache);
>  	bufmgr_gem->vma_max = -1; /* unlimited by default */
>  
> +	DRMINITLISTHEAD(&bufmgr_gem->prime);
> +
>  	return &bufmgr_gem->bufmgr;
>  }
> -- 
> 1.8.4.3
> 
Keith Packard Nov. 25, 2013, 9:21 p.m. UTC | #2
Daniel Vetter <daniel@ffwll.ch> writes:

> Yeah, it unfortunately took a few rounds of kernel fixes and other
> haggling to get the semantics right on this one. The kernel atm promises
> to userspace (minus one bug in a racy corner case no one should care
> about, still need to fix that one) that it'll return the same gem handle
> if userspace already has one for the underlying object.

That's definitely something we want it to do -- returning different
handles to the same object would result in madness. We just need to deal with the
userspace consequences.

> We need that to make sure userspace doesn't submit the same bo in execbuf
> multiple times and then upset the kernel - we'll reject such batches as
> userspace bugs.

Oh, I'm well aware of that; you can imagine the adventures I had trying
to debug this...

>> -	DRMINITLISTHEAD(&bo_gem->name_list);
>>  	DRMINITLISTHEAD(&bo_gem->vma_list);
>> +	DRMLISTADDTAIL(&bo_gem->name_list, &bufmgr_gem->prime);
>
> Won't this result in us having fun when a buffer is both imported from a
> prime buffer and then also used with legacy flink? Or is this something
> the X server won't support?

Well, the whole point of prime-based FD buffer passing is to *not* use
flink, of course. However, you could use both DRI2 and DRI3 on
the same pixmap (presumably through different APIs).

Ok, I just tried to create a completely separate prime list for this,
and I think that's wrong. If the question is whether the kernel might
return the same object from two calls, then we'd best actually keep a
single list and look things up for both APIs there. *And*, I think we
need to do the flink->gem handle conversion and then look in the list
again to see if that gem handle was already returned from another flink
or prime fd.
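
[Roughly, in drm_intel_bo_gem_create_from_name, after DRM_IOCTL_GEM_OPEN has turned the flink name into a gem handle -- a sketch of the idea only, reusing that function's existing variables; the actual code is in the replacement patch:]

    /* The flink name was unknown, but the gem handle it resolves to may
     * already be tracked, e.g. from an earlier prime-fd import of the
     * same underlying object. */
    for (list = bufmgr_gem->named.next;
         list != &bufmgr_gem->named;
         list = list->next) {
        bo_gem = DRMLISTENTRY(drm_intel_bo_gem, list, name_list);
        if (bo_gem->gem_handle == open_arg.handle) {
            drm_intel_gem_bo_reference(&bo_gem->bo);
            return &bo_gem->bo;
        }
    }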

> The second one is about exporting: With flink names we also add the name
> to the lookup list in drm_intel_gem_bo_flink. I think we should do the
> same for exported prime buffers just as a precaution - the kernel will
> return the (existing) gem name also for a prime buffer that has been
> exported by yourself. I guess that would imply insane userspace, but
> better safe than sorry.

Yeah, that would seem like crazy user-space behaviour, but user space
often seems insane.

Thanks for your review; replacement patch to follow shortly.

Patch

diff --git a/intel/intel_bufmgr_gem.c b/intel/intel_bufmgr_gem.c
index df6fcec..2897bb2 100644
--- a/intel/intel_bufmgr_gem.c
+++ b/intel/intel_bufmgr_gem.c
@@ -112,6 +112,7 @@  typedef struct _drm_intel_bufmgr_gem {
 
 	drmMMListHead named;
 	drmMMListHead vma_cache;
+        drmMMListHead prime;
 	int vma_count, vma_open, vma_max;
 
 	uint64_t gtt_size;
@@ -2451,8 +2452,25 @@  drm_intel_bo_gem_create_from_prime(drm_intel_bufmgr *bufmgr, int prime_fd, int s
 	uint32_t handle;
 	drm_intel_bo_gem *bo_gem;
 	struct drm_i915_gem_get_tiling get_tiling;
+	drmMMListHead *list;
 
 	ret = drmPrimeFDToHandle(bufmgr_gem->fd, prime_fd, &handle);
+
+	/*
+	 * See if the kernel has already returned this buffer to us. Just as
+	 * for named buffers, we must not create two bo's pointing at the same
+	 * kernel object
+	 */
+	for (list = bufmgr_gem->prime.next;
+	     list != &bufmgr_gem->prime;
+	     list = list->next) {
+		bo_gem = DRMLISTENTRY(drm_intel_bo_gem, list, name_list);
+		if (bo_gem->gem_handle == handle) {
+			drm_intel_gem_bo_reference(&bo_gem->bo);
+			return &bo_gem->bo;
+		}
+	}
+
 	if (ret) {
 	  fprintf(stderr,"ret is %d %d\n", ret, errno);
 		return NULL;
@@ -2487,8 +2505,8 @@  drm_intel_bo_gem_create_from_prime(drm_intel_bufmgr *bufmgr, int prime_fd, int s
 	bo_gem->has_error = false;
 	bo_gem->reusable = false;
 
-	DRMINITLISTHEAD(&bo_gem->name_list);
 	DRMINITLISTHEAD(&bo_gem->vma_list);
+	DRMLISTADDTAIL(&bo_gem->name_list, &bufmgr_gem->prime);
 
 	VG_CLEAR(get_tiling);
 	get_tiling.handle = bo_gem->gem_handle;
@@ -3301,5 +3319,7 @@  drm_intel_bufmgr_gem_init(int fd, int batch_size)
 	DRMINITLISTHEAD(&bufmgr_gem->vma_cache);
 	bufmgr_gem->vma_max = -1; /* unlimited by default */
 
+	DRMINITLISTHEAD(&bufmgr_gem->prime);
+
 	return &bufmgr_gem->bufmgr;
 }