
drm/i915: Implement SINGLE_TIMELINE with a syncobj (v2)

Message ID 20210323175149.3390801-1-jason@jlekstrand.net (mailing list archive)
State New, archived
Series drm/i915: Implement SINGLE_TIMELINE with a syncobj (v2)

Commit Message

Jason Ekstrand March 23, 2021, 5:51 p.m. UTC
This API is entirely unnecessary and I'd love to get rid of it.  If
userspace wants a single timeline across multiple contexts, they can
either use implicit synchronization or a syncobj, both of which existed
at the time this feature landed.  The justification given at the time
was that it would help GL drivers which are inherently single-timeline.
However, neither of our GL drivers actually wanted the feature.  i965
was already in maintenance mode at the time and iris uses syncobj for
everything.
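
(A sketch of that userspace-side alternative, not part of this patch: one
syncobj used as both wait and signal fence on every execbuf through the
existing fence-array uAPI.  The struct, flag and ioctl names are the real
drm/i915 ones; timeline_syncobj and the surrounding execbuf setup are
illustrative.)

        struct drm_i915_gem_exec_fence shared = {
                .handle = timeline_syncobj,     /* one syncobj shared by all contexts */
                .flags  = I915_EXEC_FENCE_WAIT | I915_EXEC_FENCE_SIGNAL,
        };

        execbuf.flags |= I915_EXEC_FENCE_ARRAY;
        execbuf.cliprects_ptr = (uintptr_t)&shared;     /* reused as the fence-array pointer */
        execbuf.num_cliprects = 1;
        /* DRM_IOCTL_I915_GEM_EXECBUFFER2 then orders this submit after the
         * previous one and makes its fence the new timeline point. */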

Unfortunately, as much as I'd love to get rid of it, it is used by the
media driver so we can't do that.  We can, however, do the next-best
thing which is to embed a syncobj in the context and do exactly what
we'd expect from userspace internally.  This isn't an entirely identical
implementation because it's no longer atomic if userspace races with
itself by calling execbuffer2 twice simultaneously from different
threads.  It won't crash in that case; it just doesn't guarantee any
ordering between those two submits.
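
(Condensed from the execbuffer hunks in the diff below, with the error
handling and unlikely() annotations dropped.  It also shows where the
non-atomicity comes from: two racing execbuffer2 calls can both fetch the
same old fence before either installs its replacement.)

        if (eb.gem_context->syncobj) {
                struct dma_fence *prev;

                prev = drm_syncobj_fence_get(eb.gem_context->syncobj);
                err = i915_request_await_dma_fence(eb.request, prev);  /* wait for the previous submit */
                dma_fence_put(prev);
        }

        /* ... the request is built and submitted ... */

        if (eb.gem_context->syncobj)
                drm_syncobj_replace_fence(eb.gem_context->syncobj,
                                          &eb.request->fence);  /* become the new timeline point */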

Moving SINGLE_TIMELINE to a syncobj emulation has a couple of technical
advantages beyond mere annoyance.  One is that intel_timeline is no
longer an api-visible object and can remain entirely an implementation
detail.  This may be advantageous as we make scheduler changes going
forward.  Second is that, together with deleting the CLONE_CONTEXT API,
we should now have a 1:1 mapping between intel_context and
intel_timeline which may help us reduce locking.

v2 (Jason Ekstrand):
 - Update the comment on i915_gem_context::syncobj to mention that it's
   an emulation and the possible race if userspace calls execbuffer2
   twice on the same context concurrently.
 - Wrap the checks for eb.gem_context->syncobj in unlikely()
 - Drop the dma_fence reference
 - Improved commit message

Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_context.c   | 47 ++++---------------
 .../gpu/drm/i915/gem/i915_gem_context_types.h | 14 +++++-
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c    | 16 +++++++
 3 files changed, 39 insertions(+), 38 deletions(-)

Comments

Tvrtko Ursulin March 24, 2021, 9:28 a.m. UTC | #1
On 23/03/2021 17:51, Jason Ekstrand wrote:
> This API is entirely unnecessary and I'd love to get rid of it.  If
> userspace wants a single timeline across multiple contexts, they can
> either use implicit synchronization or a syncobj, both of which existed
> at the time this feature landed.  The justification given at the time
> was that it would help GL drivers which are inherently single-timeline.
> However, neither of our GL drivers actually wanted the feature.  i965
> was already in maintenance mode at the time and iris uses syncobj for
> everything.
> 
> Unfortunately, as much as I'd love to get rid of it, it is used by the
> media driver so we can't do that.  We can, however, do the next-best
> thing which is to embed a syncobj in the context and do exactly what
> we'd expect from userspace internally.  This isn't an entirely identical
> implementation because it's no longer atomic if userspace races with
> itself by calling execbuffer2 twice simultaneously from different
> threads.  It won't crash in that case; it just doesn't guarantee any
> ordering between those two submits.
> 
> Moving SINGLE_TIMELINE to a syncobj emulation has a couple of technical
> advantages beyond mere annoyance.  One is that intel_timeline is no
> longer an api-visible object and can remain entirely an implementation
> detail.  This may be advantageous as we make scheduler changes going
> forward.  Second is that, together with deleting the CLONE_CONTEXT API,
> we should now have a 1:1 mapping between intel_context and
> intel_timeline which may help us reduce locking.

Much, much better commit message, although I still fail to understand 
where you see implementation details leaking out. So for me this is 
still something I'd like to get to the bottom of.

I would also mention the difference regarding fence context change.

And in general I would maintain this patch as part of a series which 
ends up demonstrating the "mays" and "shoulds".

> 
> v2 (Jason Ekstrand):
>   - Update the comment on i915_gem_context::syncobj to mention that it's
>     an emulation and the possible race if userspace calls execbuffer2
>     twice on the same context concurrently.
>   - Wrap the checks for eb.gem_context->syncobj in unlikely()
>   - Drop the dma_fence reference
>   - Improved commit message
> 
> Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> ---
>   drivers/gpu/drm/i915/gem/i915_gem_context.c   | 47 ++++---------------
>   .../gpu/drm/i915/gem/i915_gem_context_types.h | 14 +++++-
>   .../gpu/drm/i915/gem/i915_gem_execbuffer.c    | 16 +++++++
>   3 files changed, 39 insertions(+), 38 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> index f88bac19333ec..e094f4a1ca4cd 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> @@ -67,6 +67,8 @@
>   #include <linux/log2.h>
>   #include <linux/nospec.h>
>   
> +#include <drm/drm_syncobj.h>
> +
>   #include "gt/gen6_ppgtt.h"
>   #include "gt/intel_context.h"
>   #include "gt/intel_engine_heartbeat.h"
> @@ -224,10 +226,6 @@ static void intel_context_set_gem(struct intel_context *ce,
>   		ce->vm = vm;
>   	}
>   
> -	GEM_BUG_ON(ce->timeline);
> -	if (ctx->timeline)
> -		ce->timeline = intel_timeline_get(ctx->timeline);
> -
>   	if (ctx->sched.priority >= I915_PRIORITY_NORMAL &&
>   	    intel_engine_has_timeslices(ce->engine))
>   		__set_bit(CONTEXT_USE_SEMAPHORES, &ce->flags);
> @@ -344,8 +342,8 @@ void i915_gem_context_release(struct kref *ref)
>   	mutex_destroy(&ctx->engines_mutex);
>   	mutex_destroy(&ctx->lut_mutex);
>   
> -	if (ctx->timeline)
> -		intel_timeline_put(ctx->timeline);
> +	if (ctx->syncobj)
> +		drm_syncobj_put(ctx->syncobj);
>   
>   	put_pid(ctx->pid);
>   	mutex_destroy(&ctx->mutex);
> @@ -790,33 +788,11 @@ static void __assign_ppgtt(struct i915_gem_context *ctx,
>   		i915_vm_close(vm);
>   }
>   
> -static void __set_timeline(struct intel_timeline **dst,
> -			   struct intel_timeline *src)
> -{
> -	struct intel_timeline *old = *dst;
> -
> -	*dst = src ? intel_timeline_get(src) : NULL;
> -
> -	if (old)
> -		intel_timeline_put(old);
> -}
> -
> -static void __apply_timeline(struct intel_context *ce, void *timeline)
> -{
> -	__set_timeline(&ce->timeline, timeline);
> -}
> -
> -static void __assign_timeline(struct i915_gem_context *ctx,
> -			      struct intel_timeline *timeline)
> -{
> -	__set_timeline(&ctx->timeline, timeline);
> -	context_apply_all(ctx, __apply_timeline, timeline);
> -}
> -
>   static struct i915_gem_context *
>   i915_gem_create_context(struct drm_i915_private *i915, unsigned int flags)
>   {
>   	struct i915_gem_context *ctx;
> +	int ret;
>   
>   	if (flags & I915_CONTEXT_CREATE_FLAGS_SINGLE_TIMELINE &&
>   	    !HAS_EXECLISTS(i915))
> @@ -845,16 +821,13 @@ i915_gem_create_context(struct drm_i915_private *i915, unsigned int flags)
>   	}
>   
>   	if (flags & I915_CONTEXT_CREATE_FLAGS_SINGLE_TIMELINE) {
> -		struct intel_timeline *timeline;
> -
> -		timeline = intel_timeline_create(&i915->gt);
> -		if (IS_ERR(timeline)) {
> +		ret = drm_syncobj_create(&ctx->syncobj,
> +					 DRM_SYNCOBJ_CREATE_SIGNALED,
> +					 NULL);
> +		if (ret) {
>   			context_close(ctx);
> -			return ERR_CAST(timeline);
> +			return ERR_PTR(ret);
>   		}
> -
> -		__assign_timeline(ctx, timeline);
> -		intel_timeline_put(timeline);
>   	}
>   
>   	trace_i915_context_create(ctx);
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
> index 676592e27e7d2..df76767f0c41b 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
> @@ -83,7 +83,19 @@ struct i915_gem_context {
>   	struct i915_gem_engines __rcu *engines;
>   	struct mutex engines_mutex; /* guards writes to engines */
>   
> -	struct intel_timeline *timeline;
> +	/**
> +	 * @syncobj: Shared timeline syncobj
> +	 *
> +	 * When the SHARED_TIMELINE flag is set on context creation, we
> +	 * emulate a single timeline across all engines using this syncobj.
> +	 * For every execbuffer2 call, this syncobj is used as both an in-
> +	 * and out-fence.  Unlike the real intel_timeline, this doesn't
> +	 * provide perfect atomic in-order guarantees if the client races
> +	 * with itself by calling execbuffer2 twice concurrently.  However,
> +	 * if userspace races with itself, that's not likely to yield well-
> +	 * defined results anyway so we choose to not care.
> +	 */
> +	struct drm_syncobj *syncobj;
>   
>   	/**
>   	 * @vm: unique address space (GTT)
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> index 96403130a373d..2e9748c1edddf 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> @@ -3295,6 +3295,16 @@ i915_gem_do_execbuffer(struct drm_device *dev,
>   		goto err_vma;
>   	}
>   
> +	if (unlikely(eb.gem_context->syncobj)) {
> +		struct dma_fence *fence;
> +
> +		fence = drm_syncobj_fence_get(eb.gem_context->syncobj);
> +		err = i915_request_await_dma_fence(eb.request, fence);
> +		if (err)
> +			goto err_ext;
> +		dma_fence_put(fence);

I think put goes before the error bail.

> +	}
> +
>   	if (in_fence) {
>   		if (args->flags & I915_EXEC_FENCE_SUBMIT)
>   			err = i915_request_await_execution(eb.request,
> @@ -3351,6 +3361,12 @@ i915_gem_do_execbuffer(struct drm_device *dev,
>   			fput(out_fence->file);
>   		}
>   	}
> +
> +	if (unlikely(eb.gem_context->syncobj)) {
> +		drm_syncobj_replace_fence(eb.gem_context->syncobj,
> +					  &eb.request->fence);
> +	}
> +
>   	i915_request_put(eb.request);
>   
>   err_vma:
> 

Regards,

Tvrtko
Daniel Vetter March 24, 2021, 9:46 a.m. UTC | #2
On Tue, Mar 23, 2021 at 12:51:49PM -0500, Jason Ekstrand wrote:
> This API is entirely unnecessary and I'd love to get rid of it.  If
> userspace wants a single timeline across multiple contexts, they can
> either use implicit synchronization or a syncobj, both of which existed
> at the time this feature landed.  The justification given at the time
> was that it would help GL drivers which are inherently single-timeline.
> However, neither of our GL drivers actually wanted the feature.  i965
> was already in maintenance mode at the time and iris uses syncobj for
> everything.
> 
> Unfortunately, as much as I'd love to get rid of it, it is used by the
> media driver so we can't do that.  We can, however, do the next-best
> thing which is to embed a syncobj in the context and do exactly what
> we'd expect from userspace internally.  This isn't an entirely identical
> implementation because it's no longer atomic if userspace races with
> itself by calling execbuffer2 twice simultaneously from different
> threads.  It won't crash in that case; it just doesn't guarantee any
> ordering between those two submits.
> 
> Moving SINGLE_TIMELINE to a syncobj emulation has a couple of technical
> advantages beyond mere annoyance.  One is that intel_timeline is no
> longer an api-visible object and can remain entirely an implementation
> detail.  This may be advantageous as we make scheduler changes going
> forward.  Second is that, together with deleting the CLONE_CONTEXT API,
> we should now have a 1:1 mapping between intel_context and
> intel_timeline which may help us reduce locking.

Yeah I think this captures everything we need to say here.

Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>

No full r-b because I have a pile of other things to do too.
-Daniel

> 
> v2 (Jason Ekstrand):
>  - Update the comment on i915_gem_context::syncobj to mention that it's
>    an emulation and the possible race if userspace calls execbuffer2
>    twice on the same context concurrently.
>  - Wrap the checks for eb.gem_context->syncobj in unlikely()
>  - Drop the dma_fence reference
>  - Improved commit message
> 
> Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> ---
>  drivers/gpu/drm/i915/gem/i915_gem_context.c   | 47 ++++---------------
>  .../gpu/drm/i915/gem/i915_gem_context_types.h | 14 +++++-
>  .../gpu/drm/i915/gem/i915_gem_execbuffer.c    | 16 +++++++
>  3 files changed, 39 insertions(+), 38 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> index f88bac19333ec..e094f4a1ca4cd 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> @@ -67,6 +67,8 @@
>  #include <linux/log2.h>
>  #include <linux/nospec.h>
>  
> +#include <drm/drm_syncobj.h>
> +
>  #include "gt/gen6_ppgtt.h"
>  #include "gt/intel_context.h"
>  #include "gt/intel_engine_heartbeat.h"
> @@ -224,10 +226,6 @@ static void intel_context_set_gem(struct intel_context *ce,
>  		ce->vm = vm;
>  	}
>  
> -	GEM_BUG_ON(ce->timeline);
> -	if (ctx->timeline)
> -		ce->timeline = intel_timeline_get(ctx->timeline);
> -
>  	if (ctx->sched.priority >= I915_PRIORITY_NORMAL &&
>  	    intel_engine_has_timeslices(ce->engine))
>  		__set_bit(CONTEXT_USE_SEMAPHORES, &ce->flags);
> @@ -344,8 +342,8 @@ void i915_gem_context_release(struct kref *ref)
>  	mutex_destroy(&ctx->engines_mutex);
>  	mutex_destroy(&ctx->lut_mutex);
>  
> -	if (ctx->timeline)
> -		intel_timeline_put(ctx->timeline);
> +	if (ctx->syncobj)
> +		drm_syncobj_put(ctx->syncobj);
>  
>  	put_pid(ctx->pid);
>  	mutex_destroy(&ctx->mutex);
> @@ -790,33 +788,11 @@ static void __assign_ppgtt(struct i915_gem_context *ctx,
>  		i915_vm_close(vm);
>  }
>  
> -static void __set_timeline(struct intel_timeline **dst,
> -			   struct intel_timeline *src)
> -{
> -	struct intel_timeline *old = *dst;
> -
> -	*dst = src ? intel_timeline_get(src) : NULL;
> -
> -	if (old)
> -		intel_timeline_put(old);
> -}
> -
> -static void __apply_timeline(struct intel_context *ce, void *timeline)
> -{
> -	__set_timeline(&ce->timeline, timeline);
> -}
> -
> -static void __assign_timeline(struct i915_gem_context *ctx,
> -			      struct intel_timeline *timeline)
> -{
> -	__set_timeline(&ctx->timeline, timeline);
> -	context_apply_all(ctx, __apply_timeline, timeline);
> -}
> -
>  static struct i915_gem_context *
>  i915_gem_create_context(struct drm_i915_private *i915, unsigned int flags)
>  {
>  	struct i915_gem_context *ctx;
> +	int ret;
>  
>  	if (flags & I915_CONTEXT_CREATE_FLAGS_SINGLE_TIMELINE &&
>  	    !HAS_EXECLISTS(i915))
> @@ -845,16 +821,13 @@ i915_gem_create_context(struct drm_i915_private *i915, unsigned int flags)
>  	}
>  
>  	if (flags & I915_CONTEXT_CREATE_FLAGS_SINGLE_TIMELINE) {
> -		struct intel_timeline *timeline;
> -
> -		timeline = intel_timeline_create(&i915->gt);
> -		if (IS_ERR(timeline)) {
> +		ret = drm_syncobj_create(&ctx->syncobj,
> +					 DRM_SYNCOBJ_CREATE_SIGNALED,
> +					 NULL);
> +		if (ret) {
>  			context_close(ctx);
> -			return ERR_CAST(timeline);
> +			return ERR_PTR(ret);
>  		}
> -
> -		__assign_timeline(ctx, timeline);
> -		intel_timeline_put(timeline);
>  	}
>  
>  	trace_i915_context_create(ctx);
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
> index 676592e27e7d2..df76767f0c41b 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
> @@ -83,7 +83,19 @@ struct i915_gem_context {
>  	struct i915_gem_engines __rcu *engines;
>  	struct mutex engines_mutex; /* guards writes to engines */
>  
> -	struct intel_timeline *timeline;
> +	/**
> +	 * @syncobj: Shared timeline syncobj
> +	 *
> +	 * When the SHARED_TIMELINE flag is set on context creation, we
> +	 * emulate a single timeline across all engines using this syncobj.
> +	 * For every execbuffer2 call, this syncobj is used as both an in-
> +	 * and out-fence.  Unlike the real intel_timeline, this doesn't
> +	 * provide perfect atomic in-order guarantees if the client races
> +	 * with itself by calling execbuffer2 twice concurrently.  However,
> +	 * if userspace races with itself, that's not likely to yield well-
> +	 * defined results anyway so we choose to not care.
> +	 */
> +	struct drm_syncobj *syncobj;
>  
>  	/**
>  	 * @vm: unique address space (GTT)
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> index 96403130a373d..2e9748c1edddf 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> @@ -3295,6 +3295,16 @@ i915_gem_do_execbuffer(struct drm_device *dev,
>  		goto err_vma;
>  	}
>  
> +	if (unlikely(eb.gem_context->syncobj)) {
> +		struct dma_fence *fence;
> +
> +		fence = drm_syncobj_fence_get(eb.gem_context->syncobj);
> +		err = i915_request_await_dma_fence(eb.request, fence);
> +		if (err)
> +			goto err_ext;
> +		dma_fence_put(fence);
> +	}
> +
>  	if (in_fence) {
>  		if (args->flags & I915_EXEC_FENCE_SUBMIT)
>  			err = i915_request_await_execution(eb.request,
> @@ -3351,6 +3361,12 @@ i915_gem_do_execbuffer(struct drm_device *dev,
>  			fput(out_fence->file);
>  		}
>  	}
> +
> +	if (unlikely(eb.gem_context->syncobj)) {
> +		drm_syncobj_replace_fence(eb.gem_context->syncobj,
> +					  &eb.request->fence);
> +	}
> +
>  	i915_request_put(eb.request);
>  
>  err_vma:
> -- 
> 2.29.2
> 
Daniel Vetter March 24, 2021, 9:52 a.m. UTC | #3
On Wed, Mar 24, 2021 at 09:28:58AM +0000, Tvrtko Ursulin wrote:
> 
> On 23/03/2021 17:51, Jason Ekstrand wrote:
> > This API is entirely unnecessary and I'd love to get rid of it.  If
> > userspace wants a single timeline across multiple contexts, they can
> > either use implicit synchronization or a syncobj, both of which existed
> > at the time this feature landed.  The justification given at the time
> > was that it would help GL drivers which are inherently single-timeline.
> > However, neither of our GL drivers actually wanted the feature.  i965
> > was already in maintenance mode at the time and iris uses syncobj for
> > everything.
> > 
> > Unfortunately, as much as I'd love to get rid of it, it is used by the
> > media driver so we can't do that.  We can, however, do the next-best
> > thing which is to embed a syncobj in the context and do exactly what
> > we'd expect from userspace internally.  This isn't an entirely identical
> > implementation because it's no longer atomic if userspace races with
> > itself by calling execbuffer2 twice simultaneously from different
> > threads.  It won't crash in that case; it just doesn't guarantee any
> > ordering between those two submits.
> > 
> > Moving SINGLE_TIMELINE to a syncobj emulation has a couple of technical
> > advantages beyond mere annoyance.  One is that intel_timeline is no
> > longer an api-visible object and can remain entirely an implementation
> > detail.  This may be advantageous as we make scheduler changes going
> > forward.  Second is that, together with deleting the CLONE_CONTEXT API,
> > we should now have a 1:1 mapping between intel_context and
> > intel_timeline which may help us reduce locking.
> 
> Much, much better commit message although I still fail to understand where
> do you see implementation details leaking out. So for me this is still
> something I'd like to get to the bottom of.
> 
> I would also mention the difference regarding fence context change.
> 
> And in general I would maintain this patch as part of a series which ends up
> demonstrating the "mays" and "shoulds".

I disagree. The past few years we've merged way too many patches and
features without carefully answering the high level questions of
- do we really need to solve this problem
- and if so, are we really solving this problem in the right place

Now we're quite in a hole, and we're not going to get out of this hole if
we keep applying the same standards that got us here. Anything that does
not clearly and without reservation answer the above two questions with
"yes" needs to be removed or walled off, just so we can eventually see
which complexity we really need, and what is actually superfluous.

Especially when the kernel patch is this simple.
-Daniel

> 
> > 
> > v2 (Jason Ekstrand):
> >   - Update the comment on i915_gem_context::syncobj to mention that it's
> >     an emulation and the possible race if userspace calls execbuffer2
> >     twice on the same context concurrently.
> >   - Wrap the checks for eb.gem_context->syncobj in unlikely()
> >   - Drop the dma_fence reference
> >   - Improved commit message
> > 
> > Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
> > Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> > Cc: Matthew Brost <matthew.brost@intel.com>
> > ---
> >   drivers/gpu/drm/i915/gem/i915_gem_context.c   | 47 ++++---------------
> >   .../gpu/drm/i915/gem/i915_gem_context_types.h | 14 +++++-
> >   .../gpu/drm/i915/gem/i915_gem_execbuffer.c    | 16 +++++++
> >   3 files changed, 39 insertions(+), 38 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> > index f88bac19333ec..e094f4a1ca4cd 100644
> > --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
> > +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> > @@ -67,6 +67,8 @@
> >   #include <linux/log2.h>
> >   #include <linux/nospec.h>
> > +#include <drm/drm_syncobj.h>
> > +
> >   #include "gt/gen6_ppgtt.h"
> >   #include "gt/intel_context.h"
> >   #include "gt/intel_engine_heartbeat.h"
> > @@ -224,10 +226,6 @@ static void intel_context_set_gem(struct intel_context *ce,
> >   		ce->vm = vm;
> >   	}
> > -	GEM_BUG_ON(ce->timeline);
> > -	if (ctx->timeline)
> > -		ce->timeline = intel_timeline_get(ctx->timeline);
> > -
> >   	if (ctx->sched.priority >= I915_PRIORITY_NORMAL &&
> >   	    intel_engine_has_timeslices(ce->engine))
> >   		__set_bit(CONTEXT_USE_SEMAPHORES, &ce->flags);
> > @@ -344,8 +342,8 @@ void i915_gem_context_release(struct kref *ref)
> >   	mutex_destroy(&ctx->engines_mutex);
> >   	mutex_destroy(&ctx->lut_mutex);
> > -	if (ctx->timeline)
> > -		intel_timeline_put(ctx->timeline);
> > +	if (ctx->syncobj)
> > +		drm_syncobj_put(ctx->syncobj);
> >   	put_pid(ctx->pid);
> >   	mutex_destroy(&ctx->mutex);
> > @@ -790,33 +788,11 @@ static void __assign_ppgtt(struct i915_gem_context *ctx,
> >   		i915_vm_close(vm);
> >   }
> > -static void __set_timeline(struct intel_timeline **dst,
> > -			   struct intel_timeline *src)
> > -{
> > -	struct intel_timeline *old = *dst;
> > -
> > -	*dst = src ? intel_timeline_get(src) : NULL;
> > -
> > -	if (old)
> > -		intel_timeline_put(old);
> > -}
> > -
> > -static void __apply_timeline(struct intel_context *ce, void *timeline)
> > -{
> > -	__set_timeline(&ce->timeline, timeline);
> > -}
> > -
> > -static void __assign_timeline(struct i915_gem_context *ctx,
> > -			      struct intel_timeline *timeline)
> > -{
> > -	__set_timeline(&ctx->timeline, timeline);
> > -	context_apply_all(ctx, __apply_timeline, timeline);
> > -}
> > -
> >   static struct i915_gem_context *
> >   i915_gem_create_context(struct drm_i915_private *i915, unsigned int flags)
> >   {
> >   	struct i915_gem_context *ctx;
> > +	int ret;
> >   	if (flags & I915_CONTEXT_CREATE_FLAGS_SINGLE_TIMELINE &&
> >   	    !HAS_EXECLISTS(i915))
> > @@ -845,16 +821,13 @@ i915_gem_create_context(struct drm_i915_private *i915, unsigned int flags)
> >   	}
> >   	if (flags & I915_CONTEXT_CREATE_FLAGS_SINGLE_TIMELINE) {
> > -		struct intel_timeline *timeline;
> > -
> > -		timeline = intel_timeline_create(&i915->gt);
> > -		if (IS_ERR(timeline)) {
> > +		ret = drm_syncobj_create(&ctx->syncobj,
> > +					 DRM_SYNCOBJ_CREATE_SIGNALED,
> > +					 NULL);
> > +		if (ret) {
> >   			context_close(ctx);
> > -			return ERR_CAST(timeline);
> > +			return ERR_PTR(ret);
> >   		}
> > -
> > -		__assign_timeline(ctx, timeline);
> > -		intel_timeline_put(timeline);
> >   	}
> >   	trace_i915_context_create(ctx);
> > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
> > index 676592e27e7d2..df76767f0c41b 100644
> > --- a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
> > +++ b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
> > @@ -83,7 +83,19 @@ struct i915_gem_context {
> >   	struct i915_gem_engines __rcu *engines;
> >   	struct mutex engines_mutex; /* guards writes to engines */
> > -	struct intel_timeline *timeline;
> > +	/**
> > +	 * @syncobj: Shared timeline syncobj
> > +	 *
> > +	 * When the SHARED_TIMELINE flag is set on context creation, we
> > +	 * emulate a single timeline across all engines using this syncobj.
> > +	 * For every execbuffer2 call, this syncobj is used as both an in-
> > +	 * and out-fence.  Unlike the real intel_timeline, this doesn't
> > +	 * provide perfect atomic in-order guarantees if the client races
> > +	 * with itself by calling execbuffer2 twice concurrently.  However,
> > +	 * if userspace races with itself, that's not likely to yield well-
> > +	 * defined results anyway so we choose to not care.
> > +	 */
> > +	struct drm_syncobj *syncobj;
> >   	/**
> >   	 * @vm: unique address space (GTT)
> > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> > index 96403130a373d..2e9748c1edddf 100644
> > --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> > +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> > @@ -3295,6 +3295,16 @@ i915_gem_do_execbuffer(struct drm_device *dev,
> >   		goto err_vma;
> >   	}
> > +	if (unlikely(eb.gem_context->syncobj)) {
> > +		struct dma_fence *fence;
> > +
> > +		fence = drm_syncobj_fence_get(eb.gem_context->syncobj);
> > +		err = i915_request_await_dma_fence(eb.request, fence);
> > +		if (err)
> > +			goto err_ext;
> > +		dma_fence_put(fence);
> 
> I think put goes before the error bail.
> 
> > +	}
> > +
> >   	if (in_fence) {
> >   		if (args->flags & I915_EXEC_FENCE_SUBMIT)
> >   			err = i915_request_await_execution(eb.request,
> > @@ -3351,6 +3361,12 @@ i915_gem_do_execbuffer(struct drm_device *dev,
> >   			fput(out_fence->file);
> >   		}
> >   	}
> > +
> > +	if (unlikely(eb.gem_context->syncobj)) {
> > +		drm_syncobj_replace_fence(eb.gem_context->syncobj,
> > +					  &eb.request->fence);
> > +	}
> > +
> >   	i915_request_put(eb.request);
> >   err_vma:
> > 
> 
> Regards,
> 
> Tvrtko
Tvrtko Ursulin March 24, 2021, 11:36 a.m. UTC | #4
On 24/03/2021 09:52, Daniel Vetter wrote:
> On Wed, Mar 24, 2021 at 09:28:58AM +0000, Tvrtko Ursulin wrote:
>>
>> On 23/03/2021 17:51, Jason Ekstrand wrote:
>>> This API is entirely unnecessary and I'd love to get rid of it.  If
>>> userspace wants a single timeline across multiple contexts, they can
>>> either use implicit synchronization or a syncobj, both of which existed
>>> at the time this feature landed.  The justification given at the time
>>> was that it would help GL drivers which are inherently single-timeline.
>>> However, neither of our GL drivers actually wanted the feature.  i965
>>> was already in maintenance mode at the time and iris uses syncobj for
>>> everything.
>>>
>>> Unfortunately, as much as I'd love to get rid of it, it is used by the
>>> media driver so we can't do that.  We can, however, do the next-best
>>> thing which is to embed a syncobj in the context and do exactly what
>>> we'd expect from userspace internally.  This isn't an entirely identical
>>> implementation because it's no longer atomic if userspace races with
>>> itself by calling execbuffer2 twice simultaneously from different
>>> threads.  It won't crash in that case; it just doesn't guarantee any
>>> ordering between those two submits.
>>>
>>> Moving SINGLE_TIMELINE to a syncobj emulation has a couple of technical
>>> advantages beyond mere annoyance.  One is that intel_timeline is no
>>> longer an api-visible object and can remain entirely an implementation
>>> detail.  This may be advantageous as we make scheduler changes going
>>> forward.  Second is that, together with deleting the CLONE_CONTEXT API,
>>> we should now have a 1:1 mapping between intel_context and
>>> intel_timeline which may help us reduce locking.
>>
>> Much, much better commit message although I still fail to understand where
>> do you see implementation details leaking out. So for me this is still
>> something I'd like to get to the bottom of.
>>
>> I would also mention the difference regarding fence context change.
>>
>> And in general I would maintain this patch as part of a series which ends up
>> demonstrating the "mays" and "shoulds".
> 
> I disagree. The past few years we've merged way too much patches and
> features without carefully answering the high level questions of
> - do we really need to solve this problem
> - and if so, are we really solving this problem in the right place
> 
> Now we're quite in a hole, and we're not going to get out of this hole if
> we keep applying the same standards that got us here. Anything that does
> not clearly and without reservation the above two questions with "yes"
> needs to be removed or walled off, just so we can eventually see which
> complexity we really need, and what is actually superflous.

I understand your general point but when I apply it to this specific 
patch, even if it is simple, it is neither removing the uapi nor walling 
it off. So I see it as the usual review standard to ask to see what 
"mays" and "shoulds" actually get us in concrete terms.

I would be able to understand putting the uapi behind the "if gen > 12 
|| is_discrete EINVAL", or whatever, since it is fair game to deprecate 
for any new platform or say GuC submission.

Not simply doing that makes me worry there are still misunderstandings 
about what kind of problems were encountered with regard to 
intel_timeline, by this work item or that, and that there is still 
confusion about what the internal timeline object is versus the internal 
hwsp object. I feel there hasn't been full transparency on these 
technical issues, which is why I think seeing the actual series that is 
supposed to build on top of this would be helpful.

I even worry that we still have a big disconnect on whether this flag is 
leaking any internal implementation details out to userspace. If the 
commit message was reworded without actual agreement that implementation 
details are not exposed we will continue disagreeing going forward, 
which is not a good start.

We exchanged so many emails on this but I don't feel we are getting 
anywhere, so I really have no idea - obviously you will steamroll this in 
regardless, so I don't think there is any point in arguing further.

Regards,

Tvrtko
Jason Ekstrand March 24, 2021, 5:18 p.m. UTC | #5
On Wed, Mar 24, 2021 at 6:36 AM Tvrtko Ursulin
<tvrtko.ursulin@linux.intel.com> wrote:
>
>
> On 24/03/2021 09:52, Daniel Vetter wrote:
> > On Wed, Mar 24, 2021 at 09:28:58AM +0000, Tvrtko Ursulin wrote:
> >>
> >> On 23/03/2021 17:51, Jason Ekstrand wrote:
> >>> This API is entirely unnecessary and I'd love to get rid of it.  If
> >>> userspace wants a single timeline across multiple contexts, they can
> >>> either use implicit synchronization or a syncobj, both of which existed
> >>> at the time this feature landed.  The justification given at the time
> >>> was that it would help GL drivers which are inherently single-timeline.
> >>> However, neither of our GL drivers actually wanted the feature.  i965
> >>> was already in maintenance mode at the time and iris uses syncobj for
> >>> everything.
> >>>
> >>> Unfortunately, as much as I'd love to get rid of it, it is used by the
> >>> media driver so we can't do that.  We can, however, do the next-best
> >>> thing which is to embed a syncobj in the context and do exactly what
> >>> we'd expect from userspace internally.  This isn't an entirely identical
> >>> implementation because it's no longer atomic if userspace races with
> >>> itself by calling execbuffer2 twice simultaneously from different
> >>> threads.  It won't crash in that case; it just doesn't guarantee any
> >>> ordering between those two submits.
> >>>
> >>> Moving SINGLE_TIMELINE to a syncobj emulation has a couple of technical
> >>> advantages beyond mere annoyance.  One is that intel_timeline is no
> >>> longer an api-visible object and can remain entirely an implementation
> >>> detail.  This may be advantageous as we make scheduler changes going
> >>> forward.  Second is that, together with deleting the CLONE_CONTEXT API,
> >>> we should now have a 1:1 mapping between intel_context and
> >>> intel_timeline which may help us reduce locking.
> >>
> >> Much, much better commit message although I still fail to understand where
> >> do you see implementation details leaking out. So for me this is still
> >> something I'd like to get to the bottom of.

I didn't say "leaking".  I said it's no longer API-visible.  That's
just true.  It used to be a kernel object that userspace was unaware
of, then we added SINGLE_TIMELINE and userspace now has some influence
over the object.  With this patch, it's hidden again.  I don't get why
that's confusing.

> >> I would also mention the difference regarding fence context change.

There are no fence context changes.  The fence that we stuff in the
syncobj is an i915 fence and the fence we pull back out is an i915
fence.  A syncobj is just a fancy wrapper for a dma_fence pointer.

> >> And in general I would maintain this patch as part of a series which ends up
> >> demonstrating the "mays" and "shoulds".
> >
> > I disagree. The past few years we've merged way too much patches and
> > features without carefully answering the high level questions of
> > - do we really need to solve this problem
> > - and if so, are we really solving this problem in the right place
> >
> > Now we're quite in a hole, and we're not going to get out of this hole if
> > we keep applying the same standards that got us here. Anything that does
> > not clearly and without reservation the above two questions with "yes"
> > needs to be removed or walled off, just so we can eventually see which
> > complexity we really need, and what is actually superflous.
>
> I understand your general point but when I apply it to this specific
> patch, even if it is simple, it is neither removing the uapi or walling
> it off. So I see it as the usual review standard to ask to see what
> "mays" and "shoulds" actually get us in concrete terms.

Instead of focusing on the term "leak", let's focus on this line of
the commit message instead:

>  Second is that, together with deleting the CLONE_CONTEXT API,
> we should now have a 1:1 mapping between intel_context and
> intel_timeline which may help us reduce locking.

Now, I've not written any patches yet which actually reduce the
locking.  I can try to look into that today.  I CC'd Maarten on the
first send of this because I was hoping he would have good intuition
about this.  It may be that this object will always have to require
some amount of locking if the scheduler has to touch them in parallel
with other stuff.  What I can say concretely, however, is that this
does reduce the sharing of an internal object even if it doesn't get
rid of it completely.  The one thing that is shared all over the place
with this patch is a syncobj which is explicitly designed for exactly
this sort of case and can be used and abused by as many threads as
you'd like.  That seems like it's going in the right direction.

I can further weasel-word the commit message to make the above more
prominent if you'd like.

> I would be able to understand putting the uapi behind the "if gen > 12
> || is_discrete EINVAL", or whatever, since it is fair game to deprecate
> for any new platform or say GuC submission.
>
> Not doing simply that makes me worry there are still misunderstandings
> on what kind of problems were encountered with regards to
> intel_timeline, by work item this or work item that, and that there
> isn't still a confusion about what is the internal timeline object and
> what is the internal hwsp object. I feel there hasn't been full
> transparency on these technical issues which is why I think seeing the
> actual series which is supposed to build on top of this would be helpful.
>
> I even worry that we still have a big disconnect on whether this flag is
> leaking any internal implementation details out to userspace. If the
> commit message was reworded without actual agreement that implementation
> details are not exposed we will continue disagreeing going forward,
> which is not a good start.
>
> We exchanged so many emails on this but I don't feel we are getting
> anywhere so I really have no idea - obviously you will steamroll this in
> regardless so I don't think there is point to argue further.
>
> Regards,
>
> Tvrtko

> > +     if (unlikely(eb.gem_context->syncobj)) {
> > +             struct dma_fence *fence;
> > +
> > +             fence = drm_syncobj_fence_get(eb.gem_context->syncobj);
> > +             err = i915_request_await_dma_fence(eb.request, fence);
> > +             if (err)
> > +                     goto err_ext;
> > +             dma_fence_put(fence);
>
> I think put goes before the error bail.

Fixed locally.  It'll be in v3.

--Jason
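
(The fix Tvrtko points at is mechanical; a sketch of the corrected hunk,
assuming the v3 change is simply moving the put ahead of the bail-out:)

        fence = drm_syncobj_fence_get(eb.gem_context->syncobj);
        err = i915_request_await_dma_fence(eb.request, fence);
        dma_fence_put(fence);   /* drop the reference before any early exit */
        if (err)
                goto err_ext;
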
Tvrtko Ursulin March 25, 2021, 9:48 a.m. UTC | #6
On 24/03/2021 17:18, Jason Ekstrand wrote:
> On Wed, Mar 24, 2021 at 6:36 AM Tvrtko Ursulin
> <tvrtko.ursulin@linux.intel.com> wrote:
>>
>>
>> On 24/03/2021 09:52, Daniel Vetter wrote:
>>> On Wed, Mar 24, 2021 at 09:28:58AM +0000, Tvrtko Ursulin wrote:
>>>>
>>>> On 23/03/2021 17:51, Jason Ekstrand wrote:
>>>>> This API is entirely unnecessary and I'd love to get rid of it.  If
>>>>> userspace wants a single timeline across multiple contexts, they can
>>>>> either use implicit synchronization or a syncobj, both of which existed
>>>>> at the time this feature landed.  The justification given at the time
>>>>> was that it would help GL drivers which are inherently single-timeline.
>>>>> However, neither of our GL drivers actually wanted the feature.  i965
>>>>> was already in maintenance mode at the time and iris uses syncobj for
>>>>> everything.
>>>>>
>>>>> Unfortunately, as much as I'd love to get rid of it, it is used by the
>>>>> media driver so we can't do that.  We can, however, do the next-best
>>>>> thing which is to embed a syncobj in the context and do exactly what
>>>>> we'd expect from userspace internally.  This isn't an entirely identical
>>>>> implementation because it's no longer atomic if userspace races with
>>>>> itself by calling execbuffer2 twice simultaneously from different
>>>>> threads.  It won't crash in that case; it just doesn't guarantee any
>>>>> ordering between those two submits.
>>>>>
>>>>> Moving SINGLE_TIMELINE to a syncobj emulation has a couple of technical
>>>>> advantages beyond mere annoyance.  One is that intel_timeline is no
>>>>> longer an api-visible object and can remain entirely an implementation
>>>>> detail.  This may be advantageous as we make scheduler changes going
>>>>> forward.  Second is that, together with deleting the CLONE_CONTEXT API,
>>>>> we should now have a 1:1 mapping between intel_context and
>>>>> intel_timeline which may help us reduce locking.
>>>>
>>>> Much, much better commit message although I still fail to understand where
>>>> do you see implementation details leaking out. So for me this is still
>>>> something I'd like to get to the bottom of.
> 
> I didn't say "leaking".  I said it's no longer API-visible.  That's
> just true.  It used to be a kernel object that userspace was unaware
> of, then we added SINGLE_TIMELINE and userspace now has some influence
> over the object.  With this patch, it's hidden again.  I don't get why
> that's confusing.

I am definitely glad we moved on from leaking to less dramatic wording, 
but it is still not API "visible", inasmuch as a struct file_operations * 
is not user-visible in any negative way when you do open(2); it is just 
an implementation detail. But I give up.

>>>> I would also mention the difference regarding fence context change.
> 
> There are no fence context changes.  The fence that we stuff in the
> syncobj is an i915 fence and the fence we pull back out is an i915
> fence.  A syncobj is just a fancy wrapper for a dma_buf pointer.

The change is in dma_fence->context.

In the current code a single timeline is one fence context. With this 
patch that is no longer the case.

Not sure that it has any practical implications for the internal 
dma_fence code like is_later checks; I haven't analysed it.

And the sync fence info ioctl exposes this value to userspace, which 
probably has no practical implications, unless there is some indirect 
effect when merging fences.

Main point is that I think these are the things which need to be 
discussed in the commit message.
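
(To illustrate the fence-context point with generic dma-fence semantics,
not code from the patch: fences on one context are ordered and can be
collapsed to the latest one, while fences on different contexts cannot,
so moving from one shared timeline to per-engine timelines means more
fences may need to be tracked when merging.  track_both() is a
hypothetical helper.)

        if (a->context == b->context)
                /* One shared timeline, as with the old intel_timeline: the
                 * later fence implies the earlier one, so keep just one. */
                keep = dma_fence_is_later(a, b) ? a : b;
        else
                /* Separate per-engine timelines, as with the syncobj
                 * emulation: neither implies the other, keep both. */
                track_both(a, b);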

>>>> And in general I would maintain this patch as part of a series which ends up
>>>> demonstrating the "mays" and "shoulds".
>>>
>>> I disagree. The past few years we've merged way too much patches and
>>> features without carefully answering the high level questions of
>>> - do we really need to solve this problem
>>> - and if so, are we really solving this problem in the right place
>>>
>>> Now we're quite in a hole, and we're not going to get out of this hole if
>>> we keep applying the same standards that got us here. Anything that does
>>> not clearly and without reservation the above two questions with "yes"
>>> needs to be removed or walled off, just so we can eventually see which
>>> complexity we really need, and what is actually superflous.
>>
>> I understand your general point but when I apply it to this specific
>> patch, even if it is simple, it is neither removing the uapi or walling
>> it off. So I see it as the usual review standard to ask to see what
>> "mays" and "shoulds" actually get us in concrete terms.
> 
> Instead of focusing on the term "leak", let's focus on this line of
> the commit message instead:
> 
>>   Second is that, together with deleting the CLONE_CONTEXT API,
>> we should now have a 1:1 mapping between intel_context and
>> intel_timeline which may help us reduce locking.
> 
> Now, I've not written any patches yet which actually reduce the
> locking.  I can try to look into that today.  I CC'd Maarten on the
> first send of this because I was hoping he would have good intuition
> about this.  It may be that this object will always have to require
> some amount of locking if the scheduler has to touch them in parallel
> with other stuff.  What I can say concretely, however, is that this
> does reduce the sharing of an internal object even if it doesn't get
> rid of it completely.  The one thing that is shared all over the place
> with this patch is a syncobj which is explicitly designed for exactly
> this sort of case and can be used and abused by as many threads as
> you'd like.  That seems like it's going in the right direction.
> 
> I can further weasel-word the commit message to make the above more
> prominent if you'd like.

I am not interested in making you weasel-word anything, but in reaching 
a consensus on what is actually true and accurate.

Regards,

Tvrtko
Daniel Vetter March 25, 2021, 9:54 a.m. UTC | #7
On Thu, Mar 25, 2021 at 10:48 AM Tvrtko Ursulin
<tvrtko.ursulin@linux.intel.com> wrote:
>
>
> On 24/03/2021 17:18, Jason Ekstrand wrote:
> > On Wed, Mar 24, 2021 at 6:36 AM Tvrtko Ursulin
> > <tvrtko.ursulin@linux.intel.com> wrote:
> >>
> >>
> >> On 24/03/2021 09:52, Daniel Vetter wrote:
> >>> On Wed, Mar 24, 2021 at 09:28:58AM +0000, Tvrtko Ursulin wrote:
> >>>>
> >>>> On 23/03/2021 17:51, Jason Ekstrand wrote:
> >>>>> This API is entirely unnecessary and I'd love to get rid of it.  If
> >>>>> userspace wants a single timeline across multiple contexts, they can
> >>>>> either use implicit synchronization or a syncobj, both of which existed
> >>>>> at the time this feature landed.  The justification given at the time
> >>>>> was that it would help GL drivers which are inherently single-timeline.
> >>>>> However, neither of our GL drivers actually wanted the feature.  i965
> >>>>> was already in maintenance mode at the time and iris uses syncobj for
> >>>>> everything.
> >>>>>
> >>>>> Unfortunately, as much as I'd love to get rid of it, it is used by the
> >>>>> media driver so we can't do that.  We can, however, do the next-best
> >>>>> thing which is to embed a syncobj in the context and do exactly what
> >>>>> we'd expect from userspace internally.  This isn't an entirely identical
> >>>>> implementation because it's no longer atomic if userspace races with
> >>>>> itself by calling execbuffer2 twice simultaneously from different
> >>>>> threads.  It won't crash in that case; it just doesn't guarantee any
> >>>>> ordering between those two submits.
> >>>>>
> >>>>> Moving SINGLE_TIMELINE to a syncobj emulation has a couple of technical
> >>>>> advantages beyond mere annoyance.  One is that intel_timeline is no
> >>>>> longer an api-visible object and can remain entirely an implementation
> >>>>> detail.  This may be advantageous as we make scheduler changes going
> >>>>> forward.  Second is that, together with deleting the CLONE_CONTEXT API,
> >>>>> we should now have a 1:1 mapping between intel_context and
> >>>>> intel_timeline which may help us reduce locking.
> >>>>
> >>>> Much, much better commit message although I still fail to understand where
> >>>> do you see implementation details leaking out. So for me this is still
> >>>> something I'd like to get to the bottom of.
> >
> > I didn't say "leaking".  I said it's no longer API-visible.  That's
> > just true.  It used to be a kernel object that userspace was unaware
> > of, then we added SINGLE_TIMELINE and userspace now has some influence
> > over the object.  With this patch, it's hidden again.  I don't get why
> > that's confusing.
>
> I am definitely glad we moved on from leaking to less dramatic wording,
> but it is still not API "visible" in so much struct file_operations * is
> not user visible in any negative way when you do open(2), being just
> implementation detail. But I give up.
>
> >>>> I would also mention the difference regarding fence context change.
> >
> > There are no fence context changes.  The fence that we stuff in the
> > syncobj is an i915 fence and the fence we pull back out is an i915
> > fence.  A syncobj is just a fancy wrapper for a dma_buf pointer.
>
> Change is in the dma_fence->context.
>
> Current code single timeline is one fence context. With this patch that
> is no longer the case.
>
> Not sure that it has any practical implications for the internal
> dma_fence code like is_later checks , haven't analysed it.

We merge fewer fences at the higher levels and rely more on the fence
dependency tracking of the scheduler frontend to sort stuff out.

> And sync fence info ioctl exposes this value to userspace which probably
> has no practical implications. Unless some indirect effect when merging
> fences.

Userspace can use that to do fence merging, which is always only a
strict optimization. I'm not even sure whether Android does that or
not.
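
(A sketch of that userspace-side merge using the sync_file uAPI; the
struct and ioctl are the real ones, while the helper name and error
handling are illustrative.)

        #include <errno.h>
        #include <sys/ioctl.h>
        #include <linux/sync_file.h>

        /* Merge two fence fds into one fd that signals once both have signalled. */
        static int merge_fences(int fd_a, int fd_b)
        {
                struct sync_merge_data data = {
                        .name = "merged",
                        .fd2 = fd_b,
                };

                if (ioctl(fd_a, SYNC_IOC_MERGE, &data) < 0)
                        return -errno;

                return data.fence;      /* new fd; fd_a and fd_b remain valid */
        }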

> Main point is that I think these are the things which need to be
> discussed in the commit message.

Yeah, it makes sense to add these as impacts, next to the "we don't deal
with races anymore" part.
-Daniel

>
> >>>> And in general I would maintain this patch as part of a series which ends up
> >>>> demonstrating the "mays" and "shoulds".
> >>>
> >>> I disagree. The past few years we've merged way too much patches and
> >>> features without carefully answering the high level questions of
> >>> - do we really need to solve this problem
> >>> - and if so, are we really solving this problem in the right place
> >>>
> >>> Now we're quite in a hole, and we're not going to get out of this hole if
> >>> we keep applying the same standards that got us here. Anything that does
> >>> not clearly and without reservation the above two questions with "yes"
> >>> needs to be removed or walled off, just so we can eventually see which
> >>> complexity we really need, and what is actually superflous.
> >>
> >> I understand your general point but when I apply it to this specific
> >> patch, even if it is simple, it is neither removing the uapi or walling
> >> it off. So I see it as the usual review standard to ask to see what
> >> "mays" and "shoulds" actually get us in concrete terms.
> >
> > Instead of focusing on the term "leak", let's focus on this line of
> > the commit message instead:
> >
> >>   Second is that, together with deleting the CLONE_CONTEXT API,
> >> we should now have a 1:1 mapping between intel_context and
> >> intel_timeline which may help us reduce locking.
> >
> > Now, I've not written any patches yet which actually reduce the
> > locking.  I can try to look into that today.  I CC'd Maarten on the
> > first send of this because I was hoping he would have good intuition
> > about this.  It may be that this object will always have to require
> > some amount of locking if the scheduler has to touch them in parallel
> > with other stuff.  What I can say concretely, however, is that this
> > does reduce the sharing of an internal object even if it doesn't get
> > rid of it completely.  The one thing that is shared all over the place
> > with this patch is a syncobj which is explicitly designed for exactly
> > this sort of case and can be used and abused by as many threads as
> > you'd like.  That seems like it's going in the right direction.
> >
> > I can further weasel-word the commit message to make the above more
> > prominent if you'd like.
>
> I am not interested in making you weasel-word anything but reaching a
> consensus and what is actually true and accurate.
>
> Regards,
>
> Tvrtko
Matthew Brost March 25, 2021, 9:13 p.m. UTC | #8
On Tue, Mar 23, 2021 at 12:51:49PM -0500, Jason Ekstrand wrote:
> This API is entirely unnecessary and I'd love to get rid of it.  If
> userspace wants a single timeline across multiple contexts, they can
> either use implicit synchronization or a syncobj, both of which existed
> at the time this feature landed.  The justification given at the time
> was that it would help GL drivers which are inherently single-timeline.
> However, neither of our GL drivers actually wanted the feature.  i965
> was already in maintenance mode at the time and iris uses syncobj for
> everything.
> 
> Unfortunately, as much as I'd love to get rid of it, it is used by the
> media driver so we can't do that.  We can, however, do the next-best
> thing which is to embed a syncobj in the context and do exactly what
> we'd expect from userspace internally.  This isn't an entirely identical
> implementation because it's no longer atomic if userspace races with
> itself by calling execbuffer2 twice simultaneously from different
> threads.  It won't crash in that case; it just doesn't guarantee any
> ordering between those two submits.
> 
> Moving SINGLE_TIMELINE to a syncobj emulation has a couple of technical
> advantages beyond mere annoyance.  One is that intel_timeline is no
> longer an api-visible object and can remain entirely an implementation
> detail.  This may be advantageous as we make scheduler changes going
> forward.  Second is that, together with deleting the CLONE_CONTEXT API,
> we should now have a 1:1 mapping between intel_context and
> intel_timeline which may help us reduce locking.
> 

I'm not going to dive into the philosophical debate that this patch has
been pigeonholed into, but a 1:1 mapping between intel_context and
intel_timeline is a huge win IMO - these are the types of cleanups I can
really get behind.

> v2 (Jason Ekstrand):
>  - Update the comment on i915_gem_context::syncobj to mention that it's
>    an emulation and the possible race if userspace calls execbuffer2
>    twice on the same context concurrently.
>  - Wrap the checks for eb.gem_context->syncobj in unlikely()
>  - Drop the dma_fence reference
>  - Improved commit message
> 
> Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> ---
>  drivers/gpu/drm/i915/gem/i915_gem_context.c   | 47 ++++---------------
>  .../gpu/drm/i915/gem/i915_gem_context_types.h | 14 +++++-
>  .../gpu/drm/i915/gem/i915_gem_execbuffer.c    | 16 +++++++
>  3 files changed, 39 insertions(+), 38 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> index f88bac19333ec..e094f4a1ca4cd 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> @@ -67,6 +67,8 @@
>  #include <linux/log2.h>
>  #include <linux/nospec.h>
>  
> +#include <drm/drm_syncobj.h>
> +
>  #include "gt/gen6_ppgtt.h"
>  #include "gt/intel_context.h"
>  #include "gt/intel_engine_heartbeat.h"
> @@ -224,10 +226,6 @@ static void intel_context_set_gem(struct intel_context *ce,
>  		ce->vm = vm;
>  	}
>  
> -	GEM_BUG_ON(ce->timeline);
> -	if (ctx->timeline)
> -		ce->timeline = intel_timeline_get(ctx->timeline);
> -
>  	if (ctx->sched.priority >= I915_PRIORITY_NORMAL &&
>  	    intel_engine_has_timeslices(ce->engine))
>  		__set_bit(CONTEXT_USE_SEMAPHORES, &ce->flags);
> @@ -344,8 +342,8 @@ void i915_gem_context_release(struct kref *ref)
>  	mutex_destroy(&ctx->engines_mutex);
>  	mutex_destroy(&ctx->lut_mutex);
>  
> -	if (ctx->timeline)
> -		intel_timeline_put(ctx->timeline);
> +	if (ctx->syncobj)
> +		drm_syncobj_put(ctx->syncobj);
>  
>  	put_pid(ctx->pid);
>  	mutex_destroy(&ctx->mutex);
> @@ -790,33 +788,11 @@ static void __assign_ppgtt(struct i915_gem_context *ctx,
>  		i915_vm_close(vm);
>  }
>  
> -static void __set_timeline(struct intel_timeline **dst,
> -			   struct intel_timeline *src)
> -{
> -	struct intel_timeline *old = *dst;
> -
> -	*dst = src ? intel_timeline_get(src) : NULL;
> -
> -	if (old)
> -		intel_timeline_put(old);
> -}
> -
> -static void __apply_timeline(struct intel_context *ce, void *timeline)
> -{
> -	__set_timeline(&ce->timeline, timeline);
> -}
> -
> -static void __assign_timeline(struct i915_gem_context *ctx,
> -			      struct intel_timeline *timeline)
> -{
> -	__set_timeline(&ctx->timeline, timeline);
> -	context_apply_all(ctx, __apply_timeline, timeline);
> -}
> -
>  static struct i915_gem_context *
>  i915_gem_create_context(struct drm_i915_private *i915, unsigned int flags)
>  {
>  	struct i915_gem_context *ctx;
> +	int ret;
>  
>  	if (flags & I915_CONTEXT_CREATE_FLAGS_SINGLE_TIMELINE &&
>  	    !HAS_EXECLISTS(i915))
> @@ -845,16 +821,13 @@ i915_gem_create_context(struct drm_i915_private *i915, unsigned int flags)
>  	}
>  
>  	if (flags & I915_CONTEXT_CREATE_FLAGS_SINGLE_TIMELINE) {
> -		struct intel_timeline *timeline;
> -
> -		timeline = intel_timeline_create(&i915->gt);
> -		if (IS_ERR(timeline)) {
> +		ret = drm_syncobj_create(&ctx->syncobj,
> +					 DRM_SYNCOBJ_CREATE_SIGNALED,
> +					 NULL);
> +		if (ret) {
>  			context_close(ctx);
> -			return ERR_CAST(timeline);
> +			return ERR_PTR(ret);
>  		}
> -
> -		__assign_timeline(ctx, timeline);
> -		intel_timeline_put(timeline);
>  	}
>  
>  	trace_i915_context_create(ctx);
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
> index 676592e27e7d2..df76767f0c41b 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
> @@ -83,7 +83,19 @@ struct i915_gem_context {
>  	struct i915_gem_engines __rcu *engines;
>  	struct mutex engines_mutex; /* guards writes to engines */
>  
> -	struct intel_timeline *timeline;
> +	/**
> +	 * @syncobj: Shared timeline syncobj
> +	 *
> +	 * When the SHARED_TIMELINE flag is set on context creation, we
> +	 * emulate a single timeline across all engines using this syncobj.
> +	 * For every execbuffer2 call, this syncobj is used as both an in-
> +	 * and out-fence.  Unlike the real intel_timeline, this doesn't
> +	 * provide perfect atomic in-order guarantees if the client races
> +	 * with itself by calling execbuffer2 twice concurrently.  However,
> +	 * if userspace races with itself, that's not likely to yield well-
> +	 * defined results anyway so we choose to not care.
> +	 */
> +	struct drm_syncobj *syncobj;
>  
>  	/**
>  	 * @vm: unique address space (GTT)
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> index 96403130a373d..2e9748c1edddf 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> @@ -3295,6 +3295,16 @@ i915_gem_do_execbuffer(struct drm_device *dev,
>  		goto err_vma;
>  	}
>  
> +	if (unlikely(eb.gem_context->syncobj)) {
> +		struct dma_fence *fence;
> +
> +		fence = drm_syncobj_fence_get(eb.gem_context->syncobj);
> +		err = i915_request_await_dma_fence(eb.request, fence);
> +		if (err)
> +			goto err_ext;
> +		dma_fence_put(fence);

Also think this needs to be moved above the error checking, as this put
corresponds to 'drm_syncobj_fence_get', right?

Matt
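
(For reference, a minimal sketch of what the suggested reordering could look like; this is only an illustration, not taken from any posted revision. The idea is to drop the reference obtained by drm_syncobj_fence_get() immediately after the await, before the error check, so the error path no longer leaks it.)

	if (unlikely(eb.gem_context->syncobj)) {
		struct dma_fence *fence;

		fence = drm_syncobj_fence_get(eb.gem_context->syncobj);
		err = i915_request_await_dma_fence(eb.request, fence);
		/* Drop the get() reference before branching on the error. */
		dma_fence_put(fence);
		if (err)
			goto err_ext;
	}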

> +	}
> +
>  	if (in_fence) {
>  		if (args->flags & I915_EXEC_FENCE_SUBMIT)
>  			err = i915_request_await_execution(eb.request,
> @@ -3351,6 +3361,12 @@ i915_gem_do_execbuffer(struct drm_device *dev,
>  			fput(out_fence->file);
>  		}
>  	}
> +
> +	if (unlikely(eb.gem_context->syncobj)) {
> +		drm_syncobj_replace_fence(eb.gem_context->syncobj,
> +					  &eb.request->fence);
> +	}
> +
>  	i915_request_put(eb.request);
>  
>  err_vma:
> -- 
> 2.29.2
>
Jason Ekstrand March 25, 2021, 10:19 p.m. UTC | #9
On Thu, Mar 25, 2021 at 4:21 PM Matthew Brost <matthew.brost@intel.com> wrote:
>
> On Tue, Mar 23, 2021 at 12:51:49PM -0500, Jason Ekstrand wrote:
> > This API is entirely unnecessary and I'd love to get rid of it.  If
> > userspace wants a single timeline across multiple contexts, they can
> > either use implicit synchronization or a syncobj, both of which existed
> > at the time this feature landed.  The justification given at the time
> > was that it would help GL drivers which are inherently single-timeline.
> > However, neither of our GL drivers actually wanted the feature.  i965
> > was already in maintenance mode at the time and iris uses syncobj for
> > everything.
> >
> > Unfortunately, as much as I'd love to get rid of it, it is used by the
> > media driver so we can't do that.  We can, however, do the next-best
> > thing which is to embed a syncobj in the context and do exactly what
> > we'd expect from userspace internally.  This isn't an entirely identical
> > implementation because it's no longer atomic if userspace races with
> > itself by calling execbuffer2 twice simultaneously from different
> > threads.  It won't crash in that case; it just doesn't guarantee any
> > ordering between those two submits.
> >
> > Moving SINGLE_TIMELINE to a syncobj emulation has a couple of technical
> > advantages beyond mere annoyance.  One is that intel_timeline is no
> > longer an api-visible object and can remain entirely an implementation
> > detail.  This may be advantageous as we make scheduler changes going
> > forward.  Second is that, together with deleting the CLONE_CONTEXT API,
> > we should now have a 1:1 mapping between intel_context and
> > intel_timeline which may help us reduce locking.
> >
>
> I'm not going to dive into the philosophical debate this patch has been
> pigeonholed into, but a 1:1 mapping between intel_context and
> intel_timeline is a huge win IMO - these are the types of cleanups I can
> really get behind.
>
> > v2 (Jason Ekstrand):
> >  - Update the comment on i915_gem_context::syncobj to mention that it's
> >    an emulation and the possible race if userspace calls execbuffer2
> >    twice on the same context concurrently.
> >  - Wrap the checks for eb.gem_context->syncobj in unlikely()
> >  - Drop the dma_fence reference
> >  - Improved commit message
> >
> > Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
> > Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> > Cc: Matthew Brost <matthew.brost@intel.com>
> > ---
> >  drivers/gpu/drm/i915/gem/i915_gem_context.c   | 47 ++++---------------
> >  .../gpu/drm/i915/gem/i915_gem_context_types.h | 14 +++++-
> >  .../gpu/drm/i915/gem/i915_gem_execbuffer.c    | 16 +++++++
> >  3 files changed, 39 insertions(+), 38 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> > index f88bac19333ec..e094f4a1ca4cd 100644
> > --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
> > +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> > @@ -67,6 +67,8 @@
> >  #include <linux/log2.h>
> >  #include <linux/nospec.h>
> >
> > +#include <drm/drm_syncobj.h>
> > +
> >  #include "gt/gen6_ppgtt.h"
> >  #include "gt/intel_context.h"
> >  #include "gt/intel_engine_heartbeat.h"
> > @@ -224,10 +226,6 @@ static void intel_context_set_gem(struct intel_context *ce,
> >               ce->vm = vm;
> >       }
> >
> > -     GEM_BUG_ON(ce->timeline);
> > -     if (ctx->timeline)
> > -             ce->timeline = intel_timeline_get(ctx->timeline);
> > -
> >       if (ctx->sched.priority >= I915_PRIORITY_NORMAL &&
> >           intel_engine_has_timeslices(ce->engine))
> >               __set_bit(CONTEXT_USE_SEMAPHORES, &ce->flags);
> > @@ -344,8 +342,8 @@ void i915_gem_context_release(struct kref *ref)
> >       mutex_destroy(&ctx->engines_mutex);
> >       mutex_destroy(&ctx->lut_mutex);
> >
> > -     if (ctx->timeline)
> > -             intel_timeline_put(ctx->timeline);
> > +     if (ctx->syncobj)
> > +             drm_syncobj_put(ctx->syncobj);
> >
> >       put_pid(ctx->pid);
> >       mutex_destroy(&ctx->mutex);
> > @@ -790,33 +788,11 @@ static void __assign_ppgtt(struct i915_gem_context *ctx,
> >               i915_vm_close(vm);
> >  }
> >
> > -static void __set_timeline(struct intel_timeline **dst,
> > -                        struct intel_timeline *src)
> > -{
> > -     struct intel_timeline *old = *dst;
> > -
> > -     *dst = src ? intel_timeline_get(src) : NULL;
> > -
> > -     if (old)
> > -             intel_timeline_put(old);
> > -}
> > -
> > -static void __apply_timeline(struct intel_context *ce, void *timeline)
> > -{
> > -     __set_timeline(&ce->timeline, timeline);
> > -}
> > -
> > -static void __assign_timeline(struct i915_gem_context *ctx,
> > -                           struct intel_timeline *timeline)
> > -{
> > -     __set_timeline(&ctx->timeline, timeline);
> > -     context_apply_all(ctx, __apply_timeline, timeline);
> > -}
> > -
> >  static struct i915_gem_context *
> >  i915_gem_create_context(struct drm_i915_private *i915, unsigned int flags)
> >  {
> >       struct i915_gem_context *ctx;
> > +     int ret;
> >
> >       if (flags & I915_CONTEXT_CREATE_FLAGS_SINGLE_TIMELINE &&
> >           !HAS_EXECLISTS(i915))
> > @@ -845,16 +821,13 @@ i915_gem_create_context(struct drm_i915_private *i915, unsigned int flags)
> >       }
> >
> >       if (flags & I915_CONTEXT_CREATE_FLAGS_SINGLE_TIMELINE) {
> > -             struct intel_timeline *timeline;
> > -
> > -             timeline = intel_timeline_create(&i915->gt);
> > -             if (IS_ERR(timeline)) {
> > +             ret = drm_syncobj_create(&ctx->syncobj,
> > +                                      DRM_SYNCOBJ_CREATE_SIGNALED,
> > +                                      NULL);
> > +             if (ret) {
> >                       context_close(ctx);
> > -                     return ERR_CAST(timeline);
> > +                     return ERR_PTR(ret);
> >               }
> > -
> > -             __assign_timeline(ctx, timeline);
> > -             intel_timeline_put(timeline);
> >       }
> >
> >       trace_i915_context_create(ctx);
> > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
> > index 676592e27e7d2..df76767f0c41b 100644
> > --- a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
> > +++ b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
> > @@ -83,7 +83,19 @@ struct i915_gem_context {
> >       struct i915_gem_engines __rcu *engines;
> >       struct mutex engines_mutex; /* guards writes to engines */
> >
> > -     struct intel_timeline *timeline;
> > +     /**
> > +      * @syncobj: Shared timeline syncobj
> > +      *
> > +      * When the SHARED_TIMELINE flag is set on context creation, we
> > +      * emulate a single timeline across all engines using this syncobj.
> > +      * For every execbuffer2 call, this syncobj is used as both an in-
> > +      * and out-fence.  Unlike the real intel_timeline, this doesn't
> > +      * provide perfect atomic in-order guarantees if the client races
> > +      * with itself by calling execbuffer2 twice concurrently.  However,
> > +      * if userspace races with itself, that's not likely to yield well-
> > +      * defined results anyway so we choose to not care.
> > +      */
> > +     struct drm_syncobj *syncobj;
> >
> >       /**
> >        * @vm: unique address space (GTT)
> > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> > index 96403130a373d..2e9748c1edddf 100644
> > --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> > +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> > @@ -3295,6 +3295,16 @@ i915_gem_do_execbuffer(struct drm_device *dev,
> >               goto err_vma;
> >       }
> >
> > +     if (unlikely(eb.gem_context->syncobj)) {
> > +             struct dma_fence *fence;
> > +
> > +             fence = drm_syncobj_fence_get(eb.gem_context->syncobj);
> > +             err = i915_request_await_dma_fence(eb.request, fence);
> > +             if (err)
> > +                     goto err_ext;
> > +             dma_fence_put(fence);
>
> Also think this needs to be moved above the error checking, as this put
> corresponds to 'drm_syncobj_fence_get', right?

Already done.  Let me send v3.

> Matt
>
> > +     }
> > +
> >       if (in_fence) {
> >               if (args->flags & I915_EXEC_FENCE_SUBMIT)
> >                       err = i915_request_await_execution(eb.request,
> > @@ -3351,6 +3361,12 @@ i915_gem_do_execbuffer(struct drm_device *dev,
> >                       fput(out_fence->file);
> >               }
> >       }
> > +
> > +     if (unlikely(eb.gem_context->syncobj)) {
> > +             drm_syncobj_replace_fence(eb.gem_context->syncobj,
> > +                                       &eb.request->fence);
> > +     }
> > +
> >       i915_request_put(eb.request);
> >
> >  err_vma:
> > --
> > 2.29.2
> >
diff mbox series

Patch

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
index f88bac19333ec..e094f4a1ca4cd 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
@@ -67,6 +67,8 @@ 
 #include <linux/log2.h>
 #include <linux/nospec.h>
 
+#include <drm/drm_syncobj.h>
+
 #include "gt/gen6_ppgtt.h"
 #include "gt/intel_context.h"
 #include "gt/intel_engine_heartbeat.h"
@@ -224,10 +226,6 @@ static void intel_context_set_gem(struct intel_context *ce,
 		ce->vm = vm;
 	}
 
-	GEM_BUG_ON(ce->timeline);
-	if (ctx->timeline)
-		ce->timeline = intel_timeline_get(ctx->timeline);
-
 	if (ctx->sched.priority >= I915_PRIORITY_NORMAL &&
 	    intel_engine_has_timeslices(ce->engine))
 		__set_bit(CONTEXT_USE_SEMAPHORES, &ce->flags);
@@ -344,8 +342,8 @@ void i915_gem_context_release(struct kref *ref)
 	mutex_destroy(&ctx->engines_mutex);
 	mutex_destroy(&ctx->lut_mutex);
 
-	if (ctx->timeline)
-		intel_timeline_put(ctx->timeline);
+	if (ctx->syncobj)
+		drm_syncobj_put(ctx->syncobj);
 
 	put_pid(ctx->pid);
 	mutex_destroy(&ctx->mutex);
@@ -790,33 +788,11 @@ static void __assign_ppgtt(struct i915_gem_context *ctx,
 		i915_vm_close(vm);
 }
 
-static void __set_timeline(struct intel_timeline **dst,
-			   struct intel_timeline *src)
-{
-	struct intel_timeline *old = *dst;
-
-	*dst = src ? intel_timeline_get(src) : NULL;
-
-	if (old)
-		intel_timeline_put(old);
-}
-
-static void __apply_timeline(struct intel_context *ce, void *timeline)
-{
-	__set_timeline(&ce->timeline, timeline);
-}
-
-static void __assign_timeline(struct i915_gem_context *ctx,
-			      struct intel_timeline *timeline)
-{
-	__set_timeline(&ctx->timeline, timeline);
-	context_apply_all(ctx, __apply_timeline, timeline);
-}
-
 static struct i915_gem_context *
 i915_gem_create_context(struct drm_i915_private *i915, unsigned int flags)
 {
 	struct i915_gem_context *ctx;
+	int ret;
 
 	if (flags & I915_CONTEXT_CREATE_FLAGS_SINGLE_TIMELINE &&
 	    !HAS_EXECLISTS(i915))
@@ -845,16 +821,13 @@ i915_gem_create_context(struct drm_i915_private *i915, unsigned int flags)
 	}
 
 	if (flags & I915_CONTEXT_CREATE_FLAGS_SINGLE_TIMELINE) {
-		struct intel_timeline *timeline;
-
-		timeline = intel_timeline_create(&i915->gt);
-		if (IS_ERR(timeline)) {
+		ret = drm_syncobj_create(&ctx->syncobj,
+					 DRM_SYNCOBJ_CREATE_SIGNALED,
+					 NULL);
+		if (ret) {
 			context_close(ctx);
-			return ERR_CAST(timeline);
+			return ERR_PTR(ret);
 		}
-
-		__assign_timeline(ctx, timeline);
-		intel_timeline_put(timeline);
 	}
 
 	trace_i915_context_create(ctx);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
index 676592e27e7d2..df76767f0c41b 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
@@ -83,7 +83,19 @@ struct i915_gem_context {
 	struct i915_gem_engines __rcu *engines;
 	struct mutex engines_mutex; /* guards writes to engines */
 
-	struct intel_timeline *timeline;
+	/**
+	 * @syncobj: Shared timeline syncobj
+	 *
+	 * When the SHARED_TIMELINE flag is set on context creation, we
+	 * emulate a single timeline across all engines using this syncobj.
+	 * For every execbuffer2 call, this syncobj is used as both an in-
+	 * and out-fence.  Unlike the real intel_timeline, this doesn't
+	 * provide perfect atomic in-order guarantees if the client races
+	 * with itself by calling execbuffer2 twice concurrently.  However,
+	 * if userspace races with itself, that's not likely to yield well-
+	 * defined results anyway so we choose to not care.
+	 */
+	struct drm_syncobj *syncobj;
 
 	/**
 	 * @vm: unique address space (GTT)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index 96403130a373d..2e9748c1edddf 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -3295,6 +3295,16 @@ i915_gem_do_execbuffer(struct drm_device *dev,
 		goto err_vma;
 	}
 
+	if (unlikely(eb.gem_context->syncobj)) {
+		struct dma_fence *fence;
+
+		fence = drm_syncobj_fence_get(eb.gem_context->syncobj);
+		err = i915_request_await_dma_fence(eb.request, fence);
+		if (err)
+			goto err_ext;
+		dma_fence_put(fence);
+	}
+
 	if (in_fence) {
 		if (args->flags & I915_EXEC_FENCE_SUBMIT)
 			err = i915_request_await_execution(eb.request,
@@ -3351,6 +3361,12 @@ i915_gem_do_execbuffer(struct drm_device *dev,
 			fput(out_fence->file);
 		}
 	}
+
+	if (unlikely(eb.gem_context->syncobj)) {
+		drm_syncobj_replace_fence(eb.gem_context->syncobj,
+					  &eb.request->fence);
+	}
+
 	i915_request_put(eb.request);
 
 err_vma:
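
The @syncobj kernel-doc in the patch above describes the syncobj being used as
both an in- and an out-fence on every execbuffer2 call.  For context, a rough
sketch of the equivalent userspace pattern the commit message refers to,
written against the existing I915_EXEC_FENCE_ARRAY uAPI.  It is only an
illustration and not part of the patch: the helper name is made up, and error
handling and buffer setup are omitted.

	#include <stdint.h>
	#include <xf86drm.h>
	#include <i915_drm.h>	/* libdrm's copy of the i915 uAPI header */

	/*
	 * Submit 'eb' so that it waits on whatever fence the syncobj currently
	 * holds and then replaces that fence with this submission's fence.
	 * The syncobj is expected to have been created with
	 * DRM_SYNCOBJ_CREATE_SIGNALED, just as the kernel-side emulation does,
	 * so the very first wait already has a signaled fence to latch onto.
	 */
	static int execbuf_on_timeline(int fd, uint32_t syncobj,
				       struct drm_i915_gem_execbuffer2 *eb)
	{
		struct drm_i915_gem_exec_fence fence = {
			.handle = syncobj,
			.flags = I915_EXEC_FENCE_WAIT | I915_EXEC_FENCE_SIGNAL,
		};

		eb->cliprects_ptr = (uintptr_t)&fence;
		eb->num_cliprects = 1;
		eb->flags |= I915_EXEC_FENCE_ARRAY;

		return drmIoctl(fd, DRM_IOCTL_I915_GEM_EXECBUFFER2, eb);
	}

As with the kernel-side emulation, two threads calling this concurrently on the
same syncobj get no defined ordering between their submissions.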