
[3/3] drm/i915: Try hard to bind the context

Message ID 20191202204316.2665847-3-chris@chris-wilson.co.uk (mailing list archive)
State New, archived
Series [1/3] drm/i915: Lift i915_vma_pin() out of intel_renderstate_emit()

Commit Message

Chris Wilson Dec. 2, 2019, 8:43 p.m. UTC
It is not acceptable for context pinning to fail with -ENOSPC as we
should always be able to make space in the GGTT. The only reason we may
fail is that other "temporary" context pins are reserving their space
and we need to wait for an available slot.

Closes: https://gitlab.freedesktop.org/drm/intel/issues/676
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem_gtt.c | 21 ++++++++++++++++++---
 1 file changed, 18 insertions(+), 3 deletions(-)

Comments

Mika Kuoppala Dec. 3, 2019, 1:24 p.m. UTC | #1
Chris Wilson <chris@chris-wilson.co.uk> writes:

> It is not acceptable for context pinning to fail with -ENOSPC as we
> should always be able to make space in the GGTT. The only reason we may
> fail is that other "temporary" context pins are reserving their space
> and we need to wait for an available slot.
>
> Closes: https://gitlab.freedesktop.org/drm/intel/issues/676
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>  drivers/gpu/drm/i915/i915_gem_gtt.c | 21 ++++++++++++++++++---
>  1 file changed, 18 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
> index 10b3d6d44045..7e20c6f62cd5 100644
> --- a/drivers/gpu/drm/i915/i915_gem_gtt.c
> +++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
> @@ -1934,9 +1934,24 @@ int gen6_ppgtt_pin(struct i915_ppgtt *base)
>  	 * size. We allocate at the top of the GTT to avoid fragmentation.
>  	 */
>  	if (!atomic_read(&ppgtt->pin_count)) {
> -		err = i915_vma_pin(ppgtt->vma,
> -				   0, GEN6_PD_ALIGN,
> -				   PIN_GLOBAL | PIN_HIGH);
> +		do {
> +			struct i915_address_space *vm = ppgtt->vma->vm;
> +
> +			err = i915_vma_pin(ppgtt->vma,
> +					   0, GEN6_PD_ALIGN,
> +					   PIN_GLOBAL | PIN_HIGH);
> +			if (err != -ENOSPC)
> +				break;
> +
> +			/* We don't take no for an answer! */
> +			err = mutex_lock_interruptible(&vm->mutex);
> +			if (err == 0) {
> +				err = i915_gem_evict_vm(vm);

Why would we hold any significant amount of vmas?
I thought we need to kick the ggtt hard in order for this
to fit in.

-Mika

> +				mutex_unlock(&vm->mutex);
> +			}
> +			if (err)
> +				break;
> +		} while (1);
>  	}
>  	if (!err)
>  		atomic_inc(&ppgtt->pin_count);
> -- 
> 2.24.0
>
Chris Wilson Dec. 3, 2019, 1:28 p.m. UTC | #2
Quoting Mika Kuoppala (2019-12-03 13:24:03)
> Chris Wilson <chris@chris-wilson.co.uk> writes:
> 
> > It is not acceptable for context pinning to fail with -ENOSPC as we
> > should always be able to make space in the GGTT. The only reason we may
> > fail is that other "temporary" context pins are reserving their space
> > and we need to wait for an available slot.
> >
> > Closes: https://gitlab.freedesktop.org/drm/intel/issues/676
> > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> > ---
> >  drivers/gpu/drm/i915/i915_gem_gtt.c | 21 ++++++++++++++++++---
> >  1 file changed, 18 insertions(+), 3 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
> > index 10b3d6d44045..7e20c6f62cd5 100644
> > --- a/drivers/gpu/drm/i915/i915_gem_gtt.c
> > +++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
> > @@ -1934,9 +1934,24 @@ int gen6_ppgtt_pin(struct i915_ppgtt *base)
> >        * size. We allocate at the top of the GTT to avoid fragmentation.
> >        */
> >       if (!atomic_read(&ppgtt->pin_count)) {
> > -             err = i915_vma_pin(ppgtt->vma,
> > -                                0, GEN6_PD_ALIGN,
> > -                                PIN_GLOBAL | PIN_HIGH);
> > +             do {
> > +                     struct i915_address_space *vm = ppgtt->vma->vm;
> > +
> > +                     err = i915_vma_pin(ppgtt->vma,
> > +                                        0, GEN6_PD_ALIGN,
> > +                                        PIN_GLOBAL | PIN_HIGH);
> > +                     if (err != -ENOSPC)
> > +                             break;
> > +
> > +                     /* We don't take no for an answer! */
> > +                     err = mutex_lock_interruptible(&vm->mutex);
> > +                     if (err == 0) {
> > +                             err = i915_gem_evict_vm(vm);
> 
> Why would we hold any significant amount of vmas?
> I thought we need to kick the ggtt hard in order for this
> to fit in.

Inflight contexts will remain pinned, occupying that space, until they are
flushed. We used to track context activity so we could explicitly wait
on pinned contexts, but we had to forgo that in order to support randomly
reordered execution (guc).

The challenge is that evict-something cannot always succeed by itself
in unpinning those contexts, so we need to kick again. In the future,
pipelined evictions... The future is complicated.
-Chris

Patch

diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 10b3d6d44045..7e20c6f62cd5 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -1934,9 +1934,24 @@  int gen6_ppgtt_pin(struct i915_ppgtt *base)
 	 * size. We allocate at the top of the GTT to avoid fragmentation.
 	 */
 	if (!atomic_read(&ppgtt->pin_count)) {
-		err = i915_vma_pin(ppgtt->vma,
-				   0, GEN6_PD_ALIGN,
-				   PIN_GLOBAL | PIN_HIGH);
+		do {
+			struct i915_address_space *vm = ppgtt->vma->vm;
+
+			err = i915_vma_pin(ppgtt->vma,
+					   0, GEN6_PD_ALIGN,
+					   PIN_GLOBAL | PIN_HIGH);
+			if (err != -ENOSPC)
+				break;
+
+			/* We don't take no for an answer! */
+			err = mutex_lock_interruptible(&vm->mutex);
+			if (err == 0) {
+				err = i915_gem_evict_vm(vm);
+				mutex_unlock(&vm->mutex);
+			}
+			if (err)
+				break;
+		} while (1);
 	}
 	if (!err)
 		atomic_inc(&ppgtt->pin_count);