drm/i915/gtt: Relax pd_used assertion

Message ID 20190820141218.14714-1-chris@chris-wilson.co.uk (mailing list archive)
State New, archived
Series drm/i915/gtt: Relax pd_used assertion

Commit Message

Chris Wilson Aug. 20, 2019, 2:12 p.m. UTC
The current assertion tries to ensure that we do not overcount the
number of PDEs used inside a page directory -- that is, with an array of
512 PDEs, we expect no more than 512 elements to be in use! However, the
assertion must also account for each caller pinning the page directory
itself before pinning an element into it, which temporarily raises the
usage count by one. That is one extra pin per thread, and since we may
have up to one thread per entry, the upper bound is twice the number of
entries.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
---
 drivers/gpu/drm/i915/i915_gem_gtt.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

Comments

Mika Kuoppala Aug. 20, 2019, 2:25 p.m. UTC | #1
Chris Wilson <chris@chris-wilson.co.uk> writes:

> The current assertion tries to ensure that we do not overcount the
> number of PDEs used inside a page directory -- that is, with an array of
> 512 PDEs, we expect no more than 512 elements to be in use! However, the
> assertion must also account for each caller pinning the page directory
> itself before pinning an element into it, which temporarily raises the
> usage count by one. That is one extra pin per thread, and since we may
> have up to one thread per entry, the upper bound is twice the number of
> entries.
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
> ---
>  drivers/gpu/drm/i915/i915_gem_gtt.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
> index e48df11a19fb..9435d184ddf2 100644
> --- a/drivers/gpu/drm/i915/i915_gem_gtt.c
> +++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
> @@ -771,7 +771,8 @@ __set_pd_entry(struct i915_page_directory * const pd,
>  	       struct i915_page_dma * const to,
>  	       u64 (*encode)(const dma_addr_t, const enum i915_cache_level))
>  {
> -	GEM_BUG_ON(atomic_read(px_used(pd)) > ARRAY_SIZE(pd->entry));
> +	/* Each thread pre-pins the pd, and we may have a thread per pde */
> +	GEM_BUG_ON(atomic_read(px_used(pd)) > 2 * ARRAY_SIZE(pd->entry));

When I saw the +1 wrt ARRAY_SIZE, that should have rung some bells
between my ears. I did increase it to +1 for the upper pinning, but
the parallelism escaped me and no more bells were rung.

From IRC: 'the upper page directory' could be added to the commit msg
and to the comment to emphasise why this happens leaf-like.

Thanks,
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>

>
>  	atomic_inc(px_used(pd));
>  	pd->entry[idx] = to;
> -- 
> 2.23.0.rc1
Chris Wilson Aug. 20, 2019, 2:28 p.m. UTC | #2
Quoting Mika Kuoppala (2019-08-20 15:25:50)
> Chris Wilson <chris@chris-wilson.co.uk> writes:
> 
> > [...]
> 
> When I saw the +1 wrt ARRAY_SIZE, that should have rung some bells
> between my ears. I did increase it to +1 for the upper pinning, but
> the parallelism escaped me and no more bells were rung.

It completely escaped me as well, even though I had every reason to make
sure this worked with multiple threads!

Thanks for the review,
-Chris
Patch

diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index e48df11a19fb..9435d184ddf2 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -771,7 +771,8 @@ __set_pd_entry(struct i915_page_directory * const pd,
 	       struct i915_page_dma * const to,
 	       u64 (*encode)(const dma_addr_t, const enum i915_cache_level))
 {
-	GEM_BUG_ON(atomic_read(px_used(pd)) > ARRAY_SIZE(pd->entry));
+	/* Each thread pre-pins the pd, and we may have a thread per pde */
+	GEM_BUG_ON(atomic_read(px_used(pd)) > 2 * ARRAY_SIZE(pd->entry));
 
 	atomic_inc(px_used(pd));
 	pd->entry[idx] = to;
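
The bound in the assertion can be illustrated with a toy, single-file model of the counting argument. The `toy_pd` / `toy_set_pd_entry` names below are invented for this sketch and are not i915 API; only the shape of the check mirrors `__set_pd_entry()`:

```c
/* Toy model (not the actual i915 code): shows why the pin count on a
 * page directory may legitimately reach 2 * ARRAY_SIZE(entry).
 * Worst case: one thread per pde, each holding (a) the pin for its own
 * entry and (b) a transient "caller" pin taken on the directory before
 * insertion. */
#include <assert.h>
#include <stdatomic.h>

#define PDE_COUNT 512

struct toy_pd {
	atomic_int used;           /* models px_used(pd) */
	void *entry[PDE_COUNT];
};

/* Mirrors the shape of __set_pd_entry(): the caller has already
 * pinned the directory once before calling in. */
static void toy_set_pd_entry(struct toy_pd *pd, int idx, void *to)
{
	/* Each entry contributes one pin, plus at most one transient
	 * caller pin per pde => the bound is 2 * PDE_COUNT, never
	 * ARRAY_SIZE alone. */
	assert(atomic_load(&pd->used) <= 2 * PDE_COUNT);
	atomic_fetch_add(&pd->used, 1);
	pd->entry[idx] = to;
}
```

Simulating the worst case single-threadedly (every pde has a caller that first takes its transient pin, then inserts its entry) drives the count to exactly `2 * PDE_COUNT` without ever tripping the assert.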