
drm/i915/selftest: allow larger memory allocation

Message ID wv4aw6syqjox52lpgkddxykr5namvan4eb7b4obj3rligwyp7m@33c3ko2mj7sp (mailing list archive)
State New
Series drm/i915/selftest: allow larger memory allocation

Commit Message

Mikolaj Wasiak March 14, 2025, 2:42 p.m. UTC
Due to changes in the allocator, the size of an allocation for a
contiguous region is no longer rounded up to a power of two; it is
allocated as is. Thus, change the part of the test that expected
the allocation to fail.

Closes: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/9311

Signed-off-by: Mikolaj Wasiak <mikolaj.wasiak@intel.com>
---
 .../gpu/drm/i915/selftests/intel_memory_region.c  | 15 +++++----------
 1 file changed, 5 insertions(+), 10 deletions(-)

Comments

Krzysztof Niemiec March 14, 2025, 5:04 p.m. UTC | #1
Hi Mikolaj,

On 2025-03-14 at 15:42:23 GMT, Mikolaj Wasiak wrote:
> Due to changes in allocator, the size of the allocation for
> contiguous region is not rounded up to a power-of-two and
> instead allocated as is. Thus, change the part of test that
> expected the allocation to fail.
> 
> Closes: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/9311
> 
> Signed-off-by: Mikolaj Wasiak <mikolaj.wasiak@intel.com>
> ---
>  .../gpu/drm/i915/selftests/intel_memory_region.c  | 15 +++++----------
>  1 file changed, 5 insertions(+), 10 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/selftests/intel_memory_region.c b/drivers/gpu/drm/i915/selftests/intel_memory_region.c
> index f08f6674911e..ce848ae878c8 100644
> --- a/drivers/gpu/drm/i915/selftests/intel_memory_region.c
> +++ b/drivers/gpu/drm/i915/selftests/intel_memory_region.c
> @@ -413,22 +413,17 @@ static int igt_mock_splintered_region(void *arg)
>  
>  	close_objects(mem, &objects);
>  
> -	/*
> -	 * While we should be able allocate everything without any flag
> -	 * restrictions, if we consider I915_BO_ALLOC_CONTIGUOUS then we are
> -	 * actually limited to the largest power-of-two for the region size i.e
> -	 * max_order, due to the inner workings of the buddy allocator. So make
> -	 * sure that does indeed hold true.
> -	 */
> -
>  	obj = igt_object_create(mem, &objects, size, I915_BO_ALLOC_CONTIGUOUS);
> -	if (!IS_ERR(obj)) {
> -		pr_err("%s too large contiguous allocation was not rejected\n",
> +	if (!IS_ERR(obj) && !is_contiguous(obj)) {
> +		pr_err("%s large allocation was not contiguous\n",
>  		       __func__);
>  		err = -EINVAL;
>  		goto out_close;
>  	}
>  

If I understand the test correctly, the goal of the part that you're
changing is to see if an attempt at allocating more memory than
max_order is properly rejected. Since the allocations are more granular
now (not only limited to powers of two), and size doesn't get rounded up
to the higher power of two, we should be able to allocate 'size'
exactly. This means we lose the intended functionality of the test
(checking that we can't allocate too big an object), because we're not
allocating too big an object anymore.

I guess a check for contiguousness does not hurt, but the test behavior
is fundamentally different here. Maybe manually rounding size up to the
nearest larger power of two would be a better idea here?

> +	if (!IS_ERR(obj))
> +		close_objects(mem, &objects);
> +
>  	obj = igt_object_create(mem, &objects, rounddown_pow_of_two(size),
>  				I915_BO_ALLOC_CONTIGUOUS);
>  	if (IS_ERR(obj)) {

I'll paste some more lines from that test here:

        obj = igt_object_create(mem, &objects, rounddown_pow_of_two(size),
                                I915_BO_ALLOC_CONTIGUOUS);
        if (IS_ERR(obj)) {
                pr_err("%s largest possible contiguous allocation failed\n",
                       __func__);
                err = PTR_ERR(obj);
                goto out_close;
        }

This is the next check - we see if the largest possible allocation
(according to the old logic, it would be 'size' rounded down to the
largest smaller or equal power of two) _does_ go through. The success of
this particular check isn't affected by the allocator changes, but since
rounddown_pow_of_two(size) is not the largest possible allocation
anymore, maybe it's better to change this too (e.g. drop the rounddown
function). This way we keep the intended test behavior here as well.
I suppose this is still in scope of the patch.

Thanks
Krzysztof

> -- 
> 2.43.0
>
Krzysztof Karas March 18, 2025, 10:39 a.m. UTC | #2
Hi Krzysztof,

> > -	/*
> > -	 * While we should be able allocate everything without any flag
> > -	 * restrictions, if we consider I915_BO_ALLOC_CONTIGUOUS then we are
> > -	 * actually limited to the largest power-of-two for the region size i.e
> > -	 * max_order, due to the inner workings of the buddy allocator. So make
> > -	 * sure that does indeed hold true.
> > -	 */
> > -
> >  	obj = igt_object_create(mem, &objects, size, I915_BO_ALLOC_CONTIGUOUS);
> > -	if (!IS_ERR(obj)) {
> > -		pr_err("%s too large contiguous allocation was not rejected\n",
> > +	if (!IS_ERR(obj) && !is_contiguous(obj)) {
> > +		pr_err("%s large allocation was not contiguous\n",
> >  		       __func__);
> >  		err = -EINVAL;
> >  		goto out_close;
> >  	}
> >  
> 
> If I understand the test correctly, the goal of the part that you're
> changing is to see if an attempt at allocating more memory than
> max_order is properly rejected. Since the allocations are more granular
> now (not only limited to powers of two), and size doesn't get rounded up
> to the higher power of two, we should be able to allocate 'size'
> exactly. Meaning we lose the intended functionality of the test (check
> if we can't allocate too big of an object), because we're not allocating
> too big of an object anymore.

Since the allocator is a lot more lenient now, we could focus on
getting a contiguous object instead.

> I guess a check for contiguousness does not hurt, but the test behavior
> is fundamentally different here. Maybe manually rounding size up to the
> nearest larger power of two would be a better idea here?

Before changes were made to the allocator we knew that there
was a corner case with rounding size to the power of two. Now,
we know that the allocator will take in any appropriate size and
give us a valid object (correct me if I'm wrong here) - if that
is the case, then this should just be a check if we fail on a
size that is too large (unless this is covered by some other
test).

> 
> > +	if (!IS_ERR(obj))
> > +		close_objects(mem, &objects);
> > +

This code would obfuscate the original purpose of this test and
just pass regardless of successful or failed object allocation.

> >  	obj = igt_object_create(mem, &objects, rounddown_pow_of_two(size),
> >  				I915_BO_ALLOC_CONTIGUOUS);
> >  	if (IS_ERR(obj)) {
> 
> I'll paste some more lines from that test here:
> 
>         obj = igt_object_create(mem, &objects, rounddown_pow_of_two(size),
>                                 I915_BO_ALLOC_CONTIGUOUS);
>         if (IS_ERR(obj)) {
>                 pr_err("%s largest possible contiguous allocation failed\n",
>                        __func__);
>                 err = PTR_ERR(obj);
>                 goto out_close;
>         }
> 
> This is the next check - we see if the largest possible allocation
> (according to the old logic, it would be 'size' rounded down to the
> largest smaller or equal power of two) _does_ go through. The success of
> this particular check isn't affected by the allocator changes, but since
> rounddown_pow_of_two(size) is not the largest possible allocation
> anymore, maybe it's better to change this too (e.g. drop the rounddown
> function). This way we keep the intended test behavior here as well.
> I suppose this is still in scope of the patch.

Yes, this would be a bit different than the original behavior,
but we'd be sure to fail object allocation. If no other test
checks this condition then this test could do that here.

Best Regards,
Krzysztof
Krzysztof Niemiec March 18, 2025, 12:54 p.m. UTC | #3
Hi Krzysztof,


On 2025-03-18 at 10:39:41 GMT, Krzysztof Karas wrote:
> Hi Krzysztof,
> 
> > > -	/*
> > > -	 * While we should be able allocate everything without any flag
> > > -	 * restrictions, if we consider I915_BO_ALLOC_CONTIGUOUS then we are
> > > -	 * actually limited to the largest power-of-two for the region size i.e
> > > -	 * max_order, due to the inner workings of the buddy allocator. So make
> > > -	 * sure that does indeed hold true.
> > > -	 */
> > > -
> > >  	obj = igt_object_create(mem, &objects, size, I915_BO_ALLOC_CONTIGUOUS);
> > > -	if (!IS_ERR(obj)) {
> > > -		pr_err("%s too large contiguous allocation was not rejected\n",
> > > +	if (!IS_ERR(obj) && !is_contiguous(obj)) {
> > > +		pr_err("%s large allocation was not contiguous\n",
> > >  		       __func__);
> > >  		err = -EINVAL;
> > >  		goto out_close;
> > >  	}
> > >  
> > 
> > If I understand the test correctly, the goal of the part that you're
> > changing is to see if an attempt at allocating more memory than
> > max_order is properly rejected. Since the allocations are more granular
> > now (not only limited to powers of two), and size doesn't get rounded up
> > to the higher power of two, we should be able to allocate 'size'
> > exactly. Meaning we lose the intended functionality of the test (check
> > if we can't allocate too big of an object), because we're not allocating
> > too big of an object anymore.
> 
> Since the allocator is a lot more lenient now, we could focus on
> getting a contiguous object instead.
> 

I don't think it was the intention of that part of the test, but that's
a valid thing to test against nonetheless.

> > I guess a check for contiguousness does not hurt, but the test behavior
> > is fundamentally different here. Maybe manually rounding size up to the
> > nearest larger power of two would be a better idea here?
> 
> Before changes were made to the allocator we knew that there
> was a corner case with rounding size to the power of two. Now,
> we know that the allocator will take in any appropriate size and
> give us a valid object (correct me if I'm wrong here) - if that
> is the case, then this should just be a check if we fail on a
> size that is too large (unless this is covered by some other
> test).
> 

That's what I meant with rounding up to the power of two - make an
object intentionally larger than we can accommodate so we can check if
that gets rejected. I suggested using that rounding up so we replicate
the previous behavior, but I guess it could as well be size + 1 or size
* 2 or any larger value; the point is to verify that the driver doesn't
allow allocs larger than expected.

> > 
> > > +	if (!IS_ERR(obj))
> > > +		close_objects(mem, &objects);
> > > +
> 
> This code would obfuscate the original purpose of this test and
> just pass regardless of successful or failed object allocation.
> 

(that's part of the original patch, not my reply)

If this part of the test will morph into a contiguousness check, we need
to deallocate for the other check, so those two lines must stay. If that
part keeps being a rejection check, just tweaked to comply with the new
allocator behavior, then the deallocation will be in the error path like
before the change.

> > >  	obj = igt_object_create(mem, &objects, rounddown_pow_of_two(size),
> > >  				I915_BO_ALLOC_CONTIGUOUS);
> > >  	if (IS_ERR(obj)) {
> > 
> > I'll paste some more lines from that test here:
> > 
> >         obj = igt_object_create(mem, &objects, rounddown_pow_of_two(size),
> >                                 I915_BO_ALLOC_CONTIGUOUS);
> >         if (IS_ERR(obj)) {
> >                 pr_err("%s largest possible contiguous allocation failed\n",
> >                        __func__);
> >                 err = PTR_ERR(obj);
> >                 goto out_close;
> >         }
> > 
> > This is the next check - we see if the largest possible allocation
> > (according to the old logic, it would be 'size' rounded down to the
> > largest smaller or equal power of two) _does_ go through. The success of
> > this particular check isn't affected by the allocator changes, but since
> > rounddown_pow_of_two(size) is not the largest possible allocation
> > anymore, maybe it's better to change this too (e.g. drop the rounddown
> > function). This way we keep the intended test behavior here as well.
> > I suppose this is still in scope of the patch.
> 
> Yes, this would be a bit different than the original behavior,
> but we'd be sure to fail object allocation. If no other test
> checks this condition then this test could do that here.
> 

I mean, if this test was doing that, I don't think we should patch that
out for no reason.

IMO we need to tweak those two checks to check for the same thing
functionally but keep the new allocator behavior in mind. Then
contiguousness is a good thing to check, but I think it should be added
as a third check so we don't lose either of the other two.

Thanks
Krzysztof

> Best Regards,
> Krzysztof
Krzysztof Karas March 19, 2025, 12:40 p.m. UTC | #4
Hi Krzysztof,

[...]

> > > I guess a check for contiguousness does not hurt, but the test behavior
> > > is fundamentally different here. Maybe manually rounding size up to the
> > > nearest larger power of two would be a better idea here?
> > 
> > Before changes were made to the allocator we knew that there
> > was a corner case with rounding size to the power of two. Now,
> > we know that the allocator will take in any appropriate size and
> > give us a valid object (correct me if I'm wrong here) - if that
> > is the case, then this should just be a check if we fail on a
> > size that is too large (unless this is covered by some other
> > test).
> > 
> 
> That's what I meant with rounding up to the power of two - make an
> object intentionally larger than we can accommodate so we can check if
> that gets rejected. I suggested using that rounding up so we replicate
> the previous behavior, but I guess it could as well be size + 1 or size
> * 2 or any larger value; the point is to verify that the driver doesn't
> allow allocs larger than expected.

I think what I wanted to convey, but failed to with my wording, is
that I do not think we need to add more complexity to this test
by rounding the size up to the nearest power of two, but rather
make the size explicitly too big (as you mentioned, something
like <max_available_size> + 1 would work).

[...]

> > >         obj = igt_object_create(mem, &objects, rounddown_pow_of_two(size),
> > >                                 I915_BO_ALLOC_CONTIGUOUS);
> > >         if (IS_ERR(obj)) {
> > >                 pr_err("%s largest possible contiguous allocation failed\n",
> > >                        __func__);
> > >                 err = PTR_ERR(obj);
> > >                 goto out_close;
> > >         }
> > > 
> > > This is the next check - we see if the largest possible allocation
> > > (according to the old logic, it would be 'size' rounded down to the
> > > largest smaller or equal power of two) _does_ go through. The success of
> > > this particular check isn't affected by the allocator changes, but since
> > > rounddown_pow_of_two(size) is not the largest possible allocation
> > > anymore, maybe it's better to change this too (e.g. drop the rounddown
> > > function). This way we keep the intended test behavior here as well.
> > > I suppose this is still in scope of the patch.
> > 
> > Yes, this would be a bit different than the original behavior,
> > but we'd be sure to fail object allocation. If no other test
> > checks this condition then this test could do that here.
> > 
> 
> I mean, if this test was doing that, I don't think we should patch that
> out for no reason.

Oh, I got confused and thought that you wanted to add some more
checks there.
I think that we could have this part of the test ensure that the
max possible allocation is still, well, "possible", so this
would turn out to be more of a sanity check.

> IMO we need to tweak those two checks to check for the same thing
> functionally but keep the new allocator behavior in mind. Then
> contiguousness is a good thing to check, but I think it should be added
> as a third check so we don't lose either of the other two.

I agree. We need the following:
 1) Fail check on "too large",
 2) Pass check on "max possible size",
 3) Pass check on contiguous object.

@Mikolaj, could you add these tweaks to this patch?

Best Regards,
Krzysztof

Patch

diff --git a/drivers/gpu/drm/i915/selftests/intel_memory_region.c b/drivers/gpu/drm/i915/selftests/intel_memory_region.c
index f08f6674911e..ce848ae878c8 100644
--- a/drivers/gpu/drm/i915/selftests/intel_memory_region.c
+++ b/drivers/gpu/drm/i915/selftests/intel_memory_region.c
@@ -413,22 +413,17 @@  static int igt_mock_splintered_region(void *arg)
 
 	close_objects(mem, &objects);
 
-	/*
-	 * While we should be able allocate everything without any flag
-	 * restrictions, if we consider I915_BO_ALLOC_CONTIGUOUS then we are
-	 * actually limited to the largest power-of-two for the region size i.e
-	 * max_order, due to the inner workings of the buddy allocator. So make
-	 * sure that does indeed hold true.
-	 */
-
 	obj = igt_object_create(mem, &objects, size, I915_BO_ALLOC_CONTIGUOUS);
-	if (!IS_ERR(obj)) {
-		pr_err("%s too large contiguous allocation was not rejected\n",
+	if (!IS_ERR(obj) && !is_contiguous(obj)) {
+		pr_err("%s large allocation was not contiguous\n",
 		       __func__);
 		err = -EINVAL;
 		goto out_close;
 	}
 
+	if (!IS_ERR(obj))
+		close_objects(mem, &objects);
+
 	obj = igt_object_create(mem, &objects, rounddown_pow_of_two(size),
 				I915_BO_ALLOC_CONTIGUOUS);
 	if (IS_ERR(obj)) {