
drm/panthor: Fix access to uninitialized variable in tick_ctx_cleanup()

Message ID: 20240930161101.67366-1-boris.brezillon@collabora.com
State: New
Series: drm/panthor: Fix access to uninitialized variable in tick_ctx_cleanup()

Commit Message

Boris Brezillon Sept. 30, 2024, 4:11 p.m. UTC
The group variable can't be used to retrieve ptdev in our second loop,
because it might be uninitialized or point to a group that's already
gone. Get the ptdev object from the scheduler instead.

Fixes: d72f049087d4 ("drm/panthor: Allow driver compilation")
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Julia Lawall <julia.lawall@inria.fr>
Closes: https://lore.kernel.org/r/202409302306.UDikqa03-lkp@intel.com/
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
---
 drivers/gpu/drm/panthor/panthor_sched.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
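
For context, here is a minimal, self-contained userspace sketch of the pattern being fixed (simplified, hypothetical struct names, not the real panthor types): once a list_for_each_entry_safe()-style loop terminates, the iterator variable no longer points at a valid element, so dereferencing it outside the loop body, as group->ptdev was, reads unrelated memory.

/*
 * Userspace sketch, assuming a stripped-down reimplementation of the
 * kernel's list iterator.  'struct group' and 'run_node' are stand-ins
 * for the panthor types, not the real definitions.
 */
#include <stddef.h>
#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

#define list_entry(ptr, type, member) container_of(ptr, type, member)

#define list_for_each_entry_safe(pos, n, head, member)			\
	for (pos = list_entry((head)->next, typeof(*pos), member),	\
	     n = list_entry(pos->member.next, typeof(*pos), member);	\
	     &pos->member != (head);					\
	     pos = n, n = list_entry(n->member.next, typeof(*n), member))

struct group {
	int id;
	struct list_head run_node;
};

int main(void)
{
	struct list_head head = { &head, &head };	/* empty list */
	struct group *group, *tmp;

	list_for_each_entry_safe(group, tmp, &head, run_node)
		printf("visiting group %d\n", group->id);

	/*
	 * The loop is over, but 'group' now equals
	 * container_of(&head, struct group, run_node): an address computed
	 * from the on-stack list head, not a real element.  Reading
	 * group->id here (or group->ptdev in panthor_sched.c) is a bug.
	 */
	printf("stale iterator %p vs. list head %p\n",
	       (void *)group, (void *)&head);
	return 0;
}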

Comments

Julia Lawall Sept. 30, 2024, 4:16 p.m. UTC | #1
On Mon, 30 Sep 2024, Boris Brezillon wrote:

> The group variable can't be used to retrieve ptdev in our second loop,
> because it might be uninitialized or point to a group that's already
> gone. Get the ptdev object from the scheduler instead.

Won't it always be pointing to some random place above the list_head at
the start of the list in the last element of the array?

julia

Boris Brezillon Sept. 30, 2024, 4:31 p.m. UTC | #2
On Mon, 30 Sep 2024 18:16:04 +0200 (CEST)
Julia Lawall <julia.lawall@inria.fr> wrote:

> On Mon, 30 Sep 2024, Boris Brezillon wrote:
> 
> > The group variable can't be used to retrieve ptdev in our second loop,
> > because it might be uninitialized or point to a group that's already
> > gone. Get the ptdev object from the scheduler instead.  
> 
> Won't it always be pointing to some random place above the list_head at
> the start of the list in the last element of the array?

Oh, absolutely. I'll fix the commit message and send a v2 shortly.
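
To make the exchange above concrete: list_for_each_entry_safe() stops when &pos->member == head, so on exit the iterator deterministically equals container_of(head, type, member) for the last list walked, a fixed offset before that list head rather than an uninitialized value. A small sketch under those assumptions (userspace C, made-up layout, hypothetical struct names):

#include <stddef.h>
#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

/* Hypothetical stand-ins for the panthor structures. */
struct tick_ctx {
	struct list_head old_groups;
	struct list_head groups[4];	/* one list per priority */
};

struct group {
	void *ptdev;			/* read by the drm_WARN_ON() calls */
	struct list_head run_node;
};

int main(void)
{
	struct tick_ctx ctx;		/* contents irrelevant, only addresses used */
	struct list_head *last_head = &ctx.groups[3];

	/*
	 * What the iterator holds once the final
	 * list_for_each_entry_safe(group, tmp, &ctx->groups[i], run_node)
	 * pass terminates: the last list head reinterpreted as a group.
	 */
	struct group *group = (struct group *)((char *)last_head -
					       offsetof(struct group, run_node));

	printf("last list head at %p\n", (void *)last_head);
	printf("stale 'group' at  %p (%zu bytes before it)\n",
	       (void *)group, offsetof(struct group, run_node));
	/*
	 * group->ptdev would be read from the bytes just before
	 * ctx.groups[3]: deterministic, but not a panthor_device.
	 */
	return 0;
}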

Patch

diff --git a/drivers/gpu/drm/panthor/panthor_sched.c b/drivers/gpu/drm/panthor/panthor_sched.c
index 201d5e7a921e..24ff91c084e4 100644
--- a/drivers/gpu/drm/panthor/panthor_sched.c
+++ b/drivers/gpu/drm/panthor/panthor_sched.c
@@ -2052,6 +2052,7 @@ static void
 tick_ctx_cleanup(struct panthor_scheduler *sched,
 		 struct panthor_sched_tick_ctx *ctx)
 {
+	struct panthor_device *ptdev = sched->ptdev;
 	struct panthor_group *group, *tmp;
 	u32 i;
 
@@ -2060,7 +2061,7 @@ tick_ctx_cleanup(struct panthor_scheduler *sched,
 			/* If everything went fine, we should only have groups
 			 * to be terminated in the old_groups lists.
 			 */
-			drm_WARN_ON(&group->ptdev->base, !ctx->csg_upd_failed_mask &&
+			drm_WARN_ON(&ptdev->base, !ctx->csg_upd_failed_mask &&
 				    group_can_run(group));
 
 			if (!group_can_run(group)) {
@@ -2083,7 +2084,7 @@ tick_ctx_cleanup(struct panthor_scheduler *sched,
 		/* If everything went fine, the groups to schedule lists should
 		 * be empty.
 		 */
-		drm_WARN_ON(&group->ptdev->base,
+		drm_WARN_ON(&ptdev->base,
 			    !ctx->csg_upd_failed_mask && !list_empty(&ctx->groups[i]));
 
 		list_for_each_entry_safe(group, tmp, &ctx->groups[i], run_node) {