[v6,1/2] drm/panthor: Expose size of driver internal BO's over fdinfo

Message ID 20250102203817.956790-2-adrian.larumbe@collabora.com (mailing list archive)
State New
Series drm/panthor: Display size of internal kernel BOs through fdinfo

Commit Message

Adrián Martínez Larumbe Jan. 2, 2025, 8:38 p.m. UTC
This will display the sizes of kernel BO's bound to an open file, which are
otherwise not exposed to userspace through a handle.

The sizes recorded are as follows:
 - Per group: suspend buffer, protm-suspend buffer, syncobjs
 - Per queue: ringbuffer, profiling slots, firmware interface
 - For all heaps in all heap pools across all VMs bound to an open file,
 the size of all heap chunks, plus each pool's gpu_context BO.

This does not record the size of FW regions, as these aren't bound to a
specific open file and remain active throughout the driver's whole lifetime.
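
For illustration, this is roughly what the new fdinfo output could look
like, as printed by drm_print_memory_stats() for the "internal" region
with only the RESIDENT status flag set (sizes below are made up; actual
values depend on the groups, queues and heaps the file has created):

    drm-total-internal:     608 KiB
    drm-shared-internal:    0
    drm-active-internal:    416 KiB
    drm-resident-internal:  608 KiB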

Reviewed-by: Liviu Dudau <liviu.dudau@arm.com>
Reviewed-by: Mihail Atanassov <mihail.atanassov@arm.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
---
 drivers/gpu/drm/panthor/panthor_drv.c   | 12 ++++++
 drivers/gpu/drm/panthor/panthor_heap.c  | 26 +++++++++++++
 drivers/gpu/drm/panthor/panthor_heap.h  |  2 +
 drivers/gpu/drm/panthor/panthor_mmu.c   | 35 +++++++++++++++++
 drivers/gpu/drm/panthor/panthor_mmu.h   |  3 ++
 drivers/gpu/drm/panthor/panthor_sched.c | 52 ++++++++++++++++++++++++-
 drivers/gpu/drm/panthor/panthor_sched.h |  2 +
 7 files changed, 131 insertions(+), 1 deletion(-)

Comments

kernel test robot Jan. 3, 2025, 1:27 a.m. UTC | #1
Hi Adrián,

kernel test robot noticed the following build warnings:

[auto build test WARNING on drm-misc/drm-misc-next]
[also build test WARNING on next-20241220]
[cannot apply to linus/master v6.13-rc5]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Adri-n-Larumbe/drm-panthor-Expose-size-of-driver-internal-BO-s-over-fdinfo/20250103-044151
base:   git://anongit.freedesktop.org/drm/drm-misc drm-misc-next
patch link:    https://lore.kernel.org/r/20250102203817.956790-2-adrian.larumbe%40collabora.com
patch subject: [PATCH v6 1/2] drm/panthor: Expose size of driver internal BO's over fdinfo
config: i386-buildonly-randconfig-004-20250103 (https://download.01.org/0day-ci/archive/20250103/202501030900.s8FkUPV1-lkp@intel.com/config)
compiler: clang version 19.1.3 (https://github.com/llvm/llvm-project ab51eccf88f5321e7c60591c5546b254b6afab99)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250103/202501030900.s8FkUPV1-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202501030900.s8FkUPV1-lkp@intel.com/

All warnings (new ones prefixed by >>):

   drivers/gpu/drm/panthor/panthor_sched.c:320: warning: Excess struct member 'runnable' description in 'panthor_scheduler'
   drivers/gpu/drm/panthor/panthor_sched.c:320: warning: Excess struct member 'idle' description in 'panthor_scheduler'
   drivers/gpu/drm/panthor/panthor_sched.c:320: warning: Excess struct member 'waiting' description in 'panthor_scheduler'
   drivers/gpu/drm/panthor/panthor_sched.c:320: warning: Excess struct member 'has_ref' description in 'panthor_scheduler'
   drivers/gpu/drm/panthor/panthor_sched.c:320: warning: Excess struct member 'in_progress' description in 'panthor_scheduler'
   drivers/gpu/drm/panthor/panthor_sched.c:320: warning: Excess struct member 'stopped_groups' description in 'panthor_scheduler'
   drivers/gpu/drm/panthor/panthor_sched.c:492: warning: Excess struct member 'mem' description in 'panthor_queue'
   drivers/gpu/drm/panthor/panthor_sched.c:492: warning: Excess struct member 'input' description in 'panthor_queue'
   drivers/gpu/drm/panthor/panthor_sched.c:492: warning: Excess struct member 'output' description in 'panthor_queue'
   drivers/gpu/drm/panthor/panthor_sched.c:492: warning: Excess struct member 'input_fw_va' description in 'panthor_queue'
   drivers/gpu/drm/panthor/panthor_sched.c:492: warning: Excess struct member 'output_fw_va' description in 'panthor_queue'
   drivers/gpu/drm/panthor/panthor_sched.c:492: warning: Excess struct member 'gpu_va' description in 'panthor_queue'
   drivers/gpu/drm/panthor/panthor_sched.c:492: warning: Excess struct member 'ref' description in 'panthor_queue'
   drivers/gpu/drm/panthor/panthor_sched.c:492: warning: Excess struct member 'gt' description in 'panthor_queue'
   drivers/gpu/drm/panthor/panthor_sched.c:492: warning: Excess struct member 'sync64' description in 'panthor_queue'
   drivers/gpu/drm/panthor/panthor_sched.c:492: warning: Excess struct member 'bo' description in 'panthor_queue'
   drivers/gpu/drm/panthor/panthor_sched.c:492: warning: Excess struct member 'offset' description in 'panthor_queue'
   drivers/gpu/drm/panthor/panthor_sched.c:492: warning: Excess struct member 'kmap' description in 'panthor_queue'
   drivers/gpu/drm/panthor/panthor_sched.c:492: warning: Excess struct member 'lock' description in 'panthor_queue'
   drivers/gpu/drm/panthor/panthor_sched.c:492: warning: Excess struct member 'id' description in 'panthor_queue'
   drivers/gpu/drm/panthor/panthor_sched.c:492: warning: Excess struct member 'seqno' description in 'panthor_queue'
   drivers/gpu/drm/panthor/panthor_sched.c:492: warning: Excess struct member 'last_fence' description in 'panthor_queue'
   drivers/gpu/drm/panthor/panthor_sched.c:492: warning: Excess struct member 'in_flight_jobs' description in 'panthor_queue'
   drivers/gpu/drm/panthor/panthor_sched.c:492: warning: Excess struct member 'slots' description in 'panthor_queue'
   drivers/gpu/drm/panthor/panthor_sched.c:492: warning: Excess struct member 'slot_count' description in 'panthor_queue'
   drivers/gpu/drm/panthor/panthor_sched.c:492: warning: Excess struct member 'seqno' description in 'panthor_queue'
   drivers/gpu/drm/panthor/panthor_sched.c:701: warning: Excess struct member 'data' description in 'panthor_group'
   drivers/gpu/drm/panthor/panthor_sched.c:701: warning: Excess struct member 'lock' description in 'panthor_group'
>> drivers/gpu/drm/panthor/panthor_sched.c:701: warning: Excess struct member 'kbo_sizes' description in 'panthor_group'
   drivers/gpu/drm/panthor/panthor_sched.c:837: warning: Excess struct member 'start' description in 'panthor_job'
   drivers/gpu/drm/panthor/panthor_sched.c:837: warning: Excess struct member 'size' description in 'panthor_job'
   drivers/gpu/drm/panthor/panthor_sched.c:837: warning: Excess struct member 'latest_flush' description in 'panthor_job'
   drivers/gpu/drm/panthor/panthor_sched.c:837: warning: Excess struct member 'start' description in 'panthor_job'
   drivers/gpu/drm/panthor/panthor_sched.c:837: warning: Excess struct member 'end' description in 'panthor_job'
   drivers/gpu/drm/panthor/panthor_sched.c:837: warning: Excess struct member 'mask' description in 'panthor_job'
   drivers/gpu/drm/panthor/panthor_sched.c:837: warning: Excess struct member 'slot' description in 'panthor_job'
   drivers/gpu/drm/panthor/panthor_sched.c:1766: warning: Function parameter or struct member 'ptdev' not described in 'panthor_sched_report_fw_events'
   drivers/gpu/drm/panthor/panthor_sched.c:1766: warning: Function parameter or struct member 'events' not described in 'panthor_sched_report_fw_events'
   drivers/gpu/drm/panthor/panthor_sched.c:2659: warning: Function parameter or struct member 'ptdev' not described in 'panthor_sched_report_mmu_fault'


vim +701 drivers/gpu/drm/panthor/panthor_sched.c

de85488138247d Boris Brezillon 2024-02-29  529  
de85488138247d Boris Brezillon 2024-02-29  530  /**
de85488138247d Boris Brezillon 2024-02-29  531   * struct panthor_group - Scheduling group object
de85488138247d Boris Brezillon 2024-02-29  532   */
de85488138247d Boris Brezillon 2024-02-29  533  struct panthor_group {
de85488138247d Boris Brezillon 2024-02-29  534  	/** @refcount: Reference count */
de85488138247d Boris Brezillon 2024-02-29  535  	struct kref refcount;
de85488138247d Boris Brezillon 2024-02-29  536  
de85488138247d Boris Brezillon 2024-02-29  537  	/** @ptdev: Device. */
de85488138247d Boris Brezillon 2024-02-29  538  	struct panthor_device *ptdev;
de85488138247d Boris Brezillon 2024-02-29  539  
de85488138247d Boris Brezillon 2024-02-29  540  	/** @vm: VM bound to the group. */
de85488138247d Boris Brezillon 2024-02-29  541  	struct panthor_vm *vm;
de85488138247d Boris Brezillon 2024-02-29  542  
de85488138247d Boris Brezillon 2024-02-29  543  	/** @compute_core_mask: Mask of shader cores that can be used for compute jobs. */
de85488138247d Boris Brezillon 2024-02-29  544  	u64 compute_core_mask;
de85488138247d Boris Brezillon 2024-02-29  545  
de85488138247d Boris Brezillon 2024-02-29  546  	/** @fragment_core_mask: Mask of shader cores that can be used for fragment jobs. */
de85488138247d Boris Brezillon 2024-02-29  547  	u64 fragment_core_mask;
de85488138247d Boris Brezillon 2024-02-29  548  
de85488138247d Boris Brezillon 2024-02-29  549  	/** @tiler_core_mask: Mask of tiler cores that can be used for tiler jobs. */
de85488138247d Boris Brezillon 2024-02-29  550  	u64 tiler_core_mask;
de85488138247d Boris Brezillon 2024-02-29  551  
de85488138247d Boris Brezillon 2024-02-29  552  	/** @max_compute_cores: Maximum number of shader cores used for compute jobs. */
de85488138247d Boris Brezillon 2024-02-29  553  	u8 max_compute_cores;
de85488138247d Boris Brezillon 2024-02-29  554  
be7ffc821f5fc2 Liviu Dudau     2024-04-02  555  	/** @max_fragment_cores: Maximum number of shader cores used for fragment jobs. */
de85488138247d Boris Brezillon 2024-02-29  556  	u8 max_fragment_cores;
de85488138247d Boris Brezillon 2024-02-29  557  
de85488138247d Boris Brezillon 2024-02-29  558  	/** @max_tiler_cores: Maximum number of tiler cores used for tiler jobs. */
de85488138247d Boris Brezillon 2024-02-29  559  	u8 max_tiler_cores;
de85488138247d Boris Brezillon 2024-02-29  560  
de85488138247d Boris Brezillon 2024-02-29  561  	/** @priority: Group priority (check panthor_csg_priority). */
de85488138247d Boris Brezillon 2024-02-29  562  	u8 priority;
de85488138247d Boris Brezillon 2024-02-29  563  
de85488138247d Boris Brezillon 2024-02-29  564  	/** @blocked_queues: Bitmask reflecting the blocked queues. */
de85488138247d Boris Brezillon 2024-02-29  565  	u32 blocked_queues;
de85488138247d Boris Brezillon 2024-02-29  566  
de85488138247d Boris Brezillon 2024-02-29  567  	/** @idle_queues: Bitmask reflecting the idle queues. */
de85488138247d Boris Brezillon 2024-02-29  568  	u32 idle_queues;
de85488138247d Boris Brezillon 2024-02-29  569  
de85488138247d Boris Brezillon 2024-02-29  570  	/** @fatal_lock: Lock used to protect access to fatal fields. */
de85488138247d Boris Brezillon 2024-02-29  571  	spinlock_t fatal_lock;
de85488138247d Boris Brezillon 2024-02-29  572  
de85488138247d Boris Brezillon 2024-02-29  573  	/** @fatal_queues: Bitmask reflecting the queues that hit a fatal exception. */
de85488138247d Boris Brezillon 2024-02-29  574  	u32 fatal_queues;
de85488138247d Boris Brezillon 2024-02-29  575  
de85488138247d Boris Brezillon 2024-02-29  576  	/** @tiler_oom: Mask of queues that have a tiler OOM event to process. */
de85488138247d Boris Brezillon 2024-02-29  577  	atomic_t tiler_oom;
de85488138247d Boris Brezillon 2024-02-29  578  
de85488138247d Boris Brezillon 2024-02-29  579  	/** @queue_count: Number of queues in this group. */
de85488138247d Boris Brezillon 2024-02-29  580  	u32 queue_count;
de85488138247d Boris Brezillon 2024-02-29  581  
de85488138247d Boris Brezillon 2024-02-29  582  	/** @queues: Queues owned by this group. */
de85488138247d Boris Brezillon 2024-02-29  583  	struct panthor_queue *queues[MAX_CS_PER_CSG];
de85488138247d Boris Brezillon 2024-02-29  584  
de85488138247d Boris Brezillon 2024-02-29  585  	/**
de85488138247d Boris Brezillon 2024-02-29  586  	 * @csg_id: ID of the FW group slot.
de85488138247d Boris Brezillon 2024-02-29  587  	 *
de85488138247d Boris Brezillon 2024-02-29  588  	 * -1 when the group is not scheduled/active.
de85488138247d Boris Brezillon 2024-02-29  589  	 */
de85488138247d Boris Brezillon 2024-02-29  590  	int csg_id;
de85488138247d Boris Brezillon 2024-02-29  591  
de85488138247d Boris Brezillon 2024-02-29  592  	/**
de85488138247d Boris Brezillon 2024-02-29  593  	 * @destroyed: True when the group has been destroyed.
de85488138247d Boris Brezillon 2024-02-29  594  	 *
de85488138247d Boris Brezillon 2024-02-29  595  	 * If a group is destroyed it becomes useless: no further jobs can be submitted
de85488138247d Boris Brezillon 2024-02-29  596  	 * to its queues. We simply wait for all references to be dropped so we can
de85488138247d Boris Brezillon 2024-02-29  597  	 * release the group object.
de85488138247d Boris Brezillon 2024-02-29  598  	 */
de85488138247d Boris Brezillon 2024-02-29  599  	bool destroyed;
de85488138247d Boris Brezillon 2024-02-29  600  
de85488138247d Boris Brezillon 2024-02-29  601  	/**
de85488138247d Boris Brezillon 2024-02-29  602  	 * @timedout: True when a timeout occurred on any of the queues owned by
de85488138247d Boris Brezillon 2024-02-29  603  	 * this group.
de85488138247d Boris Brezillon 2024-02-29  604  	 *
4700fd3e050da8 Boris Brezillon 2024-10-29  605  	 * Timeouts can be reported by drm_sched or by the FW. If a reset is required,
4700fd3e050da8 Boris Brezillon 2024-10-29  606  	 * and the group can't be suspended, this also leads to a timeout. In any case,
4700fd3e050da8 Boris Brezillon 2024-10-29  607  	 * any timeout situation is unrecoverable, and the group becomes useless. We
4700fd3e050da8 Boris Brezillon 2024-10-29  608  	 * simply wait for all references to be dropped so we can release the group
4700fd3e050da8 Boris Brezillon 2024-10-29  609  	 * object.
de85488138247d Boris Brezillon 2024-02-29  610  	 */
de85488138247d Boris Brezillon 2024-02-29  611  	bool timedout;
de85488138247d Boris Brezillon 2024-02-29  612  
4181576d85c642 Boris Brezillon 2024-12-11  613  	/**
4181576d85c642 Boris Brezillon 2024-12-11  614  	 * @innocent: True when the group becomes unusable because the group suspension
4181576d85c642 Boris Brezillon 2024-12-11  615  	 * failed during a reset.
4181576d85c642 Boris Brezillon 2024-12-11  616  	 *
4181576d85c642 Boris Brezillon 2024-12-11  617  	 * Sometimes the FW was put in a bad state by other groups, causing the group
4181576d85c642 Boris Brezillon 2024-12-11  618  	 * suspension happening in the reset path to fail. In that case, we consider the
4181576d85c642 Boris Brezillon 2024-12-11  619  	 * group innocent.
4181576d85c642 Boris Brezillon 2024-12-11  620  	 */
4181576d85c642 Boris Brezillon 2024-12-11  621  	bool innocent;
4181576d85c642 Boris Brezillon 2024-12-11  622  
de85488138247d Boris Brezillon 2024-02-29  623  	/**
de85488138247d Boris Brezillon 2024-02-29  624  	 * @syncobjs: Pool of per-queue synchronization objects.
de85488138247d Boris Brezillon 2024-02-29  625  	 *
de85488138247d Boris Brezillon 2024-02-29  626  	 * One sync object per queue. The position of the sync object is
de85488138247d Boris Brezillon 2024-02-29  627  	 * determined by the queue index.
de85488138247d Boris Brezillon 2024-02-29  628  	 */
de85488138247d Boris Brezillon 2024-02-29  629  	struct panthor_kernel_bo *syncobjs;
de85488138247d Boris Brezillon 2024-02-29  630  
1026b1b65955e8 Adrián Larumbe  2025-01-02  631  	/** @fdinfo: Per-group total cycle and timestamp values and kernel BO sizes. */
e16635d88fa07b Adrián Larumbe  2024-09-24  632  	struct {
e16635d88fa07b Adrián Larumbe  2024-09-24  633  		/** @data: Total sampled values for jobs in queues from this group. */
e16635d88fa07b Adrián Larumbe  2024-09-24  634  		struct panthor_gpu_usage data;
e16635d88fa07b Adrián Larumbe  2024-09-24  635  
e16635d88fa07b Adrián Larumbe  2024-09-24  636  		/**
e16635d88fa07b Adrián Larumbe  2024-09-24  637  		 * @lock: Mutex to govern concurrent access from drm file's fdinfo callback
e16635d88fa07b Adrián Larumbe  2024-09-24  638  		 * and job post-completion processing function
e16635d88fa07b Adrián Larumbe  2024-09-24  639  		 */
e16635d88fa07b Adrián Larumbe  2024-09-24  640  		struct mutex lock;
1026b1b65955e8 Adrián Larumbe  2025-01-02  641  
1026b1b65955e8 Adrián Larumbe  2025-01-02  642  		/** @kbo_sizes: Aggregate size of private kernel BO's held by the group. */
1026b1b65955e8 Adrián Larumbe  2025-01-02  643  		size_t kbo_sizes;
e16635d88fa07b Adrián Larumbe  2024-09-24  644  	} fdinfo;
e16635d88fa07b Adrián Larumbe  2024-09-24  645  
de85488138247d Boris Brezillon 2024-02-29  646  	/** @state: Group state. */
de85488138247d Boris Brezillon 2024-02-29  647  	enum panthor_group_state state;
de85488138247d Boris Brezillon 2024-02-29  648  
de85488138247d Boris Brezillon 2024-02-29  649  	/**
de85488138247d Boris Brezillon 2024-02-29  650  	 * @suspend_buf: Suspend buffer.
de85488138247d Boris Brezillon 2024-02-29  651  	 *
de85488138247d Boris Brezillon 2024-02-29  652  	 * Stores the state of the group and its queues when a group is suspended.
de85488138247d Boris Brezillon 2024-02-29  653  	 * Used at resume time to restore the group in its previous state.
de85488138247d Boris Brezillon 2024-02-29  654  	 *
de85488138247d Boris Brezillon 2024-02-29  655  	 * The size of the suspend buffer is exposed through the FW interface.
de85488138247d Boris Brezillon 2024-02-29  656  	 */
de85488138247d Boris Brezillon 2024-02-29  657  	struct panthor_kernel_bo *suspend_buf;
de85488138247d Boris Brezillon 2024-02-29  658  
de85488138247d Boris Brezillon 2024-02-29  659  	/**
de85488138247d Boris Brezillon 2024-02-29  660  	 * @protm_suspend_buf: Protection mode suspend buffer.
de85488138247d Boris Brezillon 2024-02-29  661  	 *
de85488138247d Boris Brezillon 2024-02-29  662  	 * Stores the state of the group and its queues when a group that's in
de85488138247d Boris Brezillon 2024-02-29  663  	 * protection mode is suspended.
de85488138247d Boris Brezillon 2024-02-29  664  	 *
de85488138247d Boris Brezillon 2024-02-29  665  	 * Used at resume time to restore the group in its previous state.
de85488138247d Boris Brezillon 2024-02-29  666  	 *
de85488138247d Boris Brezillon 2024-02-29  667  	 * The size of the protection mode suspend buffer is exposed through the
de85488138247d Boris Brezillon 2024-02-29  668  	 * FW interface.
de85488138247d Boris Brezillon 2024-02-29  669  	 */
de85488138247d Boris Brezillon 2024-02-29  670  	struct panthor_kernel_bo *protm_suspend_buf;
de85488138247d Boris Brezillon 2024-02-29  671  
de85488138247d Boris Brezillon 2024-02-29  672  	/** @sync_upd_work: Work used to check/signal job fences. */
de85488138247d Boris Brezillon 2024-02-29  673  	struct work_struct sync_upd_work;
de85488138247d Boris Brezillon 2024-02-29  674  
de85488138247d Boris Brezillon 2024-02-29  675  	/** @tiler_oom_work: Work used to process tiler OOM events happening on this group. */
de85488138247d Boris Brezillon 2024-02-29  676  	struct work_struct tiler_oom_work;
de85488138247d Boris Brezillon 2024-02-29  677  
de85488138247d Boris Brezillon 2024-02-29  678  	/** @term_work: Work used to finish the group termination procedure. */
de85488138247d Boris Brezillon 2024-02-29  679  	struct work_struct term_work;
de85488138247d Boris Brezillon 2024-02-29  680  
de85488138247d Boris Brezillon 2024-02-29  681  	/**
de85488138247d Boris Brezillon 2024-02-29  682  	 * @release_work: Work used to release group resources.
de85488138247d Boris Brezillon 2024-02-29  683  	 *
de85488138247d Boris Brezillon 2024-02-29  684  	 * We need to postpone the group release to avoid a deadlock when
de85488138247d Boris Brezillon 2024-02-29  685  	 * the last ref is released in the tick work.
de85488138247d Boris Brezillon 2024-02-29  686  	 */
de85488138247d Boris Brezillon 2024-02-29  687  	struct work_struct release_work;
de85488138247d Boris Brezillon 2024-02-29  688  
de85488138247d Boris Brezillon 2024-02-29  689  	/**
de85488138247d Boris Brezillon 2024-02-29  690  	 * @run_node: Node used to insert the group in the
de85488138247d Boris Brezillon 2024-02-29  691  	 * panthor_group::groups::{runnable,idle} and
de85488138247d Boris Brezillon 2024-02-29  692  	 * panthor_group::reset.stopped_groups lists.
de85488138247d Boris Brezillon 2024-02-29  693  	 */
de85488138247d Boris Brezillon 2024-02-29  694  	struct list_head run_node;
de85488138247d Boris Brezillon 2024-02-29  695  
de85488138247d Boris Brezillon 2024-02-29  696  	/**
de85488138247d Boris Brezillon 2024-02-29  697  	 * @wait_node: Node used to insert the group in the
de85488138247d Boris Brezillon 2024-02-29  698  	 * panthor_group::groups::waiting list.
de85488138247d Boris Brezillon 2024-02-29  699  	 */
de85488138247d Boris Brezillon 2024-02-29  700  	struct list_head wait_node;
de85488138247d Boris Brezillon 2024-02-29 @701  };
de85488138247d Boris Brezillon 2024-02-29  702
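
Note: all but the ">>"-marked kbo_sizes warning predate this patch. These
warnings come from kernel-doc failing to match members documented inline
inside an anonymous nested struct against the enclosing structure. As a
sketch of one way such warnings are commonly addressed (an assumption, not
a fix proposed in this thread), the nested member can be documented from
the top-level comment using a dotted name:

    /**
     * struct panthor_group - Scheduling group object
     *
     * @fdinfo.kbo_sizes: Aggregate size of private kernel BO's held by the group.
     */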

Patch

diff --git a/drivers/gpu/drm/panthor/panthor_drv.c b/drivers/gpu/drm/panthor/panthor_drv.c
index ac7e53f6e3f0..8e27d0429019 100644
--- a/drivers/gpu/drm/panthor/panthor_drv.c
+++ b/drivers/gpu/drm/panthor/panthor_drv.c
@@ -1457,12 +1457,24 @@  static void panthor_gpu_show_fdinfo(struct panthor_device *ptdev,
 	drm_printf(p, "drm-curfreq-panthor:\t%lu Hz\n", ptdev->current_frequency);
 }
 
+static void panthor_show_internal_memory_stats(struct drm_printer *p, struct drm_file *file)
+{
+	struct panthor_file *pfile = file->driver_priv;
+	struct drm_memory_stats status = {0};
+
+	panthor_group_kbo_sizes(pfile, &status);
+	panthor_vm_heaps_sizes(pfile, &status);
+
+	drm_print_memory_stats(p, &status, DRM_GEM_OBJECT_RESIDENT, "internal");
+}
+
 static void panthor_show_fdinfo(struct drm_printer *p, struct drm_file *file)
 {
 	struct drm_device *dev = file->minor->dev;
 	struct panthor_device *ptdev = container_of(dev, struct panthor_device, base);
 
 	panthor_gpu_show_fdinfo(ptdev, file->driver_priv, p);
+	panthor_show_internal_memory_stats(p, file);
 
 	drm_show_memory_stats(p, file);
 }
diff --git a/drivers/gpu/drm/panthor/panthor_heap.c b/drivers/gpu/drm/panthor/panthor_heap.c
index 3796a9eb22af..db0285ce5812 100644
--- a/drivers/gpu/drm/panthor/panthor_heap.c
+++ b/drivers/gpu/drm/panthor/panthor_heap.c
@@ -603,3 +603,29 @@  void panthor_heap_pool_destroy(struct panthor_heap_pool *pool)
 
 	panthor_heap_pool_put(pool);
 }
+
+/**
+ * panthor_heap_pool_size() - Calculate size of all chunks across all heaps in a pool
+ * @pool: Pool whose total chunk size to calculate.
+ *
+ * This function adds up the size of all heap chunks across all heaps in the
+ * given pool, plus the size of the GPU contexts kernel BO. It is meant for
+ * fdinfo, to display internal driver BO's not exposed through a GEM handle.
+ *
+ * Return: Aggregate size in bytes of the pool's heap chunks and GPU contexts BO.
+ */
+size_t panthor_heap_pool_size(struct panthor_heap_pool *pool)
+{
+	struct panthor_heap *heap;
+	unsigned long i;
+	size_t size = 0;
+
+	down_read(&pool->lock);
+	xa_for_each(&pool->xa, i, heap)
+		size += heap->chunk_size * heap->chunk_count;
+	up_read(&pool->lock);
+
+	size += pool->gpu_contexts->obj->size;
+
+	return size;
+}
diff --git a/drivers/gpu/drm/panthor/panthor_heap.h b/drivers/gpu/drm/panthor/panthor_heap.h
index 25a5f2bba445..e3358d4e8edb 100644
--- a/drivers/gpu/drm/panthor/panthor_heap.h
+++ b/drivers/gpu/drm/panthor/panthor_heap.h
@@ -27,6 +27,8 @@  struct panthor_heap_pool *
 panthor_heap_pool_get(struct panthor_heap_pool *pool);
 void panthor_heap_pool_put(struct panthor_heap_pool *pool);
 
+size_t panthor_heap_pool_size(struct panthor_heap_pool *pool);
+
 int panthor_heap_grow(struct panthor_heap_pool *pool,
 		      u64 heap_gpu_va,
 		      u32 renderpasses_in_flight,
diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
index c3f0b0225cf9..aee649c09cff 100644
--- a/drivers/gpu/drm/panthor/panthor_mmu.c
+++ b/drivers/gpu/drm/panthor/panthor_mmu.c
@@ -1941,6 +1941,41 @@  struct panthor_heap_pool *panthor_vm_get_heap_pool(struct panthor_vm *vm, bool c
 	return pool;
 }
 
+/**
+ * panthor_vm_heaps_sizes() - Calculate size of all heap chunks across all
+ * heaps over all the heap pools of every VM bound to a file
+ * @pfile: File.
+ * @status: Memory status to be updated.
+ *
+ * Calculate all heap chunk sizes in all heap pools of all VMs bound to a file.
+ * If a VM is active, record its size as active as well.
+ */
+void panthor_vm_heaps_sizes(struct panthor_file *pfile, struct drm_memory_stats *status)
+{
+	struct panthor_vm *vm;
+	unsigned long i;
+
+	if (!pfile->vms)
+		return;
+
+	xa_for_each(&pfile->vms->xa, i, vm) {
+		size_t size;
+
+		mutex_lock(&vm->heaps.lock);
+		if (!vm->heaps.pool) {
+			mutex_unlock(&vm->heaps.lock);
+			continue;
+		}
+		size = panthor_heap_pool_size(vm->heaps.pool);
+		mutex_unlock(&vm->heaps.lock);
+
+		status->resident += size;
+		status->private += size;
+		if (vm->as.id >= 0)
+			status->active += size;
+	}
+}
+
 static u64 mair_to_memattr(u64 mair, bool coherent)
 {
 	u64 memattr = 0;
diff --git a/drivers/gpu/drm/panthor/panthor_mmu.h b/drivers/gpu/drm/panthor/panthor_mmu.h
index 8d21e83d8aba..494c36d732b4 100644
--- a/drivers/gpu/drm/panthor/panthor_mmu.h
+++ b/drivers/gpu/drm/panthor/panthor_mmu.h
@@ -9,6 +9,7 @@ 
 
 struct drm_exec;
 struct drm_sched_job;
+struct drm_memory_stats;
 struct panthor_gem_object;
 struct panthor_heap_pool;
 struct panthor_vm;
@@ -37,6 +38,8 @@  int panthor_vm_flush_all(struct panthor_vm *vm);
 struct panthor_heap_pool *
 panthor_vm_get_heap_pool(struct panthor_vm *vm, bool create);
 
+void panthor_vm_heaps_sizes(struct panthor_file *pfile, struct drm_memory_stats *status);
+
 struct panthor_vm *panthor_vm_get(struct panthor_vm *vm);
 void panthor_vm_put(struct panthor_vm *vm);
 struct panthor_vm *panthor_vm_create(struct panthor_device *ptdev, bool for_mcu,
diff --git a/drivers/gpu/drm/panthor/panthor_sched.c b/drivers/gpu/drm/panthor/panthor_sched.c
index ef4bec7ff9c7..a0a770b53c4b 100644
--- a/drivers/gpu/drm/panthor/panthor_sched.c
+++ b/drivers/gpu/drm/panthor/panthor_sched.c
@@ -618,7 +618,7 @@  struct panthor_group {
 	 */
 	struct panthor_kernel_bo *syncobjs;
 
-	/** @fdinfo: Per-file total cycle and timestamp values reference. */
+	/** @fdinfo: Per-group total cycle and timestamp values and kernel BO sizes. */
 	struct {
 		/** @data: Total sampled values for jobs in queues from this group. */
 		struct panthor_gpu_usage data;
@@ -628,6 +628,9 @@  struct panthor_group {
 		 * and job post-completion processing function
 		 */
 		struct mutex lock;
+
+		/** @kbo_sizes: Aggregate size of private kernel BO's held by the group. */
+		size_t kbo_sizes;
 	} fdinfo;
 
 	/** @state: Group state. */
@@ -3365,6 +3368,29 @@  group_create_queue(struct panthor_group *group,
 	return ERR_PTR(ret);
 }
 
+static void add_group_kbo_sizes(struct panthor_device *ptdev,
+				struct panthor_group *group)
+{
+	struct panthor_queue *queue;
+	int i;
+
+	if (drm_WARN_ON(&ptdev->base, IS_ERR_OR_NULL(group)))
+		return;
+	if (drm_WARN_ON(&ptdev->base, ptdev != group->ptdev))
+		return;
+
+	group->fdinfo.kbo_sizes += group->suspend_buf->obj->size;
+	group->fdinfo.kbo_sizes += group->protm_suspend_buf->obj->size;
+	group->fdinfo.kbo_sizes += group->syncobjs->obj->size;
+
+	for (i = 0; i < group->queue_count; i++) {
+		queue = group->queues[i];
+		group->fdinfo.kbo_sizes += queue->ringbuf->obj->size;
+		group->fdinfo.kbo_sizes += queue->iface.mem->obj->size;
+		group->fdinfo.kbo_sizes += queue->profiling.slots->obj->size;
+	}
+}
+
 #define MAX_GROUPS_PER_POOL		128
 
 int panthor_group_create(struct panthor_file *pfile,
@@ -3489,6 +3515,7 @@  int panthor_group_create(struct panthor_file *pfile,
 	}
 	mutex_unlock(&sched->reset.lock);
 
+	add_group_kbo_sizes(group->ptdev, group);
 	mutex_init(&group->fdinfo.lock);
 
 	return gid;
@@ -3606,6 +3633,29 @@  void panthor_group_pool_destroy(struct panthor_file *pfile)
 	pfile->groups = NULL;
 }
 
+/**
+ * panthor_group_kbo_sizes() - Retrieve aggregate size of all private kernel BO's
+ * belonging to all the groups owned by an open Panthor file
+ * @pfile: File.
+ * @status: Memory status to be updated.
+ * The aggregated BO sizes are added to the counters in @status.
+ */
+void panthor_group_kbo_sizes(struct panthor_file *pfile, struct drm_memory_stats *status)
+{
+	struct panthor_group_pool *gpool = pfile->groups;
+	struct panthor_group *group;
+	unsigned long i;
+
+	if (IS_ERR_OR_NULL(gpool))
+		return;
+	xa_for_each(&gpool->xa, i, group) {
+		status->resident += group->fdinfo.kbo_sizes;
+		status->private += group->fdinfo.kbo_sizes;
+		if (group->csg_id >= 0)
+			status->active += group->fdinfo.kbo_sizes;
+	}
+}
+
 static void job_release(struct kref *ref)
 {
 	struct panthor_job *job = container_of(ref, struct panthor_job, refcount);
diff --git a/drivers/gpu/drm/panthor/panthor_sched.h b/drivers/gpu/drm/panthor/panthor_sched.h
index 5ae6b4bde7c5..2327d441ceb1 100644
--- a/drivers/gpu/drm/panthor/panthor_sched.h
+++ b/drivers/gpu/drm/panthor/panthor_sched.h
@@ -9,6 +9,7 @@  struct dma_fence;
 struct drm_file;
 struct drm_gem_object;
 struct drm_sched_job;
+struct drm_memory_stats;
 struct drm_panthor_group_create;
 struct drm_panthor_queue_create;
 struct drm_panthor_group_get_state;
@@ -36,6 +37,7 @@  void panthor_job_update_resvs(struct drm_exec *exec, struct drm_sched_job *job);
 
 int panthor_group_pool_create(struct panthor_file *pfile);
 void panthor_group_pool_destroy(struct panthor_file *pfile);
+void panthor_group_kbo_sizes(struct panthor_file *pfile, struct drm_memory_stats *status);
 
 int panthor_sched_init(struct panthor_device *ptdev);
 void panthor_sched_unplug(struct panthor_device *ptdev);
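
As a usage sketch (not part of this series): a minimal userspace program
that opens a render node and dumps the "internal" fdinfo lines added here.
The device node path is an assumption and varies per system.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	/* Assumed Panthor render node; adjust for the target system. */
	int fd = open("/dev/dri/renderD128", O_RDWR);
	char path[64];
	char line[256];
	FILE *f;

	if (fd < 0)
		return 1;

	/* fdinfo of a DRM fd carries the drm-* stat keys. */
	snprintf(path, sizeof(path), "/proc/self/fdinfo/%d", fd);
	f = fopen(path, "r");
	if (!f) {
		close(fd);
		return 1;
	}

	/* Print only the new "internal" region lines, e.g. drm-resident-internal. */
	while (fgets(line, sizeof(line), f)) {
		if (strstr(line, "-internal:"))
			fputs(line, stdout);
	}

	fclose(f);
	close(fd);
	return 0;
}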