Message ID: 20230104130308.3467806-5-boris.brezillon@collabora.com (mailing list archive)
State: New, archived
Series: drm/panfrost: Expose memory usage stats through fdinfo
Hi,

Chris was kind enough to bring my attention to this thread. Indeed this
information has been asked for by various people for many years, so it
sounds very useful to actually attempt it.

On 04/01/2023 13:03, Boris Brezillon wrote:
> drm-memory-all: memory held by this context. Note that all the memory is
> not necessarily resident: heap BO size is counted even though only part
> of the memory reserved for those BOs might be allocated.
>
> drm-memory-resident: resident memory size. For normal BOs it's the same
> as drm-memory-all, but for heap BOs, only the memory actually allocated
> is counted.
>
> drm-memory-purgeable: amount of memory that can be reclaimed by the
> system (madvise(DONT_NEED)).
>
> drm-memory-shared: amount of memory shared through dma-buf.

A bunch of comments/questions..

First of all, let's please continue documenting the fdinfo content in
Documentation/gpu/drm-usage-stats.rst as stuff is proposed to be added. The
idea was to have as much commonality as reasonably possible, and so to have
more usable user-facing tools etc. And also please copy the people involved
with adding new stuff to that file.

(Half-digression - for some reason get_maintainers.pl does not play nice
with this file. I was expecting it to count and sort contributors after
maintainers but it doesn't for some reason. Perhaps it only does that for
source code.)

For the actual key/field names.. I suspect that, apart from the category, we
will need a memory type part: at least with discrete GPUs (non unified/shared
memory designs), that further breakdown will be required. We therefore need
to discuss whether we do that from the start, or start with your proposal
and extend later. In more practical terms I am talking about something like
this:

drm-memory-local-all: <ulong> <unit>
drm-memory-local-resident: <ulong> <unit>
drm-memory-system-all: <ulong> <unit>
drm-memory-system-resident: <ulong> <unit>

"All/resident/..." could then probably be standardized, and the
"local/system" part could be partially standardized or left for drivers to
use names they see fit (same as with engine names).

Also, I am not quite liking "all". The analogy from the CPU land for it
would be "virtual", but does that translate into the GPU world?

Finally, we also need to define the semantics of "resident" when we deal
with shared objects. For instance, process P / fd Fp creates a buffer and
passes the handle to process Q / fd Fq. Whose fdinfo sees what, in which
fields, and when? I suspect if Q is the first to start using the shared
object it should show it under "resident". P should not, until it starts
using it itself. For the "all" category both should see it.

For "shared", are you proposing to count imported buffers, exported buffers,
or both? I think it needs to be clearly documented. Is it "this fd is using
this much shared buffers", or "this fd has exported this much shared
buffers", or both? I don't know your driver well enough to quickly figure
out which semantics you proposed.
Regards,

Tvrtko

> Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
> ---
>  drivers/gpu/drm/panfrost/panfrost_drv.c | 7 +++++++
>  1 file changed, 7 insertions(+)
>
> diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
> index 6ee43559fc14..05d5d480df2a 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_drv.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
> @@ -519,9 +519,16 @@ static void panfrost_show_fdinfo(struct seq_file *m, struct file *f)
>  {
>  	struct drm_file *file = f->private_data;
>  	struct panfrost_file_priv *panfrost_priv = file->driver_priv;
> +	struct panfrost_mmu_stats mmu_stats;
> +
> +	panfrost_mmu_get_stats(panfrost_priv->mmu, &mmu_stats);
>
>  	seq_printf(m, "drm-driver:\t%s\n", file->minor->dev->driver->name);
>  	seq_printf(m, "drm-client-id:\t%llu\n", panfrost_priv->sched_entity[0].fence_context);
> +	seq_printf(m, "drm-memory-all:\t%llu KiB\n", mmu_stats.all >> 10);
> +	seq_printf(m, "drm-memory-resident:\t%llu KiB\n", mmu_stats.resident >> 10);
> +	seq_printf(m, "drm-memory-purgeable:\t%llu KiB\n", mmu_stats.purgeable >> 10);
> +	seq_printf(m, "drm-memory-shared:\t%llu KiB\n", mmu_stats.shared >> 10);
>  }
>
>  static const struct file_operations panfrost_drm_driver_fops = {