Message ID | 20210520151254.959958-8-tvrtko.ursulin@linux.intel.com (mailing list archive) |
---|---|
State | New, archived |
Series | Per client engine busyness |
I would like to add a unit marker for the stats that we monitor in the fd. As we discussed, currently we are displaying the usage percentage because we wanted to provide single-query percentages, but this may evolve with time.

May I suggest to add two new fields:

drm-stat-interval: <64 bit> ns
drm-stat-timestamp: <64 bit> ns

If interval is set, engine utilization is calculated by doing:

<perc render> = 100 * <drm-engine-render> / <drm-stat-interval>

If interval is not set, two reads are needed:

<perc render> = 100 * (<drm-engine-render1> - <drm-engine-render0>) / (<drm-stat-timestamp1> - <drm-stat-timestamp0>)

Regards,

David
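For reference, a minimal userspace sketch of the "two reads" scheme described above. The drm-engine-render key follows the example output later in this thread; since drm-stat-timestamp is only a proposal at this point, the sketch takes its own CLOCK_MONOTONIC timestamps as the denominator instead, and the fdinfo path, fd number and one-second sampling period are arbitrary examples.

```c
/*
 * Sketch only: sample accumulated busy time twice from fdinfo and
 * divide by elapsed time.  With the proposed drm-stat-timestamp field
 * the denominator would come from the file instead of clock_gettime().
 */
#include <inttypes.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

static uint64_t read_engine_ns(const char *fdinfo, const char *key)
{
	char line[256];
	uint64_t val = 0;
	FILE *f = fopen(fdinfo, "r");

	if (!f)
		return 0;

	while (fgets(line, sizeof(line), f)) {
		if (!strncmp(line, key, strlen(key))) {
			/* Lines look like "drm-engine-render:\t12345 ns" */
			sscanf(line + strlen(key), ":\t%" SCNu64, &val);
			break;
		}
	}

	fclose(f);
	return val;
}

static uint64_t now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

int main(void)
{
	/* fdinfo of some DRM fd; the pid/fd here are placeholders. */
	const char *path = "/proc/self/fdinfo/3";
	uint64_t b0, b1, t0, t1;

	b0 = read_engine_ns(path, "drm-engine-render");
	t0 = now_ns();
	sleep(1);
	b1 = read_engine_ns(path, "drm-engine-render");
	t1 = now_ns();

	printf("render: %.1f%%\n", 100.0 * (b1 - b0) / (t1 - t0));
	return 0;
}
```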
Yeah, having the timestamp is a good idea as well.

drm-driver: i915

I think we should rather add something like printing file_operations->owner->name to the common fdinfo code.

This way we would have something common for all drivers in the system. I'm just not sure if that also works if they are compiled into the kernel.

Regards,
Christian.

Am 20.05.21 um 18:26 schrieb Nieto, David M:
> I would like to add a unit marker for the stats that we monitor in the fd. As we discussed, currently we are displaying the usage percentage because we wanted to provide single-query percentages, but this may evolve with time.
>
> May I suggest to add two new fields
>
> drm-stat-interval: <64 bit> ns
> drm-stat-timestamp: <64 bit> ns
> [...]
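A rough sketch of the file_operations->owner->name idea, as a change to the generic fdinfo code in fs/proc rather than to DRM. The helper name is made up for illustration; note that ->owner is a struct module pointer and is NULL for built-in code, which is exactly the open question raised above.

```c
/*
 * Illustrative sketch only: a helper the common fdinfo code could call
 * to print the module that owns the file_operations.  Not existing
 * fs/proc code; seq_show_fops_owner() is a hypothetical name.
 */
#include <linux/fs.h>
#include <linux/module.h>
#include <linux/seq_file.h>

static void seq_show_fops_owner(struct seq_file *m, struct file *file)
{
	const struct module *owner = file->f_op->owner;

	/* ->owner is NULL when the driver is built into the kernel. */
	if (owner)
		seq_printf(m, "owner:\t%s\n", owner->name);
}
```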
On Thu, May 20, 2021 at 6:31 PM Christian König <christian.koenig@amd.com> wrote:
>
> Yeah, having the timestamp is a good idea as well.
>
> drm-driver: i915
>
> I think we should rather add something like printing file_operations->owner->name to the common fdinfo code.
>
> This way we would have something common for all drivers in the system. I'm just not sure if that also works if they are compiled into the kernel.

Yeah, common code could print the driver name, busid and all that stuff. I think the common code should also provide some helpers for the key: value pair formatting (and maybe check for all lower-case and stuff like that), because if we don't then this is going to be a complete mess that's not parseable.

And the value should be real semantic stuff, not "here's a string". So accumulated time as a struct ktime as the example.

-Daniel
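As an illustration of the kind of helper being suggested here, a DRM core function that emits one key: value pair and sanity-checks the key format could look roughly like the sketch below. The name and signature are invented; no such helper exists in the kernel at this point.

```c
/*
 * Hypothetical sketch of a DRM core helper that centralises fdinfo
 * "key: value" formatting and enforces a lower-case key convention.
 * drm_fdinfo_print_u64() is not an existing kernel function.
 */
#include <linux/ctype.h>
#include <linux/kernel.h>
#include <linux/seq_file.h>

static void drm_fdinfo_print_u64(struct seq_file *m, const char *key,
				 u64 value, const char *unit)
{
	const char *p;

	/* Keep keys machine-parseable: lower-case letters, digits, '-'. */
	for (p = key; *p; p++)
		WARN_ON_ONCE(!islower(*p) && !isdigit(*p) && *p != '-');

	seq_printf(m, "drm-%s:\t%llu %s\n", key, value, unit);
}
```

A driver would then call, for example, drm_fdinfo_print_u64(m, "engine-render", total, "ns") instead of open-coding the seq_printf() as the i915 patch at the bottom of this page does.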
On 20/05/2021 18:47, Daniel Vetter wrote:
> On Thu, May 20, 2021 at 6:31 PM Christian König
> <christian.koenig@amd.com> wrote:
>>
>> Yeah, having the timestamp is a good idea as well.
>>
>> drm-driver: i915
>>
>> I think we should rather add something like printing file_operations->owner->name to the common fdinfo code.
>>
>> This way we would have something common for all drivers in the system. I'm just not sure if that also works if they are compiled into the kernel.
>
> Yeah common code could print driver name, busid and all that stuff. I
> think the common code should also provide some helpers for the key:
> value pair formatting (and maybe check for all lower-case and stuff
> like that) because if we don't then this is going to be a complete
> mess that's not parseable.

I see we could have a few options here, a non-exhaustive list (especially omitting some sub-options):

1)
DRM core implements fdinfo, which emits the common parts, calling into the driver to do the rest (see the sketch after this message).

2)
DRM adds helpers for drivers to emit the common parts of fdinfo.

3)
DRM core establishes a "spec" defining the common fields, the optional ones, and the formats.

I was trending towards 3) because it is the most lightweight, and my feeling is there isn't that much value in extracting a tiny bit of commonality in code. Proof in the pudding is how short the fdinfo vfunc is in this patch.

> And value should be real semantic stuff, not "here's a string". So
> accumulated time as a struct ktime as the example.

Ideally yes, but I have a feeling the ways amdgpu and i915 track things are quite different, so first let's learn more about that.

>> Am 20.05.21 um 18:26 schrieb Nieto, David M:
>>
>> May I suggest to add two new fields
>>
>> drm-stat-interval: <64 bit> ns
>> drm-stat-timestamp: <64 bit> ns
>>
>> If interval is set, engine utilization is calculated by doing <perc render> = 100*<drm-engine-render>/<drm-stat-interval>
>> if interval is not set, two reads are needed: <perc render> = 100*(<drm-engine-render1> - <drm-engine-render0>) / (<drm-stat-timestamp1> - <drm-stat-timestamp0>)

I would like to understand how amdgpu tracks GPU time, since I am not getting these fields yet.

1)
You suggest having a timestamp because of different clock domains?

2)
With the interval option - do you actually have a restarting counter? Do you keep that in the driver or get it from the hardware itself?

Regards,

Tvrtko
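For concreteness, option 1 could look roughly like the sketch below: DRM core owns the show_fdinfo file operation, prints the common fields itself, and calls an optional driver callback for the driver-specific ones. Both drm_show_fdinfo() and the driver->show_fdinfo hook are hypothetical names here, not part of the DRM API at the time of this thread.

```c
/*
 * Sketch of option 1 above: the DRM core implements the fdinfo
 * callback.  drm_show_fdinfo() and drm_driver::show_fdinfo are
 * illustrative names only, not an existing interface.
 */
#include <drm/drm_device.h>
#include <drm/drm_drv.h>
#include <drm/drm_file.h>
#include <linux/fs.h>
#include <linux/seq_file.h>

void drm_show_fdinfo(struct seq_file *m, struct file *f)
{
	struct drm_file *file_priv = f->private_data;
	struct drm_device *dev = file_priv->minor->dev;

	/* Common fields emitted once, in one place, for every driver. */
	seq_printf(m, "drm-driver:\t%s\n", dev->driver->name);

	/* Driver-specific fields, e.g. per-engine busyness. */
	if (dev->driver->show_fdinfo)
		dev->driver->show_fdinfo(m, file_priv);
}
```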
Am 21.05.21 um 14:26 schrieb Tvrtko Ursulin:
>
> I see we could have a few options here, a non-exhaustive list (especially omitting some sub-options):
>
> 1)
> DRM core implements fdinfo, which emits the common parts, calling into the driver to do the rest.
>
> 2)
> DRM adds helpers for drivers to emit the common parts of fdinfo.
>
> 3)
> DRM core establishes a "spec" defining the common fields, the optional ones, and the formats.
>
> I was trending towards 3) because it is the most lightweight, and my feeling is there isn't that much value in extracting a tiny bit of commonality in code. Proof in the pudding is how short the fdinfo vfunc is in this patch.

I would say that we should add printing the module name to the common fdinfo function for the whole kernel.

And for the DRM specific stuff either 2 or 3 is the way to go, I think. Number 1 sounds too much like mid-layering to me.

Regards,
Christian.
diff --git a/drivers/gpu/drm/i915/i915_drm_client.c b/drivers/gpu/drm/i915/i915_drm_client.c
index 1e5db7753276..5e9cfba1116b 100644
--- a/drivers/gpu/drm/i915/i915_drm_client.c
+++ b/drivers/gpu/drm/i915/i915_drm_client.c
@@ -9,6 +9,11 @@
 
 #include <drm/drm_print.h>
 
+#include <uapi/drm/i915_drm.h>
+
+#include "gem/i915_gem_context.h"
+#include "gt/intel_engine_user.h"
+
 #include "i915_drm_client.h"
 #include "i915_drv.h"
 #include "i915_gem.h"
@@ -168,3 +173,66 @@ void i915_drm_clients_fini(struct i915_drm_clients *clients)
 
 	xa_destroy(&clients->xarray);
 }
+
+#ifdef CONFIG_PROC_FS
+static const char * const uabi_class_names[] = {
+	[I915_ENGINE_CLASS_RENDER] = "render",
+	[I915_ENGINE_CLASS_COPY] = "copy",
+	[I915_ENGINE_CLASS_VIDEO] = "video",
+	[I915_ENGINE_CLASS_VIDEO_ENHANCE] = "video-enhance",
+};
+
+static u64 busy_add(struct i915_gem_context *ctx, unsigned int class)
+{
+	struct i915_gem_engines_iter it;
+	struct intel_context *ce;
+	u64 total = 0;
+
+	for_each_gem_engine(ce, rcu_dereference(ctx->engines), it) {
+		if (ce->engine->uabi_class != class)
+			continue;
+
+		total += intel_context_get_total_runtime_ns(ce);
+	}
+
+	return total;
+}
+
+static void
+show_client_class(struct seq_file *m,
+		  struct i915_drm_client *client,
+		  unsigned int class)
+{
+	const struct list_head *list = &client->ctx_list;
+	u64 total = atomic64_read(&client->past_runtime[class]);
+	struct i915_gem_context *ctx;
+
+	rcu_read_lock();
+	list_for_each_entry_rcu(ctx, list, client_link)
+		total += busy_add(ctx, class);
+	rcu_read_unlock();
+
+	seq_printf(m, "drm-engine-%s:\t%llu ns\n",
+		   uabi_class_names[class], total);
+}
+
+void i915_drm_client_fdinfo(struct seq_file *m, struct file *f)
+{
+	struct drm_file *file = f->private_data;
+	struct drm_i915_file_private *file_priv = file->driver_priv;
+	struct drm_i915_private *i915 = file_priv->dev_priv;
+	struct i915_drm_client *client = file_priv->client;
+	struct pci_dev *pdev = to_pci_dev(i915->drm.dev);
+	unsigned int i;
+
+	seq_printf(m, "drm-driver:\ti915\n");
+	seq_printf(m, "drm-pdev:\t%04x:%02x:%02x.%d\n",
+		   pci_domain_nr(pdev->bus), pdev->bus->number,
+		   PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn));
+
+	seq_printf(m, "drm-client-id:\t%u\n", client->id);
+
+	for (i = 0; i < ARRAY_SIZE(uabi_class_names); i++)
+		show_client_class(m, client, i);
+}
+#endif
diff --git a/drivers/gpu/drm/i915/i915_drm_client.h b/drivers/gpu/drm/i915/i915_drm_client.h
index b2b69d6985e4..9885002433a0 100644
--- a/drivers/gpu/drm/i915/i915_drm_client.h
+++ b/drivers/gpu/drm/i915/i915_drm_client.h
@@ -98,6 +98,10 @@ i915_drm_client_pid(const struct i915_drm_client *client)
 	return __i915_drm_client_name(client)->pid;
 }
 
+#ifdef CONFIG_PROC_FS
+void i915_drm_client_fdinfo(struct seq_file *m, struct file *f);
+#endif
+
 void i915_drm_clients_fini(struct i915_drm_clients *clients);
 
 #endif /* !__I915_DRM_CLIENT_H__ */
diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index 33eb7b52b58b..6b63fe4b3c26 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -1694,6 +1694,9 @@ static const struct file_operations i915_driver_fops = {
 	.read = drm_read,
 	.compat_ioctl = i915_ioc32_compat_ioctl,
 	.llseek = noop_llseek,
+#ifdef CONFIG_PROC_FS
+	.show_fdinfo = i915_drm_client_fdinfo,
+#endif
 };
 
 static int