Message ID | 1435133343-15563-1-git-send-email-chris@chris-wilson.co.uk (mailing list archive)
---|---
State | New, archived
On Wed, Jun 24, 2015 at 09:09:03AM +0100, Chris Wilson wrote:
> Akash noticed that we were recursing from the call to
> intel_runtime_pm_get() inside intel_mark_busy() when we were already
> waking the device (through another intel_runtime_pm_get()). In
> intel_mark_busy() we know the device is awake and purpose of the
> reference here is to simply keep the device awake until the GPU is idle
> again. As such we do not need the full resume, and can call the lighter
> intel_runtime_pm_get_noresume() instead.

How does that happen? I only see a mark_busy in add_request, and we
shouldn't call that from intel_rpm_get(). Or do we? A call trace of how
this happens would be great.
-Daniel

> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Akash Goel <akash.goel@intel.com>
> Cc: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/intel_display.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
> index a43abc38af90..da0d882cc71e 100644
> --- a/drivers/gpu/drm/i915/intel_display.c
> +++ b/drivers/gpu/drm/i915/intel_display.c
> @@ -10668,7 +10668,7 @@ void intel_mark_busy(struct drm_device *dev)
>  	if (dev_priv->mm.busy)
>  		return;
>
> -	intel_runtime_pm_get(dev_priv);
> +	intel_runtime_pm_get_noresume(dev_priv);
>  	intel_rps_busy(dev_priv);
>  	dev_priv->mm.busy = true;
>  }
> --
> 2.1.4
>
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/intel-gfx
On Wed, Jun 24, 2015 at 10:40:02AM +0200, Daniel Vetter wrote:
> On Wed, Jun 24, 2015 at 09:09:03AM +0100, Chris Wilson wrote:
> > Akash noticed that we were recursing from the call to
> > intel_runtime_pm_get() inside intel_mark_busy() when we were already
> > waking the device (through another intel_runtime_pm_get()). In
> > intel_mark_busy() we know the device is awake and purpose of the
> > reference here is to simply keep the device awake until the GPU is idle
> > again. As such we do not need the full resume, and can call the lighter
> > intel_runtime_pm_get_noresume() instead.
>
> How does that happen? I only see a mark_busy in add_request, and we
> shouldn't call that from intel_rpm_get(). Or do we? Calltrace of how this
> happens would be great.

We are looking at VLV workarounds for which i915_gem_init_hw() is acting
very fishy. The recursion is a result of that, but that is just a
demonstration that we didn't need the full pm_get() here.
-Chris
On Wed, Jun 24, 2015 at 10:40:02AM +0200, Daniel Vetter wrote:
> On Wed, Jun 24, 2015 at 09:09:03AM +0100, Chris Wilson wrote:
> > Akash noticed that we were recursing from the call to
> > intel_runtime_pm_get() inside intel_mark_busy() when we were already
> > waking the device (through another intel_runtime_pm_get()). In
> > intel_mark_busy() we know the device is awake and purpose of the
> > reference here is to simply keep the device awake until the GPU is idle
> > again. As such we do not need the full resume, and can call the lighter
> > intel_runtime_pm_get_noresume() instead.
>
> How does that happen? I only see a mark_busy in add_request, and we
> shouldn't call that from intel_rpm_get(). Or do we? Calltrace of how this
> happens would be great.

From the vlv wa:

 [<ffffffff81b2d426>] dump_stack+0x4e/0x7a
 [<ffffffff8108c40d>] warn_slowpath_common+0x7d/0xa0
 [<ffffffff8108c4ea>] warn_slowpath_null+0x1a/0x20
 [<ffffffff815656e5>] intel_runtime_pm_get+0xc5/0xf0
 [<ffffffff81583f2e>] __i915_add_request+0x5e/0x340
 [<ffffffff815772a3>] i915_gem_context_enable+0x93/0xb0
 [<ffffffff81586e19>] i915_gem_init_hw+0xd9/0x270
 [<ffffffff81552585>] intel_runtime_resume+0x105/0x370
 [<ffffffff8146e569>] ? pci_restore_state.part.36+0xc9/0x1f0
 [<ffffffff8147103b>] pci_pm_runtime_resume+0x7b/0xc0
 [<ffffffff81470fc0>] ? pci_restore_standard_config+0x60/0x60
 [<ffffffff8162a7f2>] __rpm_callback+0x32/0x70
 [<ffffffff8162a856>] rpm_callback+0x26/0xa0
 [<ffffffff8162b0f6>] rpm_resume+0x496/0x6f0
 [<ffffffff8162c33f>] __pm_runtime_resume+0x4f/0x90
 [<ffffffff815656a6>] intel_runtime_pm_get+0x86/0xf0
 [<ffffffff81565865>] intel_display_power_rpm_get+0x125/0x170
 [<ffffffff815658c3>] intel_display_power_get+0x13/0x20
 [<ffffffff815d66a7>] intel_connector_dpms+0x57/0x120
 [<ffffffff815394e9>] drm_mode_obj_set_property_ioctl+0x399/0x3a0
 [<ffffffff815281c9>] drm_ioctl+0x1e9/0x5a0
 [<ffffffff81539150>] ? drm_mode_obj_get_properties_ioctl+0x140/0x140
 [<ffffffff813ccb57>] ? inode_has_perm.isra.32+0x27/0x40
 [<ffffffff813ccc86>] ? file_has_perm+0x86/0xa0
 [<ffffffff815698d5>] i915_compat_ioctl+0x45/0x50
 [<ffffffff8120cb30>] compat_sys_ioctl+0xc0/0x13e0
 [<ffffffff811d517e>] ? __fget+0x6e/0xb0
 [<ffffffff81436a8b>] ? trace_hardirqs_on_thunk+0x3a/0x3c
 [<ffffffff81b3c189>] ia32_do_call+0x13/0x1
-Chris
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index a43abc38af90..da0d882cc71e 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -10668,7 +10668,7 @@ void intel_mark_busy(struct drm_device *dev)
 	if (dev_priv->mm.busy)
 		return;
 
-	intel_runtime_pm_get(dev_priv);
+	intel_runtime_pm_get_noresume(dev_priv);
 	intel_rps_busy(dev_priv);
 	dev_priv->mm.busy = true;
 }
Akash noticed that we were recursing from the call to
intel_runtime_pm_get() inside intel_mark_busy() when we were already
waking the device (through another intel_runtime_pm_get()). In
intel_mark_busy() we know the device is awake and the purpose of the
reference here is simply to keep the device awake until the GPU is idle
again. As such we do not need the full resume, and can call the lighter
intel_runtime_pm_get_noresume() instead.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Akash Goel <akash.goel@intel.com>
Cc: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/intel_display.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)