
[13/18] drm/i915: remove too-frequent FBC debug message

Message ID 1445349004-16409-14-git-send-email-paulo.r.zanoni@intel.com (mailing list archive)
State New, archived

Commit Message

Zanoni, Paulo R Oct. 20, 2015, 1:49 p.m. UTC
If we run igt/kms_frontbuffer_tracking, this message will appear
thousands of times, eating a significant part of our dmesg buffer.
It's part of the expected FBC behavior, so let's just silence it.

Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
---
 drivers/gpu/drm/i915/intel_fbc.c | 2 --
 1 file changed, 2 deletions(-)

Comments

Chris Wilson Oct. 21, 2015, 1:01 p.m. UTC | #1
On Tue, Oct 20, 2015 at 11:49:59AM -0200, Paulo Zanoni wrote:
> If we run igt/kms_frontbuffer_tracking, this message will appear
> thousands of times, eating a significant part of our dmesg buffer.
> It's part of the expected FBC behavior, so let's just silence it.
> 
> Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>

Looks fine. Out of curiosity, what metrics do we have for FBC activity? I
presume we have tracepoints for activate/deactivate, and perhaps a sw
timer, and a hw debug register?
-Chris
Zanoni, Paulo R Oct. 21, 2015, 6:19 p.m. UTC | #2
On Wed, 2015-10-21 at 14:01 +0100, Chris Wilson wrote:
> On Tue, Oct 20, 2015 at 11:49:59AM -0200, Paulo Zanoni wrote:
> > If we run igt/kms_frontbuffer_tracking, this message will appear
> > thousands of times, eating a significant part of our dmesg buffer.
> > It's part of the expected FBC behavior, so let's just silence it.
> > 
> > Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
> 
> Looks fine. Out of curiosity, what metrics do we have for FBC
> activity? I presume we have tracepoints for activate/deactivate, and
> perhaps a sw timer, and a hw debug register?

The most important metric is PC state residency, but that requires the
machine to be properly configured: if SATA is preventing our machine
from going deeper than PC3, FBC won't change PC state residencies.
Also, our public processor datasheet docs mention the maximum expected
PC states for the most common screen resolutions.

We can work on adding more things later, such as the tracepoints or
software timer you mentioned. I've been 100% focused on getting the
bugs out first. Is there anything specific you think you could use?
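[The residency comparison described above boils down to sampling a C-state residency counter and the TSC before and after a workload, once with FBC off and once with FBC on. A minimal sketch of the arithmetic; the counter values are made-up stand-ins for actual MSR/turbostat reads, which need root and a real machine:]

```python
# Package C-state residency counters tick at the TSC rate while the package
# is in that state, so the fraction of a sampling window spent in the state
# is delta(residency counter) / delta(TSC). Values below are illustrative,
# not real MSR reads.

def residency_pct(counter_start, counter_end, tsc_start, tsc_end):
    """Percentage of the sampling window spent in the C-state."""
    return 100.0 * (counter_end - counter_start) / (tsc_end - tsc_start)

# Example: two samples taken one second apart on a 2.4 GHz TSC,
# with FBC disabled and then enabled.
fbc_off = residency_pct(0, 480_000_000, 0, 2_400_000_000)
fbc_on = residency_pct(0, 1_200_000_000, 0, 2_400_000_000)
print(f"PC-state residency: {fbc_off:.1f}% off, {fbc_on:.1f}% on")
```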

> -Chris
Chris Wilson Oct. 22, 2015, 7:52 p.m. UTC | #3
On Wed, Oct 21, 2015 at 06:19:23PM +0000, Zanoni, Paulo R wrote:
> On Wed, 2015-10-21 at 14:01 +0100, Chris Wilson wrote:
> > On Tue, Oct 20, 2015 at 11:49:59AM -0200, Paulo Zanoni wrote:
> > > If we run igt/kms_frontbuffer_tracking, this message will appear
> > > thousands of times, eating a significant part of our dmesg buffer.
> > > It's part of the expected FBC behavior, so let's just silence it.
> > > 
> > > Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
> > 
> > Looks fine. Out of curiosity, what metrics do we have for FBC
> > activity? I presume we have tracepoints for activate/deactivate, and
> > perhaps a sw timer, and a hw debug register?
> 
> The most important metric is PC state residency, but that requires the
> machine to be properly configured: if SATA is preventing our machine
> from going deeper than PC3, FBC won't change PC state residencies.
> Also, our public processor datasheet docs mention the maximum expected
> PC states for the most common screen resolutions.
> 
> We can work on adding more things later, such as the tracepoints or
> software timer you mentioned. I've been 100% focused on getting the
> bugs out first. Is there anything specific you think you could use?

The tracepoints are primarily a debug tool, effectively less noisy
printks that can be easily hooked up to a bit of python for processing
(if just reading huge logs isn't satisfying).

For monitoring efficacy, I had in mind a timer for fbc active and
perhaps one for timing compression (ideally the active timer would only
start when the compressed frame was complete, but that may be too much).
That should get us to the point where we can quickly see if we are
enabling FBC for significant periods. Now, this can be done with good
tracepoints and a userspace script. So really just planning good
coverage of tracepoints is the starting point.
-Chris
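
[The kind of tracepoint post-processing described above might look like the sketch below. The i915 driver had no FBC tracepoints at the time, so the activate/deactivate event pairs are hypothetical; the script just sums the time FBC spent active given (timestamp, event) samples pulled from a trace:]

```python
# Sum total FBC-active time from a stream of activate/deactivate events,
# as would be emitted by hypothetical FBC tracepoints. Unmatched or
# duplicate events are ignored rather than double-counted.

def fbc_active_time(events):
    """events: iterable of (timestamp_seconds, 'activate' | 'deactivate')."""
    total = 0.0
    active_since = None
    for ts, ev in events:
        if ev == 'activate' and active_since is None:
            active_since = ts
        elif ev == 'deactivate' and active_since is not None:
            total += ts - active_since
            active_since = None
    return total

# Example trace: two activation periods, 0.25s and 0.60s.
trace = [(0.00, 'activate'), (0.25, 'deactivate'),
         (0.30, 'activate'), (0.90, 'deactivate')]
print(f"FBC active for {fbc_active_time(trace):.2f}s")  # 0.85s
```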

Patch

diff --git a/drivers/gpu/drm/i915/intel_fbc.c b/drivers/gpu/drm/i915/intel_fbc.c
index 502ab0b..5dab0e0 100644
--- a/drivers/gpu/drm/i915/intel_fbc.c
+++ b/drivers/gpu/drm/i915/intel_fbc.c
@@ -426,8 +426,6 @@  static void intel_fbc_cancel_work(struct drm_i915_private *dev_priv)
 	if (dev_priv->fbc.fbc_work == NULL)
 		return;
 
-	DRM_DEBUG_KMS("cancelling pending FBC activation\n");
-
 	/* Synchronisation is provided by struct_mutex and checking of
 	 * dev_priv->fbc.fbc_work, so we can perform the cancellation
 	 * entirely asynchronously.