
[17/17] drm/i915: Rewrite GMCH irq handlers to follow the VLV/CHV pattern

Message ID 20170622115555.12176-18-ville.syrjala@linux.intel.com
State New, archived

Commit Message

Ville Syrjälä June 22, 2017, 11:55 a.m. UTC
From: Ville Syrjälä <ville.syrjala@linux.intel.com>

Eliminate the loops from the gen2-3 irq handlers by following the same
trick used for VLV/CHV, ie. clear IER around acking the interrupts.
That way if some IIR bits still remain set we'll get another edge (and
thus another CPU interrupt) when the IER gets restored.

This shouldn't really be necessary when level triggered PCI interrupts
are used (gen2, some gen3), but let's follow the same pattern in
all the handlers so that we don't have to worry about MSI being enabled
or not. And consistency should help avoid confusing the reader as well.

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
---
 drivers/gpu/drm/i915/i915_irq.c | 196 +++++++++++++++++++---------------------
 1 file changed, 94 insertions(+), 102 deletions(-)
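
For reference, the reworked handlers all end up with roughly the shape sketched below. This is only a simplified outline of the pattern described in the commit message; the gmch_irq_handler() name is made up and the per-platform event handling is elided, the actual code is in the patch further down.

static irqreturn_t gmch_irq_handler(int irq, void *arg)
{
	struct drm_device *dev = arg;
	struct drm_i915_private *dev_priv = to_i915(dev);
	irqreturn_t ret = IRQ_NONE;

	do {
		u32 iir, ier;

		iir = I915_READ(IIR);
		if (iir == 0)
			break;

		ret = IRQ_HANDLED;

		/*
		 * Clear IER while acking IIR: if any IIR bits remain set
		 * (or get set in the meantime), restoring IER produces a
		 * fresh 0->1 edge and hence another CPU interrupt, so no
		 * loop is needed even with MSI.
		 */
		ier = I915_READ(IER);
		I915_WRITE(IER, 0);

		I915_WRITE(IIR, iir);

		I915_WRITE(IER, ier);
		POSTING_READ(IER);

		/* ... handle the events latched in iir ... */
	} while (0);

	return ret;
}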

Comments

Chris Wilson June 22, 2017, 1 p.m. UTC | #1
Quoting ville.syrjala@linux.intel.com (2017-06-22 12:55:55)
> From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> 
> Eliminate the loops from the gen2-3 irq handlers by following the same
> trick used for VLV/CHV, ie. clear IER around acking the interrupts.
> That way if some IIR bits still remain set we'll get another edge (and
> thus another CPU interrupt) when the IER gets restored.
> 
> This shouldn't really be necessary when level triggered PCI interrupts
> are used (gen2, some gen3), but let's follow the same pattern in
> all the handlers so that we don't have to worry about MSI being enabled
> or not. And consistency should help avoid confusing the reader as well.
> 
> Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>

Having a common approach that just works is definitely worth it.
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>

Interrupts aren't so much of a concern for me for pre-snb, but every
mmio read prior to waking up a waiter should be justified :) Before
ilk, we have to run the entire interrupt handler before userspace can
proceed. Just something to bear in mind, and prune as much as possible.
-Chris
Ville Syrjälä June 22, 2017, 1:10 p.m. UTC | #2
On Thu, Jun 22, 2017 at 02:00:49PM +0100, Chris Wilson wrote:
> Quoting ville.syrjala@linux.intel.com (2017-06-22 12:55:55)
> > From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > 
> > Eliminate the loops from the gen2-3 irq handlers by following the same
> > trick used for VLV/CHV, ie. clear IER around acking the interrupts.
> > That way if some IIR bits still remain set we'll get another edge (and
> > thus another CPU interrupt) when the IER gets restored.
> > 
> > This shouldn't really be necessary when level triggered PCI interrupts
> > are used (gen2, some gen3), but let's follow the same pattern in
> > all the handlers so that we don't have to worry about MSI being enabled
> > or not. And consistency should help avoid confusing the reader as well.
> > 
> > Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> 
> Having a common approach that just works is definitely worth it.
> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
> 
> Interrupts aren't so much of a concern for me for pre-snb, but every
> mmio read prior to waking up a waiter should be justified :) Before
> ilk, we have to run the entire interrupt handler before userspace can
> proceed. Just something to bear in mind, and prune as much as possible.

I guess we could nuke the IER read at least by storing the value
somewhere. I believe you actually suggested that when I redid the
VLV/CHV code. And I guess ripping out all POSTING_READs could be
a sane thing to do as well.

As for the underrun checks, I guess we could limit those to just happen
when some other pipe interrupt happens. While we're flipping or
reconfiguring planes we should have vblanks enabled anyway, so
we should still be able to catch the most glaring watermark failures
at least. But catching more subtle underruns caused by something like
memory self refresh or perhaps starvation due to heavy GPU memory
access etc. might go unnoticed then.
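
As a purely hypothetical sketch of the "store the value somewhere" idea, the handler could mask and restore IER from a cached copy instead of reading the register back; the gmch_ier field below is made up for illustration and is not an existing i915 member.

	/* At irq postinstall time, remember what was written to IER
	 * (hypothetical dev_priv->gmch_ier field). */
	dev_priv->gmch_ier = enable_mask;
	I915_WRITE16(IER, enable_mask);

	/* ... and in the handler, mask/unmask without reading IER back: */
	I915_WRITE16(IER, 0);

	I915_WRITE16(IIR, iir);

	I915_WRITE16(IER, dev_priv->gmch_ier);
	POSTING_READ16(IER);

Dropping the POSTING_READ as well, as suggested above, would then remove the remaining extra mmio access from the handler.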

Patch

diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index f00e20902e1c..293609384b38 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -3760,11 +3760,7 @@  static irqreturn_t i8xx_irq_handler(int irq, void *arg)
 {
 	struct drm_device *dev = arg;
 	struct drm_i915_private *dev_priv = to_i915(dev);
-	u16 iir, new_iir;
-	const u16 flip_mask =
-		I915_DISPLAY_PLANE_A_FLIP_PENDING_INTERRUPT |
-		I915_DISPLAY_PLANE_B_FLIP_PENDING_INTERRUPT;
-	irqreturn_t ret;
+	irqreturn_t ret = IRQ_NONE;
 
 	if (!intel_irqs_enabled(dev_priv))
 		return IRQ_NONE;
@@ -3772,34 +3768,45 @@  static irqreturn_t i8xx_irq_handler(int irq, void *arg)
 	/* IRQs are synced during runtime_suspend, we don't require a wakeref */
 	disable_rpm_wakeref_asserts(dev_priv);
 
-	ret = IRQ_NONE;
-	iir = I915_READ16(IIR);
-	if (iir == 0)
-		goto out;
-
-	while (iir & ~flip_mask) {
+	do {
+		const u16 flip_mask =
+			I915_DISPLAY_PLANE_A_FLIP_PENDING_INTERRUPT |
+			I915_DISPLAY_PLANE_B_FLIP_PENDING_INTERRUPT;
 		u32 pipe_stats[I915_MAX_PIPES] = {};
+		u16 iir, ier;
 
-		if (iir & I915_RENDER_COMMAND_PARSER_ERROR_INTERRUPT)
-			DRM_DEBUG("Command parser error, iir 0x%08x\n", iir);
+		iir = I915_READ16(IIR);
+		if ((iir & ~flip_mask) == 0)
+			break;
+
+		ret = IRQ_HANDLED;
+
+		/*
+		 * Clear IER while we clear the IIR bits to make
+		 * sure we get another edge if not all IIR bits
+		 * end up cleared.
+		 */
+		ier = I915_READ16(IER);
+		I915_WRITE16(IER, 0);
 
 		/* Call regardless, as some status bits might not be
 		 * signalled in iir */
 		i9xx_pipestat_irq_ack(dev_priv, iir, pipe_stats);
 
-		I915_WRITE16(IIR, iir & ~flip_mask);
-		new_iir = I915_READ16(IIR); /* Flush posted writes */
+		I915_WRITE16(IIR, (iir & ~flip_mask));
+
+		I915_WRITE16(IER, ier);
+		POSTING_READ16(IER);
 
 		if (iir & I915_USER_INTERRUPT)
 			notify_ring(dev_priv->engine[RCS]);
 
-		i8xx_pipestat_irq_handler(dev_priv, iir, pipe_stats);
+		if (iir & I915_RENDER_COMMAND_PARSER_ERROR_INTERRUPT)
+			DRM_DEBUG("Command parser error, iir 0x%08x\n", iir);
 
-		iir = new_iir;
-	}
-	ret = IRQ_HANDLED;
+		i8xx_pipestat_irq_handler(dev_priv, iir, pipe_stats);
+	} while (0);
 
-out:
 	enable_rpm_wakeref_asserts(dev_priv);
 
 	return ret;
@@ -3868,11 +3875,7 @@  static irqreturn_t i915_irq_handler(int irq, void *arg)
 {
 	struct drm_device *dev = arg;
 	struct drm_i915_private *dev_priv = to_i915(dev);
-	u32 iir, new_iir;
-	const u32 flip_mask =
-		I915_DISPLAY_PLANE_A_FLIP_PENDING_INTERRUPT |
-		I915_DISPLAY_PLANE_B_FLIP_PENDING_INTERRUPT;
-	int ret = IRQ_NONE;
+	irqreturn_t ret = IRQ_NONE;
 
 	if (!intel_irqs_enabled(dev_priv))
 		return IRQ_NONE;
@@ -3880,55 +3883,52 @@  static irqreturn_t i915_irq_handler(int irq, void *arg)
 	/* IRQs are synced during runtime_suspend, we don't require a wakeref */
 	disable_rpm_wakeref_asserts(dev_priv);
 
-	iir = I915_READ(IIR);
 	do {
+		const u32 flip_mask =
+			I915_DISPLAY_PLANE_A_FLIP_PENDING_INTERRUPT |
+			I915_DISPLAY_PLANE_B_FLIP_PENDING_INTERRUPT;
 		u32 pipe_stats[I915_MAX_PIPES] = {};
-		bool irq_received = (iir & ~flip_mask) != 0;
+		u32 hotplug_status = 0;
+		u32 iir, ier;
 
-		if (iir & I915_RENDER_COMMAND_PARSER_ERROR_INTERRUPT)
-			DRM_DEBUG("Command parser error, iir 0x%08x\n", iir);
+		iir = I915_READ(IIR);
+		if ((iir & ~flip_mask) == 0)
+			break;
 
-		/* Call regardless, as some status bits might not be
-		 * signalled in iir */
-		i9xx_pipestat_irq_ack(dev_priv, iir, pipe_stats);
+		ret = IRQ_HANDLED;
 
-		if (!irq_received)
-			break;
+		/*
+		 * Clear IER while we clear the IIR bits to make
+		 * sure we get another edge if not all IIR bits
+		 * end up cleared.
+		 */
+		ier = I915_READ(IER);
+		I915_WRITE(IER, 0);
 
-		/* Consume port.  Then clear IIR or we'll miss events */
 		if (I915_HAS_HOTPLUG(dev_priv) &&
-		    iir & I915_DISPLAY_PORT_INTERRUPT) {
-			u32 hotplug_status = i9xx_hpd_irq_ack(dev_priv);
-			if (hotplug_status)
-				i9xx_hpd_irq_handler(dev_priv, hotplug_status);
-		}
+		    iir & I915_DISPLAY_PORT_INTERRUPT)
+			hotplug_status = i9xx_hpd_irq_ack(dev_priv);
+
+		/* Call regardless, as some status bits might not be
+		 * signalled in iir */
+		i9xx_pipestat_irq_ack(dev_priv, iir, pipe_stats);
 
 		I915_WRITE(IIR, iir & ~flip_mask);
-		new_iir = I915_READ(IIR); /* Flush posted writes */
+
+		I915_WRITE(IER, ier);
+		POSTING_READ(IER);
 
 		if (iir & I915_USER_INTERRUPT)
 			notify_ring(dev_priv->engine[RCS]);
 
-		i915_pipestat_irq_handler(dev_priv, iir, pipe_stats);
+		if (iir & I915_RENDER_COMMAND_PARSER_ERROR_INTERRUPT)
+			DRM_DEBUG("Command parser error, iir 0x%08x\n", iir);
 
-		/* With MSI, interrupts are only generated when iir
-		 * transitions from zero to nonzero.  If another bit got
-		 * set while we were handling the existing iir bits, then
-		 * we would never get another interrupt.
-		 *
-		 * This is fine on non-MSI as well, as if we hit this path
-		 * we avoid exiting the interrupt handler only to generate
-		 * another one.
-		 *
-		 * Note that for MSI this could cause a stray interrupt report
-		 * if an interrupt landed in the time between writing IIR and
-		 * the posting read.  This should be rare enough to never
-		 * trigger the 99% of 100,000 interrupts test for disabling
-		 * stray interrupts.
-		 */
-		ret = IRQ_HANDLED;
-		iir = new_iir;
-	} while (iir & ~flip_mask);
+		if (hotplug_status)
+			i9xx_hpd_irq_handler(dev_priv, hotplug_status);
+
+		i915_pipestat_irq_handler(dev_priv, iir, pipe_stats);
+	} while (0);
 
 	enable_rpm_wakeref_asserts(dev_priv);
 
@@ -4035,11 +4035,7 @@  static irqreturn_t i965_irq_handler(int irq, void *arg)
 {
 	struct drm_device *dev = arg;
 	struct drm_i915_private *dev_priv = to_i915(dev);
-	u32 iir, new_iir;
-	int ret = IRQ_NONE;
-	const u32 flip_mask =
-		I915_DISPLAY_PLANE_A_FLIP_PENDING_INTERRUPT |
-		I915_DISPLAY_PLANE_B_FLIP_PENDING_INTERRUPT;
+	irqreturn_t ret = IRQ_NONE;
 
 	if (!intel_irqs_enabled(dev_priv))
 		return IRQ_NONE;
@@ -4047,58 +4043,54 @@  static irqreturn_t i965_irq_handler(int irq, void *arg)
 	/* IRQs are synced during runtime_suspend, we don't require a wakeref */
 	disable_rpm_wakeref_asserts(dev_priv);
 
-	iir = I915_READ(IIR);
-
-	for (;;) {
+	do {
+		const u32 flip_mask =
+			I915_DISPLAY_PLANE_A_FLIP_PENDING_INTERRUPT |
+			I915_DISPLAY_PLANE_B_FLIP_PENDING_INTERRUPT;
 		u32 pipe_stats[I915_MAX_PIPES] = {};
-		bool irq_received = (iir & ~flip_mask) != 0;
-
-		if (iir & I915_RENDER_COMMAND_PARSER_ERROR_INTERRUPT)
-			DRM_DEBUG("Command parser error, iir 0x%08x\n", iir);
-
-		/* Call regardless, as some status bits might not be
-		 * signalled in iir */
-		i9xx_pipestat_irq_ack(dev_priv, iir, pipe_stats);
+		u32 hotplug_status = 0;
+		u32 iir, ier;
 
-		if (!irq_received)
+		iir = I915_READ(IIR);
+		if ((iir & ~flip_mask) == 0)
 			break;
 
 		ret = IRQ_HANDLED;
 
-		/* Consume port.  Then clear IIR or we'll miss events */
-		if (iir & I915_DISPLAY_PORT_INTERRUPT) {
-			u32 hotplug_status = i9xx_hpd_irq_ack(dev_priv);
-			if (hotplug_status)
-				i9xx_hpd_irq_handler(dev_priv, hotplug_status);
-		}
+		/*
+		 * Clear IER while we clear the IIR bits to make
+		 * sure we get another edge if not all IIR bits
+		 * end up cleared.
+		 */
+		ier = I915_READ(IER);
+		I915_WRITE(IER, 0);
+
+		if (iir & I915_DISPLAY_PORT_INTERRUPT)
+			hotplug_status = i9xx_hpd_irq_ack(dev_priv);
+
+		/* Call regardless, as some status bits might not be
+		 * signalled in iir */
+		i9xx_pipestat_irq_ack(dev_priv, iir, pipe_stats);
 
 		I915_WRITE(IIR, iir & ~flip_mask);
-		new_iir = I915_READ(IIR); /* Flush posted writes */
+
+		I915_WRITE(IER, ier);
+		POSTING_READ(IER);
 
 		if (iir & I915_USER_INTERRUPT)
 			notify_ring(dev_priv->engine[RCS]);
+
 		if (iir & I915_BSD_USER_INTERRUPT)
 			notify_ring(dev_priv->engine[VCS]);
 
-		i965_pipestat_irq_handler(dev_priv, iir, pipe_stats);
+		if (iir & I915_RENDER_COMMAND_PARSER_ERROR_INTERRUPT)
+			DRM_DEBUG("Command parser error, iir 0x%08x\n", iir);
 
-		/* With MSI, interrupts are only generated when iir
-		 * transitions from zero to nonzero.  If another bit got
-		 * set while we were handling the existing iir bits, then
-		 * we would never get another interrupt.
-		 *
-		 * This is fine on non-MSI as well, as if we hit this path
-		 * we avoid exiting the interrupt handler only to generate
-		 * another one.
-		 *
-		 * Note that for MSI this could cause a stray interrupt report
-		 * if an interrupt landed in the time between writing IIR and
-		 * the posting read.  This should be rare enough to never
-		 * trigger the 99% of 100,000 interrupts test for disabling
-		 * stray interrupts.
-		 */
-		iir = new_iir;
-	}
+		if (hotplug_status)
+			i9xx_hpd_irq_handler(dev_priv, hotplug_status);
+
+		i965_pipestat_irq_handler(dev_priv, iir, pipe_stats);
+	} while (0);
 
 	enable_rpm_wakeref_asserts(dev_priv);