From patchwork Wed Jun 12 11:37:16 2013
X-Patchwork-Submitter: Daniel Vetter
X-Patchwork-Id: 2709591
From: Daniel Vetter
To: Intel Graphics Development
Date: Wed, 12 Jun 2013 13:37:16 +0200
Message-Id: <1371037046-3732-15-git-send-email-daniel.vetter@ffwll.ch>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1371037046-3732-1-git-send-email-daniel.vetter@ffwll.ch>
References: <1371037046-3732-1-git-send-email-daniel.vetter@ffwll.ch>
Cc: Daniel Vetter
Subject: [Intel-gfx] [PATCH 14/24] drm/i915: irq handlers don't need interrupt-safe spinlocks

The irq handlers don't need the interrupt-safe spin_lock_irqsave variants, since we only have one interrupt handler and interrupt handlers are non-reentrant.

To drive the point really home, give them all an _irq_handler suffix.

This is a tiny micro-optimization, but even more importantly it makes it clearer what locking we actually need. And in case someone screws this up: lockdep will catch hardirq vs. other-context deadlocks.
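[Not part of the patch, just an illustration of the locking pattern being converted. The foo_* names below are hypothetical, not i915 code: inside the single, non-reentrant hardirq handler a plain spin_lock() is sufficient, while any process-context user of the same lock must still use the irq-disabling variant so the handler can't interrupt it on the same CPU and deadlock on the held lock -- exactly the hardirq vs. other-context case lockdep will flag.]

#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/spinlock.h>
#include <linux/types.h>

#define FOO_IIR 0x10		/* hypothetical interrupt status register offset */

struct foo_device {
	spinlock_t event_lock;
	void __iomem *mmio;
	u32 pending_events;
};

static irqreturn_t foo_irq_handler(int irq, void *arg)
{
	struct foo_device *foo = arg;

	/* hardirq context, the handler cannot re-enter itself: plain lock */
	spin_lock(&foo->event_lock);
	foo->pending_events |= readl(foo->mmio + FOO_IIR);
	spin_unlock(&foo->event_lock);

	return IRQ_HANDLED;
}

static void foo_flush_events(struct foo_device *foo)
{
	unsigned long flags;

	/* process context: must keep foo_irq_handler() out while the lock is held */
	spin_lock_irqsave(&foo->event_lock, flags);
	foo->pending_events = 0;
	spin_unlock_irqrestore(&foo->event_lock, flags);
}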
Signed-off-by: Daniel Vetter
---
 drivers/gpu/drm/i915/i915_irq.c | 40 +++++++++++++++++-----------------------
 1 file changed, 17 insertions(+), 23 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index b98ea4e..8bba0c5 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -633,14 +633,13 @@ static void i915_hotplug_work_func(struct work_struct *work)
 	drm_kms_helper_hotplug_event(dev);
 }
 
-static void ironlake_handle_rps_change(struct drm_device *dev)
+static void ironlake_rps_change_irq_handler(struct drm_device *dev)
 {
 	drm_i915_private_t *dev_priv = dev->dev_private;
 	u32 busy_up, busy_down, max_avg, min_avg;
 	u8 new_delay;
-	unsigned long flags;
 
-	spin_lock_irqsave(&mchdev_lock, flags);
+	spin_lock(&mchdev_lock);
 
 	I915_WRITE16(MEMINTRSTS, I915_READ(MEMINTRSTS));
 
@@ -668,7 +667,7 @@ static void ironlake_handle_rps_change(struct drm_device *dev)
 	if (ironlake_set_drps(dev, new_delay))
 		dev_priv->ips.cur_delay = new_delay;
 
-	spin_unlock_irqrestore(&mchdev_lock, flags);
+	spin_unlock(&mchdev_lock);
 
 	return;
 }
@@ -804,18 +803,17 @@ static void ivybridge_parity_work(struct work_struct *work)
 	kfree(parity_event[1]);
 }
 
-static void ivybridge_handle_parity_error(struct drm_device *dev)
+static void ivybridge_parity_error_irq_handler(struct drm_device *dev)
 {
 	drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private;
-	unsigned long flags;
 
 	if (!HAS_L3_GPU_CACHE(dev))
 		return;
 
-	spin_lock_irqsave(&dev_priv->irq_lock, flags);
+	spin_lock(&dev_priv->irq_lock);
 	dev_priv->gt_irq_mask |= GT_RENDER_L3_PARITY_ERROR_INTERRUPT;
 	I915_WRITE(GTIMR, dev_priv->gt_irq_mask);
-	spin_unlock_irqrestore(&dev_priv->irq_lock, flags);
+	spin_unlock(&dev_priv->irq_lock);
 
 	queue_work(dev_priv->wq, &dev_priv->l3_parity.error_work);
 }
@@ -845,11 +843,9 @@ static void snb_gt_irq_handler(struct drm_device *dev,
 }
 
 /* Legacy way of handling PM interrupts */
-static void gen6_queue_rps_work(struct drm_i915_private *dev_priv,
-				u32 pm_iir)
+static void gen6_rps_irq_handler(struct drm_i915_private *dev_priv,
+				 u32 pm_iir)
 {
-	unsigned long flags;
-
 	/*
 	 * IIR bits should never already be set because IMR should
 	 * prevent an interrupt from being shown in IIR. The warning
@@ -860,11 +856,11 @@ static void gen6_queue_rps_work(struct drm_i915_private *dev_priv,
 	 * The mask bit in IMR is cleared by dev_priv->rps.work.
 	 */
 
-	spin_lock_irqsave(&dev_priv->rps.lock, flags);
+	spin_lock(&dev_priv->rps.lock);
 	dev_priv->rps.pm_iir |= pm_iir;
 	I915_WRITE(GEN6_PMIMR, dev_priv->rps.pm_iir);
 	POSTING_READ(GEN6_PMIMR);
-	spin_unlock_irqrestore(&dev_priv->rps.lock, flags);
+	spin_unlock(&dev_priv->rps.lock);
 
 	queue_work(dev_priv->wq, &dev_priv->rps.work);
 }
@@ -928,7 +924,7 @@ static void dp_aux_irq_handler(struct drm_device *dev)
 	wake_up_all(&dev_priv->gmbus_wait_queue);
 }
 
-/* Unlike gen6_queue_rps_work() from which this function is originally derived,
+/* Unlike gen6_rps_irq_handler() from which this function is originally derived,
  * we must be able to deal with other PM interrupts. This is complicated because
  * of the way in which we use the masks to defer the RPS work (which for
  * posterity is necessary because of forcewake).
@@ -936,9 +932,7 @@ static void dp_aux_irq_handler(struct drm_device *dev)
 static void hsw_pm_irq_handler(struct drm_i915_private *dev_priv,
 			       u32 pm_iir)
 {
-	unsigned long flags;
-
-	spin_lock_irqsave(&dev_priv->rps.lock, flags);
+	spin_lock(&dev_priv->rps.lock);
 	dev_priv->rps.pm_iir |= pm_iir & GEN6_PM_RPS_EVENTS;
 	if (dev_priv->rps.pm_iir) {
 		I915_WRITE(GEN6_PMIMR, dev_priv->rps.pm_iir);
@@ -947,7 +941,7 @@ static void hsw_pm_irq_handler(struct drm_i915_private *dev_priv,
 		/* TODO: if queue_work is slow, move it out of the spinlock */
 		queue_work(dev_priv->wq, &dev_priv->rps.work);
 	}
-	spin_unlock_irqrestore(&dev_priv->rps.lock, flags);
+	spin_unlock(&dev_priv->rps.lock);
 
 	if (pm_iir & ~GEN6_PM_RPS_EVENTS) {
 		if (pm_iir & PM_VEBOX_USER_INTERRUPT)
@@ -1029,7 +1023,7 @@ static irqreturn_t valleyview_irq_handler(int irq, void *arg)
 			gmbus_irq_handler(dev);
 
 		if (pm_iir & GEN6_PM_RPS_EVENTS)
-			gen6_queue_rps_work(dev_priv, pm_iir);
+			gen6_rps_irq_handler(dev_priv, pm_iir);
 
 		I915_WRITE(GTIIR, gt_iir);
 		I915_WRITE(GEN6_PMIIR, pm_iir);
@@ -1267,7 +1261,7 @@ static irqreturn_t ivybridge_irq_handler(int irq, void *arg)
 		if (IS_HASWELL(dev))
 			hsw_pm_irq_handler(dev_priv, pm_iir);
 		else if (pm_iir & GEN6_PM_RPS_EVENTS)
-			gen6_queue_rps_work(dev_priv, pm_iir);
+			gen6_rps_irq_handler(dev_priv, pm_iir);
 		I915_WRITE(GEN6_PMIIR, pm_iir);
 		ret = IRQ_HANDLED;
 	}
@@ -1383,10 +1377,10 @@ static irqreturn_t ironlake_irq_handler(int irq, void *arg)
 	}
 
 	if (IS_GEN5(dev) && de_iir & DE_PCU_EVENT)
-		ironlake_handle_rps_change(dev);
+		ironlake_rps_change_irq_handler(dev);
 
 	if (IS_GEN6(dev) && pm_iir & GEN6_PM_RPS_EVENTS)
-		gen6_queue_rps_work(dev_priv, pm_iir);
+		gen6_rps_irq_handler(dev_priv, pm_iir);
 
 	I915_WRITE(GTIIR, gt_iir);
 	I915_WRITE(DEIIR, de_iir);