From patchwork Tue Nov 6 16:25:32 2012
X-Patchwork-Submitter: Ben Widawsky
X-Patchwork-Id: 1705251
From: Ben Widawsky
To: intel-gfx@lists.freedesktop.org
Cc: Ben Widawsky
Date: Tue, 6 Nov 2012 16:25:32 +0000
Message-Id: <1352219142-1395-9-git-send-email-ben@bwidawsk.net>
In-Reply-To: <1352219142-1395-1-git-send-email-ben@bwidawsk.net>
References: <1352219142-1395-1-git-send-email-ben@bwidawsk.net>
Subject: [Intel-gfx] [PATCH 08/18] drm/i915: Create a more generic pm handler for hsw+

HSW has some special requirements for the VEBOX. Splitting out the
interrupt handler will make the code a bit nicer and less error prone
when we begin to handle those.

The slight functional change in this patch (queueing work while holding
the spinlock) is intentional, as it makes a subsequent patch a bit
nicer. The change should also only affect HSW platforms.

Based on patches from Haihao Xiang.

CC: Haihao Xiang
Signed-off-by: Ben Widawsky
---
 drivers/gpu/drm/i915/i915_irq.c | 30 +++++++++++++++++++++++++++++-
 1 file changed, 29 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index 2604867..d5aef6c 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -500,6 +500,7 @@ static void snb_gt_irq_handler(struct drm_device *dev,
 		ivybridge_handle_parity_error(dev);
 }
 
+/* Legacy way of handling PM interrupts */
 static void gen6_queue_rps_work(struct drm_i915_private *dev_priv,
 				u32 pm_iir)
 {
@@ -524,6 +525,31 @@ static void gen6_queue_rps_work(struct drm_i915_private *dev_priv,
 	queue_work(dev_priv->wq, &dev_priv->rps.work);
 }
 
+/* Unlike gen6_queue_rps_work() from which this function was originally
+ * derived, we must be able to deal with other PM interrupts. This is
+ * complicated because of the way in which we use the masks to defer the
+ * RPS work (which for posterity is necessary because of forcewake).
+ */
+static void hsw_pm_irq_handler(struct drm_i915_private *dev_priv,
+			       u32 pm_iir)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&dev_priv->rps.lock, flags);
+	dev_priv->rps.pm_iir |= pm_iir & GEN6_PM_DEFERRED_EVENTS;
+	if (dev_priv->rps.pm_iir) {
+		I915_WRITE(GEN6_PMIMR, dev_priv->rps.pm_iir);
+		/* We never want to mask useful interrupts. (also posting read) */
+		WARN_ON(I915_READ_NOTRACE(GEN6_PMIMR) & ~GEN6_PM_DEFERRED_EVENTS);
+		/* TODO: if queue_work is slow, move it out of the spinlock */
+		queue_work(dev_priv->wq, &dev_priv->rps.work);
+	}
+	spin_unlock_irqrestore(&dev_priv->rps.lock, flags);
+
+	if (pm_iir & ~GEN6_PM_DEFERRED_EVENTS)
+		DRM_ERROR("Unexpected PM interrupt\n");
+}
+
 static irqreturn_t valleyview_irq_handler(int irq, void *arg)
 {
 	struct drm_device *dev = (struct drm_device *) arg;
@@ -731,7 +757,9 @@ static irqreturn_t ivybridge_irq_handler(int irq, void *arg)
 
 	pm_iir = I915_READ(GEN6_PMIIR);
 	if (pm_iir) {
-		if (pm_iir & GEN6_PM_DEFERRED_EVENTS)
+		if (IS_HASWELL(dev))
+			hsw_pm_irq_handler(dev_priv, pm_iir);
+		else if (pm_iir & GEN6_PM_DEFERRED_EVENTS)
 			gen6_queue_rps_work(dev_priv, pm_iir);
 		I915_WRITE(GEN6_PMIIR, pm_iir);
 		ret = IRQ_HANDLED;
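
A note for readers following along: the core of hsw_pm_irq_handler() is a
"mask and defer" pattern: latch the deferred IIR bits under a spinlock, mask
them in the IMR so they cannot re-fire, and queue work to service them later
with forcewake held. Below is a minimal, self-contained sketch of that
pattern outside the i915 code, under stated assumptions: struct my_dev,
FAKE_IIR, FAKE_IMR, my_irq_handler(), and my_work_handler() are all
hypothetical names invented for illustration, not the driver's API.

/* Sketch only: the mask-and-defer interrupt pattern, reduced to its core.
 * All register offsets and the my_dev structure are made up; setup code
 * (spin_lock_init(), INIT_WORK(), request_irq()) is elided.
 */
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

struct my_dev {
	void __iomem *mmio;	/* device register window */
	spinlock_t lock;	/* protects deferred_iir and the IMR write */
	u32 deferred_iir;	/* event bits waiting for the work handler */
	struct work_struct work;
};

#define FAKE_IIR 0x00	/* hypothetical interrupt identity register */
#define FAKE_IMR 0x04	/* hypothetical interrupt mask register */

static irqreturn_t my_irq_handler(int irq, void *arg)
{
	struct my_dev *dev = arg;
	u32 iir = readl(dev->mmio + FAKE_IIR);

	if (!iir)
		return IRQ_NONE;

	spin_lock(&dev->lock);
	dev->deferred_iir |= iir;
	/* Mask the deferred events so they cannot re-fire before the work
	 * handler services them; then, as in the patch above, queue the
	 * work while still holding the lock. */
	writel(dev->deferred_iir, dev->mmio + FAKE_IMR);
	queue_work(system_wq, &dev->work);
	spin_unlock(&dev->lock);

	writel(iir, dev->mmio + FAKE_IIR);	/* ack the interrupt */
	return IRQ_HANDLED;
}

static void my_work_handler(struct work_struct *work)
{
	struct my_dev *dev = container_of(work, struct my_dev, work);
	u32 iir;

	spin_lock_irq(&dev->lock);
	iir = dev->deferred_iir;
	dev->deferred_iir = 0;
	writel(0, dev->mmio + FAKE_IMR);	/* unmask for the next round */
	spin_unlock_irq(&dev->lock);

	/* ... act on iir here, e.g. re-evaluate the RPS frequency ... */
}

The mask write is what makes queueing from inside the spinlock safe: until
the work handler clears the IMR bits, the hardware cannot raise the same
event again, so the deferred bits and the queued work stay in sync.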