From patchwork Thu Jul 4 21:35:26 2013
X-Patchwork-Submitter: Daniel Vetter
X-Patchwork-Id: 2823948
From: Daniel Vetter
To: Intel Graphics Development
Cc: Daniel Vetter
Subject: [Intel-gfx] [PATCH 06/14] drm/i915: streamline hsw_pm_irq_handler
Date: Thu, 4 Jul 2013 23:35:26 +0200
Message-Id: <1372973734-7601-7-git-send-email-daniel.vetter@ffwll.ch>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1372973734-7601-1-git-send-email-daniel.vetter@ffwll.ch>
References: <1372973734-7601-1-git-send-email-daniel.vetter@ffwll.ch>

The if (pm_iir & ~GEN6_PM_RPS_EVENTS) check was redundant. On the other
hand, adding a check for RPS events allows us to avoid grabbing the
spinlock for VECS interrupts.

v2: Drop a misplaced hunk, which has now moved to the right patch.

Reviewed-by: Ben Widawsky
Signed-off-by: Daniel Vetter
---
 drivers/gpu/drm/i915/i915_irq.c | 20 +++++++++-----------
 1 file changed, 9 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index 0e2663c..b1185e2 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -961,25 +961,23 @@ static void dp_aux_irq_handler(struct drm_device *dev)
 
 static void hsw_pm_irq_handler(struct drm_i915_private *dev_priv,
 			       u32 pm_iir)
 {
-	spin_lock(&dev_priv->rps.lock);
-	dev_priv->rps.pm_iir |= pm_iir & GEN6_PM_RPS_EVENTS;
-	if (dev_priv->rps.pm_iir) {
+	if (pm_iir & GEN6_PM_RPS_EVENTS) {
+		spin_lock(&dev_priv->rps.lock);
+		dev_priv->rps.pm_iir |= pm_iir & GEN6_PM_RPS_EVENTS;
 		I915_WRITE(GEN6_PMIMR, dev_priv->rps.pm_iir);
 		/* never want to mask useful interrupts. (also posting read) */
 		WARN_ON(I915_READ_NOTRACE(GEN6_PMIMR) & ~GEN6_PM_RPS_EVENTS);
 		/* TODO: if queue_work is slow, move it out of the spinlock */
 		queue_work(dev_priv->wq, &dev_priv->rps.work);
+		spin_unlock(&dev_priv->rps.lock);
 	}
-	spin_unlock(&dev_priv->rps.lock);
 
-	if (pm_iir & ~GEN6_PM_RPS_EVENTS) {
-		if (pm_iir & PM_VEBOX_USER_INTERRUPT)
-			notify_ring(dev_priv->dev, &dev_priv->ring[VECS]);
+	if (pm_iir & PM_VEBOX_USER_INTERRUPT)
+		notify_ring(dev_priv->dev, &dev_priv->ring[VECS]);
 
-		if (pm_iir & PM_VEBOX_CS_ERROR_INTERRUPT) {
-			DRM_ERROR("VEBOX CS error interrupt 0x%08x\n", pm_iir);
-			i915_handle_error(dev_priv->dev, false);
-		}
+	if (pm_iir & PM_VEBOX_CS_ERROR_INTERRUPT) {
+		DRM_ERROR("VEBOX CS error interrupt 0x%08x\n", pm_iir);
+		i915_handle_error(dev_priv->dev, false);
 	}
 }
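
For reference, this is roughly how hsw_pm_irq_handler reads with the patch
applied, reassembled from the hunk above (indentation and blank lines
reconstructed from the diff, so treat it as a reading aid rather than a
verbatim copy of the tree). The point of the restructuring is visible here:
the rps.lock is only taken when an RPS event bit is actually set in pm_iir,
so a VECS-only interrupt never touches the spinlock.

static void hsw_pm_irq_handler(struct drm_i915_private *dev_priv,
			       u32 pm_iir)
{
	/* Only touch the RPS bookkeeping (and its lock) for RPS events. */
	if (pm_iir & GEN6_PM_RPS_EVENTS) {
		spin_lock(&dev_priv->rps.lock);
		dev_priv->rps.pm_iir |= pm_iir & GEN6_PM_RPS_EVENTS;
		I915_WRITE(GEN6_PMIMR, dev_priv->rps.pm_iir);
		/* never want to mask useful interrupts. (also posting read) */
		WARN_ON(I915_READ_NOTRACE(GEN6_PMIMR) & ~GEN6_PM_RPS_EVENTS);
		/* TODO: if queue_work is slow, move it out of the spinlock */
		queue_work(dev_priv->wq, &dev_priv->rps.work);
		spin_unlock(&dev_priv->rps.lock);
	}

	/* VECS bits are handled without the lock. */
	if (pm_iir & PM_VEBOX_USER_INTERRUPT)
		notify_ring(dev_priv->dev, &dev_priv->ring[VECS]);

	if (pm_iir & PM_VEBOX_CS_ERROR_INTERRUPT) {
		DRM_ERROR("VEBOX CS error interrupt 0x%08x\n", pm_iir);
		i915_handle_error(dev_priv->dev, false);
	}
}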