From patchwork Thu May 21 23:53:53 2015
X-Patchwork-Submitter: Doug Anderson
X-Patchwork-Id: 6460471
From: Doug Anderson
To: olof@lixom.net, Arnd Bergmann, Russell King
Cc: mark.rutland@arm.com, nm@ti.com, Dmitry Torokhov, linux@arm.linux.org.uk, Heiko Stuebner, Andrew Bresticker, linux-kernel@vger.kernel.org, t.figa@samsung.com, Daniel Lezcano, Doug Anderson, sudeep.holla@arm.com, marc.zyngier@arm.com, Magnus Damm, Barry Song, Andres Salomon, joe@perches.com, Jamie Iles, Thomas Gleixner, linux-arm-kernel@lists.infradead.org
Subject: [PATCH] RFC: ARM: Don't break affinity for non-balancable IRQs to fix perf
Date: Thu, 21 May 2015 16:53:53 -0700
Message-Id: <1432252433-25206-1-git-send-email-dianders@chromium.org>
Right now we don't ever try to break affinity for "per CPU" IRQs when a
CPU goes down.  We should apply this logic to all non-balancable IRQs.
All non-balancable IRQs I can find are supposed to be targeted at a
specific CPU and don't make sense on other CPUs.  From a "grep", the
list of interrupts possible on ARM devices that are non-balancable and
_not_ per CPU consists of at most things in these files:
- arch/arm/kernel/perf_event_cpu.c
- drivers/clocksource/*.c

It's "perf_event_cpu" that we're trying to fix here.  For
perf_event_cpu, we actually expect to have a single IRQ per CPU.  This
doesn't appear to be an "IRQ_PER_CPU IRQ" because each CPU has a
distinct IRQ number.  However, moving a perf event IRQ from one CPU to
another makes no sense since the IRQ can only be handled on the CPU
it's destined for (we can't access the relevant CP15 registers from
the other CPUs).  While we could come up with a new concept just for
perf_event_cpu that indicates a non "per cpu" IRQ that also shouldn't
be migrated, simply using the already present IRQF_NOBALANCING seems
safe and should work just fine.

The clocksource files I've checked appear to use IRQF_NOBALANCING for
interrupts that are also supposed to be destined for a single CPU.
For instance:
- exynos_mct.c: Used for local (per CPU) timers
- qcom-timer.c: Also used for local timers
- dw_apb_timer.c: The register function has a "cpu" parameter,
  indicating that the IRQ is targeted at a certain CPU.
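For context (this is not part of the patch), a driver along the lines of perf_event_cpu.c would request its per-CPU-targeted interrupts roughly as below; the handler and device names are placeholders, and only the IRQF_NOBALANCING flag is the point:

```c
/* Illustrative sketch only: request one distinct IRQ per CPU, marked
 * IRQF_NOBALANCING so that affinity changes (and, with this patch,
 * CPU hotplug migration) leave it bound to its CPU.
 * "armpmu_handler" and "pmu_dev" are placeholder names.
 */
for_each_possible_cpu(cpu) {
	int irq = per_cpu_irq[cpu];	/* distinct IRQ number per CPU */

	err = request_irq(irq, armpmu_handler,
			  IRQF_NOBALANCING, "arm-pmu", pmu_dev);
	if (err)
		break;
	irq_set_affinity(irq, cpumask_of(cpu));
}
```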
Note that without this change, if you're doing perf recording across a
suspend/resume cycle (where CPUs go down and then come back up) you'll
get warnings about "IRQ X no longer affine to CPUn", then eventually
warnings about 'irq X: nobody cared (try booting with the "irqpoll"
option)' and "Disabling IRQ #X".  When this happens, perf recording
(obviously) stops.  After this change the problem is resolved.

A similar change ought to be made to arm64, and likely to other
architectures as well, if this concept of "per cpu" interrupts with
unique IRQ numbers makes sense there too.

Signed-off-by: Doug Anderson
---
 arch/arm/kernel/irq.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm/kernel/irq.c b/arch/arm/kernel/irq.c
index 350f188..08399fe 100644
--- a/arch/arm/kernel/irq.c
+++ b/arch/arm/kernel/irq.c
@@ -145,10 +145,10 @@ static bool migrate_one_irq(struct irq_desc *desc)
 	bool ret = false;
 
 	/*
-	 * If this is a per-CPU interrupt, or the affinity does not
+	 * If this is a non-balancable interrupt, or the affinity does not
 	 * include this CPU, then we have nothing to do.
 	 */
-	if (irqd_is_per_cpu(d) || !cpumask_test_cpu(smp_processor_id(), affinity))
+	if (!irqd_can_balance(d) || !cpumask_test_cpu(smp_processor_id(), affinity))
 		return false;
 
 	if (cpumask_any_and(affinity, cpu_online_mask) >= nr_cpu_ids) {