From patchwork Fri Nov 7 16:25:27 2014
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: Mark Rutland, Peter Zijlstra, will.deacon@arm.com,
 linux-kernel@vger.kernel.org, Arnaldo Carvalho de Melo, Ingo Molnar,
 Paul Mackerras
Subject: [PATCH 02/11] perf: allow for PMU-specific event filtering
Date: Fri, 7 Nov 2014 16:25:27 +0000
Message-Id: <1415377536-12841-3-git-send-email-mark.rutland@arm.com>
In-Reply-To: <1415377536-12841-1-git-send-email-mark.rutland@arm.com>
References: <1415377536-12841-1-git-send-email-mark.rutland@arm.com>

In certain circumstances it may not be possible to schedule particular
events due to constraints other than a lack of hardware counters (e.g.
on big.LITTLE systems where CPUs support different events). The core
perf event code does not distinguish these cases and pessimistically
assumes that any failure to schedule an event is due to a lack of
hardware counters, ending event group scheduling early despite hardware
counters remaining available.

When such an unschedulable event exists in a ctx->flexible_groups list
it can unnecessarily prevent event groups following it in the list from
being scheduled until it is rotated to the end of the list.
This can result in events being scheduled for only a portion of the
time they would otherwise be eligible, and for short-running programs
unfortunate initial list ordering can result in no events being
counted.

This patch adds a new (optional) filter_match function pointer to
struct pmu which backends can use to tell the perf core whether or not
it is worth attempting to schedule an event. This plugs into the
existing event_filter_match logic, and makes it possible to avoid the
scheduling problem described above.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra
Cc: Paul Mackerras
Cc: Ingo Molnar
Cc: Arnaldo Carvalho de Melo
---
 include/linux/perf_event.h | 5 +++++
 kernel/events/core.c       | 8 +++++++-
 2 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 893a0d0..80c5f5f 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -263,6 +263,11 @@ struct pmu {
 	 * flush branch stack on context-switches (needed in cpu-wide mode)
 	 */
 	void (*flush_branch_stack)	(void);
+
+	/*
+	 * Filter events for PMU-specific reasons.
+	 */
+	int (*filter_match)		(struct perf_event *event); /* optional */
 };
 
 /**
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 2b02c9f..770b276 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -1428,11 +1428,17 @@ static int __init perf_workqueue_init(void)
 
 core_initcall(perf_workqueue_init);
 
+static inline int pmu_filter_match(struct perf_event *event)
+{
+	struct pmu *pmu = event->pmu;
+	return pmu->filter_match ? pmu->filter_match(event) : 1;
+}
+
 static inline int event_filter_match(struct perf_event *event)
 {
 	return (event->cpu == -1 || event->cpu == smp_processor_id())
-	    && perf_cgroup_match(event);
+	    && perf_cgroup_match(event) && pmu_filter_match(event);
 }
 
 static void
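
As an illustration of how a backend might use the new hook (this sketch is
not part of the patch): a driver for a PMU that is only present on a subset
of CPUs, e.g. one cluster of a big.LITTLE system, could record the CPUs it
supports in a cpumask and decline events elsewhere. The my_pmu type, the
supported_cpus field, and my_pmu_filter_match are hypothetical names, not
code from this series.

#include <linux/cpumask.h>
#include <linux/kernel.h>
#include <linux/perf_event.h>
#include <linux/smp.h>

/* Hypothetical driver state wrapping the generic struct pmu. */
struct my_pmu {
	struct pmu	pmu;
	cpumask_t	supported_cpus;	/* CPUs this PMU can count on */
};

static int my_pmu_filter_match(struct perf_event *event)
{
	struct my_pmu *my_pmu = container_of(event->pmu, struct my_pmu, pmu);

	/*
	 * Non-zero: worth attempting to schedule the event on this CPU.
	 * Zero: the event is filtered out here, so group scheduling can
	 * skip it without treating it as a lack of hardware counters.
	 */
	return cpumask_test_cpu(smp_processor_id(), &my_pmu->supported_cpus);
}

The driver would set .filter_match = my_pmu_filter_match alongside its other
struct pmu callbacks before calling perf_pmu_register(). Since
pmu_filter_match() treats a NULL callback as a match, existing PMU drivers
that never set filter_match are unaffected.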