From patchwork Mon Jan  4 11:54:44 2016
From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: mark.rutland@arm.com, peterz@infradead.org, punit.agrawal@arm.com,
 linux-kernel@vger.kernel.org, arm@kernel.org
Subject: [PATCH v5 05/11] arm-cci PMU: Delay counter writes to pmu_enable
Date: Mon, 4 Jan 2016 11:54:44 +0000
Message-Id: <1451908490-2615-6-git-send-email-suzuki.poulose@arm.com>
In-Reply-To: <1451908490-2615-1-git-send-email-suzuki.poulose@arm.com>
References: <1451908490-2615-1-git-send-email-suzuki.poulose@arm.com>

Delay setting the event period for enabled events until pmu::pmu_enable().
We mark event.hw->state with PERF_HES_ARCH for events which we know have
had their counts recorded and have been started. Since we reprogram the
counters every time before counting begins, we can program all the event
counters which are !STOPPED && ARCH in one go. Grouping the writes to the
counters amortises the cost of the operation on PMUs where it is expensive
(e.g., CCI-500).

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Punit Agrawal <punit.agrawal@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
---
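Note for readers: a quick way to see the intended state flow without the
driver around it is the toy below, a compilable userspace model of the
scheme rather than kernel code. The PERF_HES_* values mirror
linux/perf_event.h, but NUM_CNTRS, the period value, and write_counters()
(a printf() stand-in for the driver's pmu_write_counters()) are made up
for illustration.

/*
 * Userspace model of the deferred counter-write scheme in this patch.
 * Not driver code: the "hardware write" is a printf() so the batching
 * is visible.
 */
#include <stdio.h>

#define NUM_CNTRS		4
#define CCI_CNTR_PERIOD		0x80000000UL	/* illustrative value */
#define PERF_HES_STOPPED	(1 << 0)	/* as in linux/perf_event.h */
#define PERF_HES_ARCH		(1 << 2)

static unsigned int state[NUM_CNTRS];
static unsigned long prev_count[NUM_CNTRS];

/* Stand-in for pmu_write_counters(): one grouped, "expensive" access */
static void write_counters(const unsigned int *mask, unsigned long val)
{
	printf("batched write:");
	for (int i = 0; i < NUM_CNTRS; i++)
		if (mask[i])
			printf(" cntr%d=%#lx", i, val);
	printf("\n");
}

/* pmu::start(): no hardware write; only mark the counter as pending */
static void cci_start(int idx)
{
	state[idx] = PERF_HES_ARCH;
}

/* pmu::pmu_enable(): program every counter that is !STOPPED && ARCH */
static void cci_enable(void)
{
	unsigned int mask[NUM_CNTRS] = { 0 };

	for (int i = 0; i < NUM_CNTRS; i++) {
		if (state[i] & PERF_HES_STOPPED)
			continue;
		if (state[i] & PERF_HES_ARCH) {
			mask[i] = 1;
			state[i] &= ~PERF_HES_ARCH;
			prev_count[i] = CCI_CNTR_PERIOD;
		}
	}
	write_counters(mask, CCI_CNTR_PERIOD);
}

int main(void)
{
	for (int i = 0; i < NUM_CNTRS; i++)
		state[i] = PERF_HES_STOPPED;	/* all counters idle */

	cci_start(0);		/* two events started ...              */
	cci_start(2);
	cci_enable();		/* ... one grouped counter write       */
	return 0;
}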
 drivers/bus/arm-cci.c | 42 ++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 40 insertions(+), 2 deletions(-)

diff --git a/drivers/bus/arm-cci.c b/drivers/bus/arm-cci.c
index 0189f3a..c768ee4 100644
--- a/drivers/bus/arm-cci.c
+++ b/drivers/bus/arm-cci.c
@@ -916,6 +916,40 @@ static void hw_perf_event_destroy(struct perf_event *event)
 	}
 }
 
+/*
+ * Program the CCI PMU counters which have PERF_HES_ARCH set
+ * with the event period and mark them ready before we enable
+ * the PMU.
+ */
+void cci_pmu_update_counters(struct cci_pmu *cci_pmu)
+{
+	int i;
+	unsigned long mask[BITS_TO_LONGS(cci_pmu->num_cntrs)];
+
+	memset(mask, 0, BITS_TO_LONGS(cci_pmu->num_cntrs) * sizeof(unsigned long));
+
+	for_each_set_bit(i, cci_pmu->hw_events.used_mask, cci_pmu->num_cntrs) {
+		struct hw_perf_event *hwe;
+
+		if (!cci_pmu->hw_events.events[i]) {
+			WARN_ON(1);
+			continue;
+		}
+
+		hwe = &cci_pmu->hw_events.events[i]->hw;
+		/* Leave the events which are not counting */
+		if (hwe->state & PERF_HES_STOPPED)
+			continue;
+		if (hwe->state & PERF_HES_ARCH) {
+			set_bit(i, mask);
+			hwe->state &= ~PERF_HES_ARCH;
+			local64_set(&hwe->prev_count, CCI_CNTR_PERIOD);
+		}
+	}
+
+	pmu_write_counters(cci_pmu, mask, CCI_CNTR_PERIOD);
+}
+
 static void cci_pmu_enable(struct pmu *pmu)
 {
 	struct cci_pmu *cci_pmu = to_cci_pmu(pmu);
@@ -927,6 +961,7 @@ static void cci_pmu_enable(struct pmu *pmu)
 		return;
 
 	raw_spin_lock_irqsave(&hw_events->pmu_lock, flags);
+	cci_pmu_update_counters(cci_pmu);
 	__cci_pmu_enable();
 	raw_spin_unlock_irqrestore(&hw_events->pmu_lock, flags);
 
@@ -980,8 +1015,11 @@ static void cci_pmu_start(struct perf_event *event, int pmu_flags)
 	/* Configure the counter unless you are counting a fixed event */
 	if (!pmu_fixed_hw_idx(cci_pmu, idx))
 		pmu_set_event(cci_pmu, idx, hwc->config_base);
-
-	pmu_event_set_period(event);
+	/*
+	 * Mark this counter so that we can program the
+	 * counter with the event period; see cci_pmu_enable().
+	 */
+	hwc->state = PERF_HES_ARCH;
 	pmu_enable_counter(cci_pmu, idx);
 
 	raw_spin_unlock_irqrestore(&hw_events->pmu_lock, flags);
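
Note for readers: the amortisation argument is easiest to see against the
write side, and pmu_write_counters() is not part of this hunk. The sketch
below is a hedged guess at a mask-driven shape for it, assuming (per the
CCI-500 remark in the changelog) that counters are only writable while
counting is stopped, so the stop/start toggle is paid once per group
rather than once per event. __hw_counter_write() is a hypothetical
register-access helper, and __cci_pmu_disable() is assumed to mirror the
__cci_pmu_enable() seen in the hunk above.

/*
 * Illustrative only, not from this series: a grouped counter write
 * that pays the expensive precondition once for the whole mask.
 */
static void pmu_write_counters(struct cci_pmu *cci_pmu,
			       unsigned long *mask, u32 value)
{
	int i;

	__cci_pmu_disable();			/* assumed helper */
	for_each_set_bit(i, mask, cci_pmu->num_cntrs)
		__hw_counter_write(cci_pmu, i, value);	/* hypothetical */
	__cci_pmu_enable();
}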