From patchwork Thu Feb 18 16:50:13 2016
X-Patchwork-Submitter: Jan Glauber
X-Patchwork-Id: 8352181
From: Jan Glauber
To: Will Deacon, Mark Rutland
Cc: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, Jan Glauber
Subject: [PATCH v4 4/5] arm64/perf: Enable PMCR long cycle counter bit
Date: Thu, 18 Feb 2016 17:50:13 +0100
Message-Id: <467597048eda3004bd69f1fbe3981aab111e00dd.1455810755.git.jglauber@cavium.com>

With the long cycle counter bit (LC) disabled, the cycle counter does not
work on the ThunderX SoC (ThunderX implements only AArch64). Also,
according to the documentation, LC == 0 is deprecated.

To keep the code simple, the patch does not introduce 64-bit wide counter
functions. Instead, writing the cycle counter always sets the upper
32 bits, so overflow interrupts are generated as before.

ARMV8_PMCR_MASK is extended accordingly so that armv8pmu_pmcr_write()
does not mask off the LC bit.

Original patch from Andrew Pinski

Signed-off-by: Jan Glauber
---
 arch/arm64/kernel/perf_event.c | 23 +++++++++++++++++------
 1 file changed, 17 insertions(+), 6 deletions(-)
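A quick standalone illustration (not part of the patch; all names below
are made up for the example) of why presetting the upper 32 bits keeps the
overflow behaviour intact: the number of ticks until the preloaded 64-bit
counter wraps equals the number of ticks until a plain 32-bit counter
would wrap, so the interrupt period is unchanged.

	/*
	 * Illustrative only: check that presetting the upper 32 bits of a
	 * 64-bit counter preserves the 32-bit overflow distance.
	 */
	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint32_t value = 0xfffffff0;	/* 32-bit count being restored */
		uint64_t value64 = 0xffffffff00000000ULL | value;

		/* ticks until the 64-bit counter wraps: 2^64 - value64 */
		uint64_t wrap64 = -value64;
		/* ticks until a pure 32-bit counter would wrap: 2^32 - value */
		uint64_t wrap32 = ((uint64_t)UINT32_MAX + 1) - value;

		printf("64-bit wrap after %llu ticks, 32-bit wrap after %llu\n",
		       (unsigned long long)wrap64,
		       (unsigned long long)wrap32);	/* both print 16 */
		return 0;
	}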
diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index 0ed05f6..c68fa98 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -405,6 +405,7 @@ static const struct attribute_group *armv8_pmuv3_attr_groups[] = {
 #define ARMV8_PMCR_D		(1 << 3) /* CCNT counts every 64th cpu cycle */
 #define ARMV8_PMCR_X		(1 << 4) /* Export to ETM */
 #define ARMV8_PMCR_DP		(1 << 5) /* Disable CCNT if non-invasive debug */
+#define ARMV8_PMCR_LC		(1 << 6) /* Overflow on 64 bit cycle counter */
 #define	ARMV8_PMCR_N_SHIFT	11	 /* Number of counters supported */
 #define	ARMV8_PMCR_N_MASK	0x1f
-#define	ARMV8_PMCR_MASK		0x3f	 /* Mask for writable bits */
+#define	ARMV8_PMCR_MASK		0x7f	 /* Mask for writable bits */
@@ -494,9 +495,16 @@ static inline void armv8pmu_write_counter(struct perf_event *event, u32 value)
 	if (!armv8pmu_counter_valid(cpu_pmu, idx))
 		pr_err("CPU%u writing wrong counter %d\n",
 			smp_processor_id(), idx);
-	else if (idx == ARMV8_IDX_CYCLE_COUNTER)
-		asm volatile("msr pmccntr_el0, %0" :: "r" (value));
-	else if (armv8pmu_select_counter(idx) == idx)
+	else if (idx == ARMV8_IDX_CYCLE_COUNTER) {
+		/*
+		 * Set the upper 32 bits as this is a 64-bit counter, but we
+		 * only count using the lower 32 bits and we want an interrupt
+		 * when it overflows.
+		 */
+		u64 value64 = 0xffffffff00000000ULL | value;
+
+		asm volatile("msr pmccntr_el0, %0" :: "r" (value64));
+	} else if (armv8pmu_select_counter(idx) == idx)
 		asm volatile("msr pmxevcntr_el0, %0" :: "r" (value));
 }
 
@@ -768,8 +776,11 @@ static void armv8pmu_reset(void *info)
 		armv8pmu_disable_intens(idx);
 	}
 
-	/* Initialize & Reset PMNC: C and P bits. */
-	armv8pmu_pmcr_write(ARMV8_PMCR_P | ARMV8_PMCR_C);
+	/*
+	 * Initialize & Reset PMNC. Request overflow interrupt for the
+	 * 64-bit cycle counter, but cheat in armv8pmu_write_counter().
+	 */
+	armv8pmu_pmcr_write(ARMV8_PMCR_P | ARMV8_PMCR_C | ARMV8_PMCR_LC);
 }
 
 static int armv8_pmuv3_map_event(struct perf_event *event)
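As a sanity check on the reset path (illustrative only, not part of the
patch; the P and C bit positions are taken from definitions earlier in the
file that the hunks above do not show, so treat them as assumptions here):
armv8pmu_pmcr_write() ANDs its argument with ARMV8_PMCR_MASK before
writing PMCR_EL0, which is why the writable-bits mask has to grow from
0x3f to 0x7f for the LC bit to reach the hardware.

	#include <stdio.h>

	#define ARMV8_PMCR_P	(1 << 1)	/* Reset all event counters */
	#define ARMV8_PMCR_C	(1 << 2)	/* Reset cycle counter */
	#define ARMV8_PMCR_LC	(1 << 6)	/* Overflow on 64 bit cycle counter */

	int main(void)
	{
		unsigned int val = ARMV8_PMCR_P | ARMV8_PMCR_C | ARMV8_PMCR_LC;

		printf("reset value          %#x\n", val);	  /* 0x46 */
		printf("old mask 0x3f keeps  %#x\n", val & 0x3f); /* 0x06, LC lost */
		printf("new mask 0x7f keeps  %#x\n", val & 0x7f); /* 0x46, LC kept */
		return 0;
	}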