From patchwork Wed Nov 2 22:22:16 2016
From: Wei Huang
To: cov@codeaurora.org
Cc: qemu-devel@nongnu.org,
	kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu,
	shannon.zhao@linaro.org, alistair.francis@xilinx.com,
	croberts@codeaurora.org, alindsay@codeaurora.org, drjones@redhat.com
Subject: [kvm-unit-tests PATCHv7 2/3] arm: pmu: Check cycle count increases
Date: Wed, 2 Nov 2016 17:22:16 -0500
Message-Id: <1478125337-11770-3-git-send-email-wei@redhat.com>
In-Reply-To: <1478125337-11770-1-git-send-email-wei@redhat.com>
References: <1478125337-11770-1-git-send-email-wei@redhat.com>

Ensure that reads of PMCCNTR_EL0 are monotonically increasing, even for
the smallest delta between two subsequent reads.

Signed-off-by: Christopher Covington
Signed-off-by: Wei Huang
---
 arm/pmu.c | 100 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 100 insertions(+)

diff --git a/arm/pmu.c b/arm/pmu.c
index 42d0ee1..65b7df1 100644
--- a/arm/pmu.c
+++ b/arm/pmu.c
@@ -14,6 +14,9 @@
  */
 #include "libcflat.h"
 
+#define NR_SAMPLES 10
+#define ARMV8_PMU_CYCLE_IDX 31
+
 #if defined(__arm__)
 static inline uint32_t get_pmcr(void)
 {
@@ -22,6 +25,43 @@ static inline uint32_t get_pmcr(void)
 	asm volatile("mrc p15, 0, %0, c9, c12, 0" : "=r" (ret));
 	return ret;
 }
+
+static inline void set_pmcr(uint32_t pmcr)
+{
+	asm volatile("mcr p15, 0, %0, c9, c12, 0" : : "r" (pmcr));
+}
+
+static inline void set_pmccfiltr(uint32_t filter)
+{
+	uint32_t cycle_idx = ARMV8_PMU_CYCLE_IDX;
+
+	asm volatile("mcr p15, 0, %0, c9, c12, 5" : : "r" (cycle_idx));
+	asm volatile("mcr p15, 0, %0, c9, c13, 1" : : "r" (filter));
+}
+
+/*
+ * While PMCCNTR can be accessed as a 64-bit coprocessor register, returning 64
+ * bits doesn't seem worth the trouble when differential usage of the result is
+ * expected (with differences that can easily fit in 32 bits). So just return
+ * the lower 32 bits of the cycle count in AArch32.
+ */
+static inline unsigned long get_pmccntr(void)
+{
+	unsigned long cycles;
+
+	asm volatile("mrc p15, 0, %0, c9, c13, 0" : "=r" (cycles));
+	return cycles;
+}
+
+static inline void enable_counter(uint32_t idx)
+{
+	asm volatile("mcr p15, 0, %0, c9, c12, 1" : : "r" (1 << idx));
+}
+
+static inline void disable_counter(uint32_t idx)
+{
+	asm volatile("mcr p15, 0, %0, c9, c12, 2" : : "r" (1 << idx));
+}
 #elif defined(__aarch64__)
 static inline uint32_t get_pmcr(void)
 {
@@ -30,6 +70,34 @@ static inline uint32_t get_pmcr(void)
 	asm volatile("mrs %0, pmcr_el0" : "=r" (ret));
 	return ret;
 }
+
+static inline void set_pmcr(uint32_t pmcr)
+{
+	asm volatile("msr pmcr_el0, %0" : : "r" (pmcr));
+}
+
+static inline void set_pmccfiltr(uint32_t filter)
+{
+	asm volatile("msr pmccfiltr_el0, %0" : : "r" (filter));
+}
+
+static inline unsigned long get_pmccntr(void)
+{
+	unsigned long cycles;
+
+	asm volatile("mrs %0, pmccntr_el0" : "=r" (cycles));
+	return cycles;
+}
+
+static inline void enable_counter(uint32_t idx)
+{
+	asm volatile("msr pmcntenset_el0, %0" : : "r" (1 << idx));
+}
+
+static inline void disable_counter(uint32_t idx)
+{
+	asm volatile("msr pmcntenclr_el0, %0" : : "r" (1 << idx));
+}
 #endif
 
 struct pmu_data {
@@ -72,11 +140,43 @@ static bool check_pmcr(void)
 	return pmu.implementer != 0;
 }
 
+/*
+ * Ensure that the cycle counter progresses between back-to-back reads.
+ */
+static bool check_cycles_increase(void)
+{
+	struct pmu_data pmu = {{0}};
+
+	enable_counter(ARMV8_PMU_CYCLE_IDX);
+	set_pmccfiltr(0); /* count cycles in EL0, EL1, but not EL2 */
+
+	pmu.enable = 1;
+	set_pmcr(pmu.pmcr_el0);
+
+	for (int i = 0; i < NR_SAMPLES; i++) {
+		unsigned long a, b;
+
+		a = get_pmccntr();
+		b = get_pmccntr();
+
+		if (a >= b) {
+			printf("Read %lu then %lu.\n", a, b);
+			return false;
+		}
+	}
+
+	pmu.enable = 0;
+	set_pmcr(pmu.pmcr_el0);
+
+	return true;
+}
+
 int main(void)
 {
 	report_prefix_push("pmu");
 
 	report("Control register", check_pmcr());
+	report("Monotonically increasing cycle count", check_cycles_increase());
 
 	return report_summary();
 }