From patchwork Fri Dec 2 04:55:25 2022
From: Ricardo Koller
Date: Fri, 2 Dec 2022 04:55:25 +0000
Message-ID: <20221202045527.3646838-2-ricarkol@google.com>
In-Reply-To: <20221202045527.3646838-1-ricarkol@google.com>
Subject: [kvm-unit-tests PATCH 1/3] arm: pmu: Fix overflow checks for PMUv3p5 long counters
To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, andrew.jones@linux.dev
Cc: maz@kernel.org, alexandru.elisei@arm.com, eric.auger@redhat.com, oliver.upton@linux.dev, reijiw@google.com, Ricardo Koller
List-ID: X-Mailing-List: kvm@vger.kernel.org

PMUv3p5 uses 64-bit counters irrespective of whether the PMU is
configured to overflow at 32 or 64 bits. The consequence is that tests
checking counter values after an overflow should not assume the values
wrap at 32 bits: on PMUv3p5 they carry into the upper half of the
64-bit counters.
Fix the tests by correctly checking overflowing counters against the
expected 64-bit value.

Signed-off-by: Ricardo Koller
---
 arm/pmu.c | 29 ++++++++++++++++++-----------
 1 file changed, 18 insertions(+), 11 deletions(-)

diff --git a/arm/pmu.c b/arm/pmu.c
index cd47b14..eeac984 100644
--- a/arm/pmu.c
+++ b/arm/pmu.c
@@ -54,10 +54,10 @@
 #define EXT_COMMON_EVENTS_LOW	0x4000
 #define EXT_COMMON_EVENTS_HIGH	0x403F
 
-#define ALL_SET			0xFFFFFFFF
-#define ALL_CLEAR		0x0
-#define PRE_OVERFLOW		0xFFFFFFF0
-#define PRE_OVERFLOW2		0xFFFFFFDC
+#define ALL_SET			0x00000000FFFFFFFFULL
+#define ALL_CLEAR		0x0000000000000000ULL
+#define PRE_OVERFLOW		0x00000000FFFFFFF0ULL
+#define PRE_OVERFLOW2		0x00000000FFFFFFDCULL
 
 #define PMU_PPI			23
@@ -538,6 +538,7 @@ static void test_mem_access(void)
 static void test_sw_incr(void)
 {
 	uint32_t events[] = {SW_INCR, SW_INCR};
+	uint64_t cntr0;
 	int i;
 
 	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
@@ -572,9 +573,9 @@ static void test_sw_incr(void)
 		write_sysreg(0x3, pmswinc_el0);
 
 	isb();
-	report(read_regn_el0(pmevcntr, 0) == 84, "counter #1 after + 100 SW_INCR");
-	report(read_regn_el0(pmevcntr, 1) == 100,
-	       "counter #0 after + 100 SW_INCR");
+	cntr0 = (pmu.version < ID_DFR0_PMU_V3_8_5) ? 84 : PRE_OVERFLOW + 100;
+	report(read_regn_el0(pmevcntr, 0) == cntr0, "counter #0 after + 100 SW_INCR");
+	report(read_regn_el0(pmevcntr, 1) == 100, "counter #1 after + 100 SW_INCR");
 	report_info("counter values after 100 SW_INCR #0=%ld #1=%ld",
 		    read_regn_el0(pmevcntr, 0), read_regn_el0(pmevcntr, 1));
 	report(read_sysreg(pmovsclr_el0) == 0x1,
@@ -584,6 +585,7 @@ static void test_sw_incr(void)
 static void test_chained_counters(void)
 {
 	uint32_t events[] = {CPU_CYCLES, CHAIN};
+	uint64_t cntr1;
 
 	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
 		return;
@@ -618,13 +620,16 @@ static void test_chained_counters(void)
 	precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E);
 
 	report_info("overflow reg = 0x%lx", read_sysreg(pmovsclr_el0));
-	report(!read_regn_el0(pmevcntr, 1), "CHAIN counter #1 wrapped");
+	cntr1 = (pmu.version < ID_DFR0_PMU_V3_8_5) ? 0 : ALL_SET + 1;
+	report(read_regn_el0(pmevcntr, 1) == cntr1, "CHAIN counter #1 wrapped");
+
+	report(read_sysreg(pmovsclr_el0) == 0x3, "overflow on even and odd counters");
 }
 
 static void test_chained_sw_incr(void)
 {
 	uint32_t events[] = {SW_INCR, CHAIN};
+	uint64_t cntr0, cntr1;
 	int i;
 
 	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
@@ -665,10 +670,12 @@ static void test_chained_sw_incr(void)
 		write_sysreg(0x1, pmswinc_el0);
 
 	isb();
+	cntr0 = (pmu.version < ID_DFR0_PMU_V3_8_5) ? 0 : ALL_SET + 1;
+	cntr1 = (pmu.version < ID_DFR0_PMU_V3_8_5) ? 84 : PRE_OVERFLOW + 100;
 	report((read_sysreg(pmovsclr_el0) == 0x3) &&
-	       (read_regn_el0(pmevcntr, 1) == 0) &&
-	       (read_regn_el0(pmevcntr, 0) == 84),
-	       "expected overflows and values after 100 SW_INCR/CHAIN");
+	       (read_regn_el0(pmevcntr, 1) == cntr0) &&
+	       (read_regn_el0(pmevcntr, 0) == cntr1),
+	       "expected overflows and values after 100 SW_INCR/CHAIN");
 	report_info("overflow=0x%lx, #0=%ld #1=%ld", read_sysreg(pmovsclr_el0),
 		    read_regn_el0(pmevcntr, 0), read_regn_el0(pmevcntr, 1));
 }

From patchwork Fri Dec 2 04:55:26 2022
From: Ricardo Koller
Date: Fri, 2 Dec 2022 04:55:26 +0000
Message-ID: <20221202045527.3646838-3-ricarkol@google.com>
In-Reply-To: <20221202045527.3646838-1-ricarkol@google.com>
Subject: [kvm-unit-tests PATCH 2/3] arm: pmu: Prepare for testing 64-bit overflows
To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, andrew.jones@linux.dev
Cc: maz@kernel.org, alexandru.elisei@arm.com, eric.auger@redhat.com, oliver.upton@linux.dev, reijiw@google.com, Ricardo Koller
List-ID: X-Mailing-List: kvm@vger.kernel.org

PMUv3p5 adds a knob, PMCR_EL0.LP == 1, that allows overflowing at 64
bits instead of 32. Prepare by doing these three things:

1. Add a "bool overflow_at_64bits" argument to all tests checking
   overflows.
2. Extend satisfy_prerequisites() to check if the machine supports
   "overflow_at_64bits".
3. Refactor the test invocations to use the new run_test(), which adds
   a report prefix indicating whether the test uses 64- or 32-bit
   overflows.

A subsequent commit will actually add the 64-bit overflow tests.

Signed-off-by: Ricardo Koller
---
 arm/pmu.c | 99 +++++++++++++++++++++++++++++++------------------------
 1 file changed, 56 insertions(+), 43 deletions(-)

diff --git a/arm/pmu.c b/arm/pmu.c
index eeac984..59e5bfe 100644
--- a/arm/pmu.c
+++ b/arm/pmu.c
@@ -164,13 +164,13 @@ static void pmu_reset(void)
 /* event counter tests only implemented for aarch64 */
 static void test_event_introspection(void) {}
 static void test_event_counter_config(void) {}
-static void test_basic_event_count(void) {}
-static void test_mem_access(void) {}
-static void test_sw_incr(void) {}
-static void test_chained_counters(void) {}
-static void test_chained_sw_incr(void) {}
-static void test_chain_promotion(void) {}
-static void test_overflow_interrupt(void) {}
+static void test_basic_event_count(bool overflow_at_64bits) {}
+static void test_mem_access(bool overflow_at_64bits) {}
+static void test_sw_incr(bool overflow_at_64bits) {}
+static void test_chained_counters(bool overflow_at_64bits) {}
+static void test_chained_sw_incr(bool overflow_at_64bits) {}
+static void test_chain_promotion(bool overflow_at_64bits) {}
+static void test_overflow_interrupt(bool overflow_at_64bits) {}
 
 #elif defined(__aarch64__)
 #define ID_AA64DFR0_PERFMON_SHIFT 8
@@ -399,7 +399,8 @@ static void test_event_counter_config(void)
 	       "read of a counter programmed with unsupported event");
 }
 
-static bool satisfy_prerequisites(uint32_t *events, unsigned int nb_events)
+static bool satisfy_prerequisites(uint32_t *events, unsigned int nb_events,
+				  bool overflow_at_64bits)
 {
 	int i;
 
@@ -416,16 +417,23 @@ static bool satisfy_prerequisites(uint32_t *events, unsigned int nb_events)
 			return false;
 		}
 	}
+
+	if (overflow_at_64bits && pmu.version < ID_DFR0_PMU_V3_8_5) {
+		report_skip("Skip test as 64 overflows need FEAT_PMUv3p5");
+		return false;
+	}
+
 	return true;
 }
 
-static void test_basic_event_count(void)
+static void test_basic_event_count(bool overflow_at_64bits)
 {
 	uint32_t implemented_counter_mask, non_implemented_counter_mask;
 	uint32_t counter_mask;
 	uint32_t events[] = {CPU_CYCLES, INST_RETIRED};
 
-	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
+	if (!satisfy_prerequisites(events, ARRAY_SIZE(events),
+				   overflow_at_64bits))
 		return;
 
 	implemented_counter_mask = BIT(pmu.nb_implemented_counters) - 1;
@@ -499,12 +507,13 @@ static void test_basic_event_count(void)
 	       "check overflow happened on #0 only");
 }
 
-static void test_mem_access(void)
+static void test_mem_access(bool overflow_at_64bits)
 {
 	void *addr = malloc(PAGE_SIZE);
 	uint32_t events[] = {MEM_ACCESS, MEM_ACCESS};
 
-	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
+	if (!satisfy_prerequisites(events, ARRAY_SIZE(events),
+				   overflow_at_64bits))
 		return;
 
 	pmu_reset();
@@ -535,13 +544,14 @@ static void test_mem_access(void)
 		    read_sysreg(pmovsclr_el0));
 }
 
-static void test_sw_incr(void)
+static void test_sw_incr(bool overflow_at_64bits)
 {
 	uint32_t events[] = {SW_INCR, SW_INCR};
 	uint64_t cntr0;
 	int i;
 
-	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
+	if (!satisfy_prerequisites(events, ARRAY_SIZE(events),
+				   overflow_at_64bits))
 		return;
 
 	pmu_reset();
@@ -582,12 +592,13 @@ static void test_sw_incr(void)
 	       "overflow on counter #0 after 100 SW_INCR");
 }
 
-static void test_chained_counters(void)
+static void test_chained_counters(bool overflow_at_64bits)
 {
 	uint32_t events[] = {CPU_CYCLES, CHAIN};
 	uint64_t cntr1;
 
-	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
+	if (!satisfy_prerequisites(events, ARRAY_SIZE(events),
+				   overflow_at_64bits))
 		return;
 
 	pmu_reset();
@@ -626,13 +637,14 @@ static void test_chained_counters(void)
 	report(read_sysreg(pmovsclr_el0) == 0x3, "overflow on even and odd counters");
 }
 
-static void test_chained_sw_incr(void)
+static void test_chained_sw_incr(bool overflow_at_64bits)
 {
 	uint32_t events[] = {SW_INCR, CHAIN};
 	uint64_t cntr0, cntr1;
 	int i;
 
-	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
+	if (!satisfy_prerequisites(events, ARRAY_SIZE(events),
+				   overflow_at_64bits))
 		return;
 
 	pmu_reset();
@@ -680,12 +692,13 @@ static void test_chained_sw_incr(void)
 		    read_regn_el0(pmevcntr, 0), read_regn_el0(pmevcntr, 1));
 }
 
-static void test_chain_promotion(void)
+static void test_chain_promotion(bool overflow_at_64bits)
 {
 	uint32_t events[] = {MEM_ACCESS, CHAIN};
 	void *addr = malloc(PAGE_SIZE);
 
-	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
+	if (!satisfy_prerequisites(events, ARRAY_SIZE(events),
+				   overflow_at_64bits))
 		return;
 
 	/* Only enable CHAIN counter */
@@ -829,13 +842,14 @@ static bool expect_interrupts(uint32_t bitmap)
 	return true;
 }
 
-static void test_overflow_interrupt(void)
+static void test_overflow_interrupt(bool overflow_at_64bits)
 {
 	uint32_t events[] = {MEM_ACCESS, SW_INCR};
 	void *addr = malloc(PAGE_SIZE);
 	int i;
 
-	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
+	if (!satisfy_prerequisites(events, ARRAY_SIZE(events),
+				   overflow_at_64bits))
 		return;
 
 	gic_enable_defaults();
@@ -1059,6 +1073,19 @@ static bool pmu_probe(void)
 	return true;
 }
 
+static void run_test(char *name, void (*test)(bool), bool overflow_at_64bits)
+{
+	const char *prefix = overflow_at_64bits ? "64-bit" : "32-bit";
+
+	report_prefix_push(name);
+	report_prefix_push(prefix);
+
+	test(overflow_at_64bits);
+
+	report_prefix_pop();
+	report_prefix_pop();
+}
+
 int main(int argc, char *argv[])
 {
 	int cpi = 0;
@@ -1091,33 +1118,19 @@ int main(int argc, char *argv[])
 		test_event_counter_config();
 		report_prefix_pop();
 	} else if (strcmp(argv[1], "pmu-basic-event-count") == 0) {
-		report_prefix_push(argv[1]);
-		test_basic_event_count();
-		report_prefix_pop();
+		run_test(argv[1], test_basic_event_count, false);
 	} else if (strcmp(argv[1], "pmu-mem-access") == 0) {
-		report_prefix_push(argv[1]);
-		test_mem_access();
-		report_prefix_pop();
+		run_test(argv[1], test_mem_access, false);
 	} else if (strcmp(argv[1], "pmu-sw-incr") == 0) {
-		report_prefix_push(argv[1]);
-		test_sw_incr();
-		report_prefix_pop();
+		run_test(argv[1], test_sw_incr, false);
 	} else if (strcmp(argv[1], "pmu-chained-counters") == 0) {
-		report_prefix_push(argv[1]);
-		test_chained_counters();
-		report_prefix_pop();
+		run_test(argv[1], test_chained_counters, false);
 	} else if (strcmp(argv[1], "pmu-chained-sw-incr") == 0) {
-		report_prefix_push(argv[1]);
-		test_chained_sw_incr();
-		report_prefix_pop();
+		run_test(argv[1], test_chained_sw_incr, false);
 	} else if (strcmp(argv[1], "pmu-chain-promotion") == 0) {
-		report_prefix_push(argv[1]);
-		test_chain_promotion();
-		report_prefix_pop();
+		run_test(argv[1], test_chain_promotion, false);
 	} else if (strcmp(argv[1], "pmu-overflow-interrupt") == 0) {
-		report_prefix_push(argv[1]);
-		test_overflow_interrupt();
-		report_prefix_pop();
+		run_test(argv[1], test_overflow_interrupt, false);
 	} else {
 		report_abort("Unknown sub-test '%s'", argv[1]);
 	}

From patchwork Fri Dec 2 04:55:27 2022
From: Ricardo Koller
Date: Fri, 2 Dec 2022 04:55:27 +0000
Message-ID: <20221202045527.3646838-4-ricarkol@google.com>
In-Reply-To: <20221202045527.3646838-1-ricarkol@google.com>
Subject: [kvm-unit-tests PATCH 3/3] arm: pmu: Add tests for 64-bit overflows
To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, andrew.jones@linux.dev
Cc: maz@kernel.org, alexandru.elisei@arm.com, eric.auger@redhat.com, oliver.upton@linux.dev, reijiw@google.com, Ricardo Koller
List-ID: X-Mailing-List: kvm@vger.kernel.org

Modify all tests checking overflows to support both 32-bit
(PMCR_EL0.LP == 0) and 64-bit overflows (PMCR_EL0.LP == 1). 64-bit
overflows are only supported on PMUv3p5.

Note that the chained tests do not implement "overflow_at_64bits ==
true": there are no CHAIN events when PMCR_EL0.LP == 1 (for details,
see the AArch64.IncrementEventCounter() pseudocode in the Arm ARM DDI
0487H.a, J1.1.1 "aarch64/debug").
Signed-off-by: Ricardo Koller
---
 arm/pmu.c | 91 ++++++++++++++++++++++++++++++++++++-------------------
 1 file changed, 60 insertions(+), 31 deletions(-)

diff --git a/arm/pmu.c b/arm/pmu.c
index 59e5bfe..3cb563b 100644
--- a/arm/pmu.c
+++ b/arm/pmu.c
@@ -28,6 +28,7 @@
 #define PMU_PMCR_X		(1 << 4)
 #define PMU_PMCR_DP		(1 << 5)
 #define PMU_PMCR_LC		(1 << 6)
+#define PMU_PMCR_LP		(1 << 7)
 #define PMU_PMCR_N_SHIFT	11
 #define PMU_PMCR_N_MASK		0x1f
 #define PMU_PMCR_ID_SHIFT	16
@@ -55,10 +56,15 @@
 #define EXT_COMMON_EVENTS_HIGH	0x403F
 
 #define ALL_SET			0x00000000FFFFFFFFULL
+#define ALL_SET_64		0xFFFFFFFFFFFFFFFFULL
 #define ALL_CLEAR		0x0000000000000000ULL
 #define PRE_OVERFLOW		0x00000000FFFFFFF0ULL
+#define PRE_OVERFLOW_64		0xFFFFFFFFFFFFFFF0ULL
 #define PRE_OVERFLOW2		0x00000000FFFFFFDCULL
 
+#define PRE_OVERFLOW_AT(_64b)	(_64b ? PRE_OVERFLOW_64 : PRE_OVERFLOW)
+#define ALL_SET_AT(_64b)	(_64b ? ALL_SET_64 : ALL_SET)
+
 #define PMU_PPI			23
 
 struct pmu {
@@ -429,8 +435,10 @@ static bool satisfy_prerequisites(uint32_t *events, unsigned int nb_events,
 static void test_basic_event_count(bool overflow_at_64bits)
 {
 	uint32_t implemented_counter_mask, non_implemented_counter_mask;
-	uint32_t counter_mask;
+	uint64_t pre_overflow = PRE_OVERFLOW_AT(overflow_at_64bits);
+	uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
 	uint32_t events[] = {CPU_CYCLES, INST_RETIRED};
+	uint32_t counter_mask;
 
 	if (!satisfy_prerequisites(events, ARRAY_SIZE(events),
 				   overflow_at_64bits))
@@ -452,13 +460,13 @@ static void test_basic_event_count(bool overflow_at_64bits)
 	 * clear cycle and all event counters and allow counter enablement
 	 * through PMCNTENSET. LC is RES1.
 	 */
-	set_pmcr(pmu.pmcr_ro | PMU_PMCR_LC | PMU_PMCR_C | PMU_PMCR_P);
+	set_pmcr(pmu.pmcr_ro | PMU_PMCR_LC | PMU_PMCR_C | PMU_PMCR_P | pmcr_lp);
 	isb();
-	report(get_pmcr() == (pmu.pmcr_ro | PMU_PMCR_LC), "pmcr: reset counters");
+	report(get_pmcr() == (pmu.pmcr_ro | PMU_PMCR_LC | pmcr_lp), "pmcr: reset counters");
 
 	/* Preset counter #0 to pre overflow value to trigger an overflow */
-	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
-	report(read_regn_el0(pmevcntr, 0) == PRE_OVERFLOW,
+	write_regn_el0(pmevcntr, 0, pre_overflow);
+	report(read_regn_el0(pmevcntr, 0) == pre_overflow,
 	       "counter #0 preset to pre-overflow value");
 	report(!read_regn_el0(pmevcntr, 1), "counter #1 is 0");
 
@@ -511,6 +519,8 @@ static void test_mem_access(bool overflow_at_64bits)
 {
 	void *addr = malloc(PAGE_SIZE);
 	uint32_t events[] = {MEM_ACCESS, MEM_ACCESS};
+	uint64_t pre_overflow = PRE_OVERFLOW_AT(overflow_at_64bits);
+	uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
 
 	if (!satisfy_prerequisites(events, ARRAY_SIZE(events),
 				   overflow_at_64bits))
@@ -522,7 +532,7 @@ static void test_mem_access(bool overflow_at_64bits)
 	write_regn_el0(pmevtyper, 1, MEM_ACCESS | PMEVTYPER_EXCLUDE_EL0);
 	write_sysreg_s(0x3, PMCNTENSET_EL0);
 	isb();
-	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
+	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
 	report_info("counter #0 is %ld (MEM_ACCESS)", read_regn_el0(pmevcntr, 0));
 	report_info("counter #1 is %ld (MEM_ACCESS)", read_regn_el0(pmevcntr, 1));
 	/* We may measure more than 20 mem access depending on the core */
@@ -532,11 +542,11 @@ static void test_mem_access(bool overflow_at_64bits)
 
 	pmu_reset();
 
-	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
-	write_regn_el0(pmevcntr, 1, PRE_OVERFLOW);
+	write_regn_el0(pmevcntr, 0, pre_overflow);
+	write_regn_el0(pmevcntr, 1, pre_overflow);
 	write_sysreg_s(0x3, PMCNTENSET_EL0);
 	isb();
-	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
+	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
 	report(read_sysreg(pmovsclr_el0) == 0x3,
 	       "Ran 20 mem accesses with expected overflows on both counters");
 	report_info("cnt#0 = %ld cnt#1=%ld overflow=0x%lx",
@@ -546,6 +556,8 @@ static void test_mem_access(bool overflow_at_64bits)
 
 static void test_sw_incr(bool overflow_at_64bits)
 {
+	uint64_t pre_overflow = PRE_OVERFLOW_AT(overflow_at_64bits);
+	uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
 	uint32_t events[] = {SW_INCR, SW_INCR};
 	uint64_t cntr0;
 	int i;
@@ -561,7 +573,7 @@ static void test_sw_incr(bool overflow_at_64bits)
 
 	/* enable counters #0 and #1 */
 	write_sysreg_s(0x3, PMCNTENSET_EL0);
-	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
+	write_regn_el0(pmevcntr, 0, pre_overflow);
 	isb();
 
 	for (i = 0; i < 100; i++)
@@ -569,21 +581,21 @@ static void test_sw_incr(bool overflow_at_64bits)
 
 	isb();
 	report_info("SW_INCR counter #0 has value %ld", read_regn_el0(pmevcntr, 0));
-	report(read_regn_el0(pmevcntr, 0) == PRE_OVERFLOW,
+	report(read_regn_el0(pmevcntr, 0) == pre_overflow,
 	       "PWSYNC does not increment if PMCR.E is unset");
 
 	pmu_reset();
 
-	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
+	write_regn_el0(pmevcntr, 0, pre_overflow);
 	write_sysreg_s(0x3, PMCNTENSET_EL0);
-	set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
+	set_pmcr(pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
 	isb();
 
 	for (i = 0; i < 100; i++)
 		write_sysreg(0x3, pmswinc_el0);
 
 	isb();
-	cntr0 = (pmu.version < ID_DFR0_PMU_V3_8_5) ? 84 : PRE_OVERFLOW + 100;
+	cntr0 = (pmu.version < ID_DFR0_PMU_V3_8_5) ? 84 : pre_overflow + 100;
 	report(read_regn_el0(pmevcntr, 0) == cntr0, "counter #0 after + 100 SW_INCR");
 	report(read_regn_el0(pmevcntr, 1) == 100, "counter #1 after + 100 SW_INCR");
 	report_info("counter values after 100 SW_INCR #0=%ld #1=%ld",
@@ -844,6 +856,9 @@ static bool expect_interrupts(uint32_t bitmap)
 
 static void test_overflow_interrupt(bool overflow_at_64bits)
 {
+	uint64_t pre_overflow = PRE_OVERFLOW_AT(overflow_at_64bits);
+	uint64_t all_set = ALL_SET_AT(overflow_at_64bits);
+	uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
 	uint32_t events[] = {MEM_ACCESS, SW_INCR};
 	void *addr = malloc(PAGE_SIZE);
 	int i;
@@ -862,16 +877,16 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
 	write_regn_el0(pmevtyper, 0, MEM_ACCESS | PMEVTYPER_EXCLUDE_EL0);
 	write_regn_el0(pmevtyper, 1, SW_INCR | PMEVTYPER_EXCLUDE_EL0);
 	write_sysreg_s(0x3, PMCNTENSET_EL0);
-	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
-	write_regn_el0(pmevcntr, 1, PRE_OVERFLOW);
+	write_regn_el0(pmevcntr, 0, pre_overflow);
+	write_regn_el0(pmevcntr, 1, pre_overflow);
 	isb();
 
 	/* interrupts are disabled (PMINTENSET_EL1 == 0) */
-	mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E);
+	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
 	report(expect_interrupts(0), "no overflow interrupt after preset");
 
-	set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
+	set_pmcr(pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
 	isb();
 
 	for (i = 0; i < 100; i++)
@@ -886,12 +901,12 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
 
 	pmu_reset_stats();
 
-	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
-	write_regn_el0(pmevcntr, 1, PRE_OVERFLOW);
+	write_regn_el0(pmevcntr, 0, pre_overflow);
+	write_regn_el0(pmevcntr, 1, pre_overflow);
 	write_sysreg(ALL_SET, pmintenset_el1);
 	isb();
 
-	mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E);
+	mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
 	for (i = 0; i < 100; i++)
 		write_sysreg(0x3, pmswinc_el0);
 
@@ -900,25 +915,35 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
 	report(expect_interrupts(0x3),
 	       "overflow interrupts expected on #0 and #1");
 
-	/* promote to 64-b */
+	/*
+	 * promote to 64-b:
+	 *
+	 * This only applies to the !overflow_at_64bits case, as
+	 * overflow_at_64bits doesn't implement CHAIN events. The
+	 * overflow_at_64bits case just checks that chained counters are
+	 * not incremented when PMCR.LP == 1.
+	 */
 	pmu_reset_stats();
 
 	write_regn_el0(pmevtyper, 1, CHAIN | PMEVTYPER_EXCLUDE_EL0);
-	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
+	write_regn_el0(pmevcntr, 0, pre_overflow);
 	isb();
-	mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E);
-	report(expect_interrupts(0x1),
-	       "expect overflow interrupt on 32b boundary");
+	mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
+	report(expect_interrupts(0x1), "expect overflow interrupt");
 
 	/* overflow on odd counter */
 	pmu_reset_stats();
-	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
-	write_regn_el0(pmevcntr, 1, ALL_SET);
+	write_regn_el0(pmevcntr, 0, pre_overflow);
+	write_regn_el0(pmevcntr, 1, all_set);
 	isb();
-	mem_access_loop(addr, 400, pmu.pmcr_ro | PMU_PMCR_E);
-	report(expect_interrupts(0x3),
-	       "expect overflow interrupt on even and odd counter");
+	mem_access_loop(addr, 400, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
+	if (overflow_at_64bits)
+		report(expect_interrupts(0x1),
+		       "expect overflow interrupt on even counter");
+	else
+		report(expect_interrupts(0x3),
+		       "expect overflow interrupt on even and odd counter");
 }
 
 #endif
@@ -1119,10 +1144,13 @@ int main(int argc, char *argv[])
 		report_prefix_pop();
 	} else if (strcmp(argv[1], "pmu-basic-event-count") == 0) {
 		run_test(argv[1], test_basic_event_count, false);
+		run_test(argv[1], test_basic_event_count, true);
 	} else if (strcmp(argv[1], "pmu-mem-access") == 0) {
 		run_test(argv[1], test_mem_access, false);
+		run_test(argv[1], test_mem_access, true);
	} else if (strcmp(argv[1], "pmu-sw-incr") == 0) {
 		run_test(argv[1], test_sw_incr, false);
+		run_test(argv[1], test_sw_incr, true);
 	} else if (strcmp(argv[1], "pmu-chained-counters") == 0) {
 		run_test(argv[1], test_chained_counters, false);
 	} else if (strcmp(argv[1], "pmu-chained-sw-incr") == 0) {
@@ -1131,6 +1159,7 @@ int main(int argc, char *argv[])
 		run_test(argv[1], test_chain_promotion, false);
 	} else if (strcmp(argv[1], "pmu-overflow-interrupt") == 0) {
 		run_test(argv[1], test_overflow_interrupt, false);
+		run_test(argv[1], test_overflow_interrupt, true);
 	} else {
 		report_abort("Unknown sub-test '%s'", argv[1]);
 	}