From patchwork Thu Jan 26 16:53:46 2023
X-Patchwork-Submitter: Ricardo Koller
X-Patchwork-Id: 13117467
Date: Thu, 26 Jan 2023 16:53:46 +0000
In-Reply-To: <20230126165351.2561582-1-ricarkol@google.com>
References: <20230126165351.2561582-1-ricarkol@google.com>
Message-ID: <20230126165351.2561582-2-ricarkol@google.com>
Subject: [kvm-unit-tests PATCH v4 1/6] arm: pmu: Fix overflow checks for PMUv3p5 long counters
From: Ricardo Koller
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev, andrew.jones@linux.dev
Cc: maz@kernel.org, alexandru.elisei@arm.com, eric.auger@redhat.com,
 oliver.upton@linux.dev, reijiw@google.com, Ricardo Koller
X-Mailing-List: kvm@vger.kernel.org

PMUv3p5 uses 64-bit counters irrespective of whether the PMU is
configured for overflowing at 32 or 64 bits.
The consequence is that tests that check the counter values after
overflowing should not assume that values will wrap around at 32 bits:
they overflow into the other half of the 64-bit counters on PMUv3p5.

Fix the tests by checking overflowing counters against the expected
64-bit value.

Signed-off-by: Ricardo Koller
Reviewed-by: Eric Auger
Reviewed-by: Oliver Upton
---
 arm/pmu.c | 38 ++++++++++++++++++++++++++++----------
 1 file changed, 28 insertions(+), 10 deletions(-)

diff --git a/arm/pmu.c b/arm/pmu.c
index cd47b14..7f0794d 100644
--- a/arm/pmu.c
+++ b/arm/pmu.c
@@ -54,10 +54,10 @@
 #define EXT_COMMON_EVENTS_LOW	0x4000
 #define EXT_COMMON_EVENTS_HIGH	0x403F
 
-#define ALL_SET			0xFFFFFFFF
-#define ALL_CLEAR		0x0
-#define PRE_OVERFLOW		0xFFFFFFF0
-#define PRE_OVERFLOW2		0xFFFFFFDC
+#define ALL_SET			0x00000000FFFFFFFFULL
+#define ALL_CLEAR		0x0000000000000000ULL
+#define PRE_OVERFLOW		0x00000000FFFFFFF0ULL
+#define PRE_OVERFLOW2		0x00000000FFFFFFDCULL
 
 #define PMU_PPI			23
 
@@ -419,6 +419,22 @@ static bool satisfy_prerequisites(uint32_t *events, unsigned int nb_events)
 	return true;
 }
 
+static uint64_t pmevcntr_mask(void)
+{
+	/*
+	 * Bits [63:0] are always incremented for 64-bit counters,
+	 * even if the PMU is configured to generate an overflow at
+	 * bits [31:0]
+	 *
+	 * For more details see the AArch64.IncrementEventCounter()
+	 * pseudo-code in the ARM ARM DDI 0487I.a, section J1.1.1.
+	 */
+	if (pmu.version >= ID_DFR0_PMU_V3_8_5)
+		return ~0;
+
+	return (uint32_t)~0;
+}
+
 static void test_basic_event_count(void)
 {
 	uint32_t implemented_counter_mask, non_implemented_counter_mask;
@@ -538,6 +554,7 @@ static void test_mem_access(void)
 static void test_sw_incr(void)
 {
 	uint32_t events[] = {SW_INCR, SW_INCR};
+	uint64_t cntr0 = (PRE_OVERFLOW + 100) & pmevcntr_mask();
 	int i;
 
 	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
@@ -572,9 +589,8 @@ static void test_sw_incr(void)
 	write_sysreg(0x3, pmswinc_el0);
 	isb();
 
-	report(read_regn_el0(pmevcntr, 0) == 84, "counter #1 after + 100 SW_INCR");
-	report(read_regn_el0(pmevcntr, 1) == 100,
-	       "counter #0 after + 100 SW_INCR");
+	report(read_regn_el0(pmevcntr, 0) == cntr0, "counter #0 after + 100 SW_INCR");
+	report(read_regn_el0(pmevcntr, 1) == 100, "counter #1 after + 100 SW_INCR");
 	report_info("counter values after 100 SW_INCR #0=%ld #1=%ld",
 		    read_regn_el0(pmevcntr, 0), read_regn_el0(pmevcntr, 1));
 	report(read_sysreg(pmovsclr_el0) == 0x1,
@@ -625,6 +641,8 @@ static void test_chained_counters(void)
 static void test_chained_sw_incr(void)
 {
 	uint32_t events[] = {SW_INCR, CHAIN};
+	uint64_t cntr0 = (PRE_OVERFLOW + 100) & pmevcntr_mask();
+	uint64_t cntr1 = (ALL_SET + 1) & pmevcntr_mask();
 	int i;
 
 	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
@@ -666,9 +684,9 @@ static void test_chained_sw_incr(void)
 	isb();
 
 	report((read_sysreg(pmovsclr_el0) == 0x3) &&
-	       (read_regn_el0(pmevcntr, 1) == 0) &&
-	       (read_regn_el0(pmevcntr, 0) == 84),
-	       "expected overflows and values after 100 SW_INCR/CHAIN");
+	       (read_regn_el0(pmevcntr, 0) == cntr0) &&
+	       (read_regn_el0(pmevcntr, 1) == cntr1),
+	       "expected overflows and values after 100 SW_INCR/CHAIN");
 	report_info("overflow=0x%lx, #0=%ld #1=%ld", read_sysreg(pmovsclr_el0),
 		    read_regn_el0(pmevcntr, 0), read_regn_el0(pmevcntr, 1));
 }

From patchwork Thu Jan 26 16:53:47 2023
X-Patchwork-Submitter: Ricardo Koller
X-Patchwork-Id: 13117468
Date: Thu, 26 Jan 2023 16:53:47 +0000
In-Reply-To: <20230126165351.2561582-1-ricarkol@google.com>
References: <20230126165351.2561582-1-ricarkol@google.com>
Message-ID: <20230126165351.2561582-3-ricarkol@google.com>
Subject: [kvm-unit-tests PATCH v4 2/6] arm: pmu: Prepare for testing 64-bit overflows
From: Ricardo Koller
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev, andrew.jones@linux.dev
Cc: maz@kernel.org, alexandru.elisei@arm.com, eric.auger@redhat.com,
 oliver.upton@linux.dev, reijiw@google.com, Ricardo Koller
X-Mailing-List: kvm@vger.kernel.org

PMUv3p5 adds a knob, PMCR_EL0.LP == 1, that allows overflowing at
64 bits instead of 32. Prepare for it by doing these three things:

1. Add a "bool overflow_at_64bits" argument to all tests checking
   overflows.
2. Extend satisfy_prerequisites() to check if the machine supports
   "overflow_at_64bits".
3. Refactor the test invocations to use the new run_test(), which adds
   a report prefix indicating whether the test uses 64 or 32-bit
   overflows.
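Step 2's machine check reduces to comparing the PMU version field against the PMUv3p5 encoding. The following self-contained sketch illustrates just that comparison; the constant name and the value 6 (the 0b0110 PMUVer encoding for "PMUv3 for Armv8.5") are assumptions for illustration, not the actual kvm-unit-tests definitions:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Illustrative stand-in for the PMUv3p5 version threshold; the real
 * test compares pmu.version against ID_DFR0_PMU_V3_8_5.
 */
#define PMU_VER_3_8_5	6

/* 64-bit overflow tests need FEAT_PMUv3p5 or newer. */
static bool supports_64bit_overflows(unsigned int pmu_version)
{
	return pmu_version >= PMU_VER_3_8_5;
}
```

On a machine reporting an older PMU version, the real check in this patch reports a skip rather than silently returning false.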
A subsequent commit will actually add the 64-bit overflow tests.

Signed-off-by: Ricardo Koller
Reviewed-by: Reiji Watanabe
Reviewed-by: Eric Auger
---
 arm/pmu.c | 99 +++++++++++++++++++++++++++++++++----------------------
 1 file changed, 60 insertions(+), 39 deletions(-)

diff --git a/arm/pmu.c b/arm/pmu.c
index 7f0794d..06cbd73 100644
--- a/arm/pmu.c
+++ b/arm/pmu.c
@@ -164,13 +164,13 @@ static void pmu_reset(void)
 /* event counter tests only implemented for aarch64 */
 static void test_event_introspection(void) {}
 static void test_event_counter_config(void) {}
-static void test_basic_event_count(void) {}
-static void test_mem_access(void) {}
-static void test_sw_incr(void) {}
-static void test_chained_counters(void) {}
-static void test_chained_sw_incr(void) {}
-static void test_chain_promotion(void) {}
-static void test_overflow_interrupt(void) {}
+static void test_basic_event_count(bool overflow_at_64bits) {}
+static void test_mem_access(bool overflow_at_64bits) {}
+static void test_sw_incr(bool overflow_at_64bits) {}
+static void test_chained_counters(bool unused) {}
+static void test_chained_sw_incr(bool unused) {}
+static void test_chain_promotion(bool unused) {}
+static void test_overflow_interrupt(bool overflow_at_64bits) {}
 
 #elif defined(__aarch64__)
 #define ID_AA64DFR0_PERFMON_SHIFT 8
@@ -435,13 +435,24 @@ static uint64_t pmevcntr_mask(void)
 	return (uint32_t)~0;
 }
 
-static void test_basic_event_count(void)
+static bool check_overflow_prerequisites(bool overflow_at_64bits)
+{
+	if (overflow_at_64bits && pmu.version < ID_DFR0_PMU_V3_8_5) {
+		report_skip("Skip test as 64 overflows need FEAT_PMUv3p5");
+		return false;
+	}
+
+	return true;
+}
+
+static void test_basic_event_count(bool overflow_at_64bits)
 {
 	uint32_t implemented_counter_mask, non_implemented_counter_mask;
 	uint32_t counter_mask;
 	uint32_t events[] = {CPU_CYCLES, INST_RETIRED};
 
-	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
+	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
+	    !check_overflow_prerequisites(overflow_at_64bits))
 		return;
 
 	implemented_counter_mask = BIT(pmu.nb_implemented_counters) - 1;
@@ -515,12 +526,13 @@ static void test_basic_event_count(void)
 	       "check overflow happened on #0 only");
 }
 
-static void test_mem_access(void)
+static void test_mem_access(bool overflow_at_64bits)
 {
 	void *addr = malloc(PAGE_SIZE);
 	uint32_t events[] = {MEM_ACCESS, MEM_ACCESS};
 
-	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
+	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
+	    !check_overflow_prerequisites(overflow_at_64bits))
 		return;
 
 	pmu_reset();
@@ -551,13 +563,14 @@ static void test_mem_access(void)
 		    read_sysreg(pmovsclr_el0));
 }
 
-static void test_sw_incr(void)
+static void test_sw_incr(bool overflow_at_64bits)
 {
 	uint32_t events[] = {SW_INCR, SW_INCR};
 	uint64_t cntr0 = (PRE_OVERFLOW + 100) & pmevcntr_mask();
 	int i;
 
-	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
+	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
+	    !check_overflow_prerequisites(overflow_at_64bits))
 		return;
 
 	pmu_reset();
@@ -597,7 +610,7 @@ static void test_sw_incr(void)
 	       "overflow on counter #0 after 100 SW_INCR");
 }
 
-static void test_chained_counters(void)
+static void test_chained_counters(bool unused)
 {
 	uint32_t events[] = {CPU_CYCLES, CHAIN};
 
@@ -638,7 +651,7 @@ static void test_chained_counters(void)
 	report(read_sysreg(pmovsclr_el0) == 0x3, "overflow on even and odd counters");
 }
 
-static void test_chained_sw_incr(void)
+static void test_chained_sw_incr(bool unused)
 {
 	uint32_t events[] = {SW_INCR, CHAIN};
 	uint64_t cntr0 = (PRE_OVERFLOW + 100) & pmevcntr_mask();
@@ -691,7 +704,7 @@ static void test_chained_sw_incr(void)
 		    read_regn_el0(pmevcntr, 0), read_regn_el0(pmevcntr, 1));
 }
 
-static void test_chain_promotion(void)
+static void test_chain_promotion(bool unused)
 {
 	uint32_t events[] = {MEM_ACCESS, CHAIN};
 	void *addr = malloc(PAGE_SIZE);
@@ -840,13 +853,14 @@ static bool expect_interrupts(uint32_t bitmap)
 	return true;
 }
 
-static void test_overflow_interrupt(void)
+static void test_overflow_interrupt(bool overflow_at_64bits)
 {
 	uint32_t events[] = {MEM_ACCESS, SW_INCR};
 	void *addr = malloc(PAGE_SIZE);
 	int i;
 
-	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
+	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
+	    !check_overflow_prerequisites(overflow_at_64bits))
 		return;
 
 	gic_enable_defaults();
@@ -1070,6 +1084,27 @@ static bool pmu_probe(void)
 	return true;
 }
 
+static void run_test(const char *name, const char *prefix,
+		     void (*test)(bool), void *arg)
+{
+	report_prefix_push(name);
+	report_prefix_push(prefix);
+
+	test(arg);
+
+	report_prefix_pop();
+	report_prefix_pop();
+}
+
+static void run_event_test(char *name, void (*test)(bool),
+			   bool overflow_at_64bits)
+{
+	const char *prefix = overflow_at_64bits ? "64-bit overflows"
+						: "32-bit overflows";
+
+	run_test(name, prefix, test, (void *)overflow_at_64bits);
+}
+
 int main(int argc, char *argv[])
 {
 	int cpi = 0;
@@ -1102,33 +1137,19 @@ int main(int argc, char *argv[])
 		test_event_counter_config();
 		report_prefix_pop();
 	} else if (strcmp(argv[1], "pmu-basic-event-count") == 0) {
-		report_prefix_push(argv[1]);
-		test_basic_event_count();
-		report_prefix_pop();
+		run_event_test(argv[1], test_basic_event_count, false);
 	} else if (strcmp(argv[1], "pmu-mem-access") == 0) {
-		report_prefix_push(argv[1]);
-		test_mem_access();
-		report_prefix_pop();
+		run_event_test(argv[1], test_mem_access, false);
 	} else if (strcmp(argv[1], "pmu-sw-incr") == 0) {
-		report_prefix_push(argv[1]);
-		test_sw_incr();
-		report_prefix_pop();
+		run_event_test(argv[1], test_sw_incr, false);
 	} else if (strcmp(argv[1], "pmu-chained-counters") == 0) {
-		report_prefix_push(argv[1]);
-		test_chained_counters();
-		report_prefix_pop();
+		run_event_test(argv[1], test_chained_counters, false);
 	} else if (strcmp(argv[1], "pmu-chained-sw-incr") == 0) {
-		report_prefix_push(argv[1]);
-		test_chained_sw_incr();
-		report_prefix_pop();
+		run_event_test(argv[1], test_chained_sw_incr,
+			       false);
 	} else if (strcmp(argv[1], "pmu-chain-promotion") == 0) {
-		report_prefix_push(argv[1]);
-		test_chain_promotion();
-		report_prefix_pop();
+		run_event_test(argv[1], test_chain_promotion, false);
 	} else if (strcmp(argv[1], "pmu-overflow-interrupt") == 0) {
-		report_prefix_push(argv[1]);
-		test_overflow_interrupt();
-		report_prefix_pop();
+		run_event_test(argv[1], test_overflow_interrupt, false);
 	} else {
 		report_abort("Unknown sub-test '%s'", argv[1]);
 	}

From patchwork Thu Jan 26 16:53:48 2023
X-Patchwork-Submitter: Ricardo Koller
X-Patchwork-Id: 13117469
Date: Thu, 26 Jan 2023 16:53:48 +0000
In-Reply-To: <20230126165351.2561582-1-ricarkol@google.com>
References: <20230126165351.2561582-1-ricarkol@google.com>
Message-ID: <20230126165351.2561582-4-ricarkol@google.com>
Subject: [kvm-unit-tests PATCH v4 3/6] arm: pmu: Rename ALL_SET and PRE_OVERFLOW
From: Ricardo Koller
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev, andrew.jones@linux.dev
Cc: maz@kernel.org,
 alexandru.elisei@arm.com, eric.auger@redhat.com, oliver.upton@linux.dev,
 reijiw@google.com, Ricardo Koller
X-Mailing-List: kvm@vger.kernel.org

Given that the arm PMU tests now handle 64-bit counters and overflows,
it's better to be precise about what the ALL_SET and PRE_OVERFLOW
macros actually are. Given that they are both 32-bit values, just add
_32 to both of them.

Signed-off-by: Ricardo Koller
Reviewed-by: Eric Auger
---
 arm/pmu.c | 78 +++++++++++++++++++++++++++----------------------------
 1 file changed, 39 insertions(+), 39 deletions(-)

diff --git a/arm/pmu.c b/arm/pmu.c
index 06cbd73..08e956d 100644
--- a/arm/pmu.c
+++ b/arm/pmu.c
@@ -54,9 +54,9 @@
 #define EXT_COMMON_EVENTS_LOW	0x4000
 #define EXT_COMMON_EVENTS_HIGH	0x403F
 
-#define ALL_SET			0x00000000FFFFFFFFULL
+#define ALL_SET_32		0x00000000FFFFFFFFULL
 #define ALL_CLEAR		0x0000000000000000ULL
-#define PRE_OVERFLOW		0x00000000FFFFFFF0ULL
+#define PRE_OVERFLOW_32		0x00000000FFFFFFF0ULL
 #define PRE_OVERFLOW2		0x00000000FFFFFFDCULL
 
 #define PMU_PPI			23
 
@@ -153,11 +153,11 @@ static void pmu_reset(void)
 	/* reset all counters, counting disabled at PMCR level*/
 	set_pmcr(pmu.pmcr_ro | PMU_PMCR_LC | PMU_PMCR_C | PMU_PMCR_P);
 	/* Disable all counters */
-	write_sysreg(ALL_SET, PMCNTENCLR);
+	write_sysreg(ALL_SET_32, PMCNTENCLR);
 	/* clear overflow reg */
-	write_sysreg(ALL_SET, PMOVSR);
+	write_sysreg(ALL_SET_32, PMOVSR);
 	/* disable overflow interrupts on all counters */
-	write_sysreg(ALL_SET, PMINTENCLR);
+	write_sysreg(ALL_SET_32, PMINTENCLR);
 	isb();
 }
 
@@ -322,7 +322,7 @@ static void irq_handler(struct pt_regs *regs)
 				pmu_stats.bitmap |= 1 << i;
 			}
 		}
-		write_sysreg(ALL_SET, pmovsclr_el0);
+		write_sysreg(ALL_SET_32, pmovsclr_el0);
 		isb();
 	} else {
 		pmu_stats.unexpected = true;
@@ -346,11 +346,11 @@ static void pmu_reset(void)
 	/* reset all counters, counting disabled at PMCR level*/
 	set_pmcr(pmu.pmcr_ro | PMU_PMCR_LC | PMU_PMCR_C | PMU_PMCR_P);
 	/* Disable all counters */
-	write_sysreg_s(ALL_SET,
		       PMCNTENCLR_EL0);
+	write_sysreg_s(ALL_SET_32, PMCNTENCLR_EL0);
 	/* clear overflow reg */
-	write_sysreg(ALL_SET, pmovsclr_el0);
+	write_sysreg(ALL_SET_32, pmovsclr_el0);
 	/* disable overflow interrupts on all counters */
-	write_sysreg(ALL_SET, pmintenclr_el1);
+	write_sysreg(ALL_SET_32, pmintenclr_el1);
 	pmu_reset_stats();
 	isb();
 }
 
@@ -463,7 +463,7 @@ static void test_basic_event_count(bool overflow_at_64bits)
 	write_regn_el0(pmevtyper, 1, INST_RETIRED | PMEVTYPER_EXCLUDE_EL0);
 
 	/* disable all counters */
-	write_sysreg_s(ALL_SET, PMCNTENCLR_EL0);
+	write_sysreg_s(ALL_SET_32, PMCNTENCLR_EL0);
 	report(!read_sysreg_s(PMCNTENCLR_EL0) && !read_sysreg_s(PMCNTENSET_EL0),
 	       "pmcntenclr: disable all counters");
 
@@ -476,8 +476,8 @@ static void test_basic_event_count(bool overflow_at_64bits)
 	report(get_pmcr() == (pmu.pmcr_ro | PMU_PMCR_LC), "pmcr: reset counters");
 
 	/* Preset counter #0 to pre overflow value to trigger an overflow */
-	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
-	report(read_regn_el0(pmevcntr, 0) == PRE_OVERFLOW,
+	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
+	report(read_regn_el0(pmevcntr, 0) == PRE_OVERFLOW_32,
 	       "counter #0 preset to pre-overflow value");
 	report(!read_regn_el0(pmevcntr, 1), "counter #1 is 0");
 
@@ -499,11 +499,11 @@ static void test_basic_event_count(bool overflow_at_64bits)
 	       "pmcntenset: just enabled #0 and #1");
 
 	/* clear overflow register */
-	write_sysreg(ALL_SET, pmovsclr_el0);
+	write_sysreg(ALL_SET_32, pmovsclr_el0);
 	report(!read_sysreg(pmovsclr_el0), "check overflow reg is 0");
 
 	/* disable overflow interrupts on all counters*/
-	write_sysreg(ALL_SET, pmintenclr_el1);
+	write_sysreg(ALL_SET_32, pmintenclr_el1);
 	report(!read_sysreg(pmintenclr_el1),
 	       "pmintenclr_el1=0, all interrupts disabled");
 
@@ -551,8 +551,8 @@ static void test_mem_access(bool overflow_at_64bits)
 
 	pmu_reset();
 
-	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
-	write_regn_el0(pmevcntr, 1, PRE_OVERFLOW);
+	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
+	write_regn_el0(pmevcntr, 1, PRE_OVERFLOW_32);
 	write_sysreg_s(0x3, PMCNTENSET_EL0);
 	isb();
 	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
@@ -566,7 +566,7 @@ static void test_mem_access(bool overflow_at_64bits)
 static void test_sw_incr(bool overflow_at_64bits)
 {
 	uint32_t events[] = {SW_INCR, SW_INCR};
-	uint64_t cntr0 = (PRE_OVERFLOW + 100) & pmevcntr_mask();
+	uint64_t cntr0 = (PRE_OVERFLOW_32 + 100) & pmevcntr_mask();
 	int i;
 
 	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
@@ -580,7 +580,7 @@ static void test_sw_incr(bool overflow_at_64bits)
 	/* enable counters #0 and #1 */
 	write_sysreg_s(0x3, PMCNTENSET_EL0);
-	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
+	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
 	isb();
 
 	for (i = 0; i < 100; i++)
@@ -588,12 +588,12 @@ static void test_sw_incr(bool overflow_at_64bits)
 	isb();
 	report_info("SW_INCR counter #0 has value %ld", read_regn_el0(pmevcntr, 0));
-	report(read_regn_el0(pmevcntr, 0) == PRE_OVERFLOW,
+	report(read_regn_el0(pmevcntr, 0) == PRE_OVERFLOW_32,
 	       "PWSYNC does not increment if PMCR.E is unset");
 
 	pmu_reset();
 
-	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
+	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
 	write_sysreg_s(0x3, PMCNTENSET_EL0);
 	set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
 	isb();
@@ -623,7 +623,7 @@ static void test_chained_counters(bool unused)
 	write_regn_el0(pmevtyper, 1, CHAIN | PMEVTYPER_EXCLUDE_EL0);
 	/* enable counters #0 and #1 */
 	write_sysreg_s(0x3, PMCNTENSET_EL0);
-	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
+	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
 
 	precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E);
 
@@ -635,15 +635,15 @@ static void test_chained_counters(bool unused)
 	pmu_reset();
 
 	write_sysreg_s(0x3, PMCNTENSET_EL0);
-	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
+	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
 	write_regn_el0(pmevcntr, 1, 0x1);
 	precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E);
 	report_info("overflow reg = 0x%lx", read_sysreg(pmovsclr_el0));
 	report(read_regn_el0(pmevcntr, 1) == 2, "CHAIN counter #1 set to 2");
 	report(read_sysreg(pmovsclr_el0) == 0x1, "overflow recorded for chained incr #2");
 
-	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
-	write_regn_el0(pmevcntr, 1, ALL_SET);
+	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
+	write_regn_el0(pmevcntr, 1, ALL_SET_32);
 
 	precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E);
 	report_info("overflow reg = 0x%lx", read_sysreg(pmovsclr_el0));
@@ -654,8 +654,8 @@ static void test_chained_sw_incr(bool unused)
 {
 	uint32_t events[] = {SW_INCR, CHAIN};
-	uint64_t cntr0 = (PRE_OVERFLOW + 100) & pmevcntr_mask();
-	uint64_t cntr1 = (ALL_SET + 1) & pmevcntr_mask();
+	uint64_t cntr0 = (PRE_OVERFLOW_32 + 100) & pmevcntr_mask();
+	uint64_t cntr1 = (ALL_SET_32 + 1) & pmevcntr_mask();
 	int i;
 
 	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
@@ -668,7 +668,7 @@ static void test_chained_sw_incr(bool unused)
 	/* enable counters #0 and #1 */
 	write_sysreg_s(0x3, PMCNTENSET_EL0);
-	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
+	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
 	set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
 	isb();
 
@@ -686,8 +686,8 @@ static void test_chained_sw_incr(bool unused)
 	pmu_reset();
 
 	write_regn_el0(pmevtyper, 1, events[1] | PMEVTYPER_EXCLUDE_EL0);
-	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
-	write_regn_el0(pmevcntr, 1, ALL_SET);
+	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
+	write_regn_el0(pmevcntr, 1, ALL_SET_32);
 	write_sysreg_s(0x3, PMCNTENSET_EL0);
 	set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
 	isb();
@@ -725,7 +725,7 @@ static void test_chain_promotion(bool unused)
 	/* Only enable even counter */
 	pmu_reset();
-	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
+	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
 	write_sysreg_s(0x1, PMCNTENSET_EL0);
 	isb();
 
@@ -873,8 +873,8 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
 	write_regn_el0(pmevtyper, 0, MEM_ACCESS | PMEVTYPER_EXCLUDE_EL0);
 	write_regn_el0(pmevtyper, 1, SW_INCR | PMEVTYPER_EXCLUDE_EL0);
 	write_sysreg_s(0x3, PMCNTENSET_EL0);
-	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
-	write_regn_el0(pmevcntr, 1, PRE_OVERFLOW);
+	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
+	write_regn_el0(pmevcntr, 1, PRE_OVERFLOW_32);
 	isb();
 
 	/* interrupts are disabled (PMINTENSET_EL1 == 0) */
@@ -893,13 +893,13 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
 	isb();
 	report(expect_interrupts(0), "no overflow interrupt after counting");
 
-	/* enable interrupts (PMINTENSET_EL1 <= ALL_SET) */
+	/* enable interrupts (PMINTENSET_EL1 <= ALL_SET_32) */
 
 	pmu_reset_stats();
 
-	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
-	write_regn_el0(pmevcntr, 1, PRE_OVERFLOW);
-	write_sysreg(ALL_SET, pmintenset_el1);
+	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
+	write_regn_el0(pmevcntr, 1, PRE_OVERFLOW_32);
+	write_sysreg(ALL_SET_32, pmintenset_el1);
 	isb();
 
 	mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E);
@@ -916,7 +916,7 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
 	pmu_reset_stats();
 
 	write_regn_el0(pmevtyper, 1, CHAIN | PMEVTYPER_EXCLUDE_EL0);
-	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
+	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
 	isb();
 	mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E);
 	report(expect_interrupts(0x1),
@@ -924,8 +924,8 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
 	/* overflow on odd counter */
 	pmu_reset_stats();
-	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
-	write_regn_el0(pmevcntr, 1, ALL_SET);
+	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
+	write_regn_el0(pmevcntr, 1, ALL_SET_32);
 	isb();
 	mem_access_loop(addr, 400, pmu.pmcr_ro | PMU_PMCR_E);
 	report(expect_interrupts(0x3),

From patchwork Thu Jan 26 16:53:49 2023
X-Patchwork-Submitter: Ricardo Koller
X-Patchwork-Id: 13117470
Date: Thu, 26 Jan 2023 16:53:49 +0000
In-Reply-To: <20230126165351.2561582-1-ricarkol@google.com>
References: <20230126165351.2561582-1-ricarkol@google.com>
Message-ID: <20230126165351.2561582-5-ricarkol@google.com>
Subject: [kvm-unit-tests PATCH v4 4/6] arm: pmu: Add tests for 64-bit overflows
From: Ricardo Koller
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev, andrew.jones@linux.dev
Cc: maz@kernel.org, alexandru.elisei@arm.com, eric.auger@redhat.com,
 oliver.upton@linux.dev, reijiw@google.com, Ricardo Koller
X-Mailing-List: kvm@vger.kernel.org

Modify all tests checking overflows to support both 32-bit
(PMCR_EL0.LP == 0) and 64-bit overflows (PMCR_EL0.LP == 1). 64-bit
overflows are only supported on PMUv3p5.

Note that chained tests do not implement "overflow_at_64bits == true".
That's because there are no CHAIN events when "PMCR_EL0.LP == 1" (for
more details see the AArch64.IncrementEventCounter() pseudocode in the
ARM ARM DDI 0487H.a, J1.1.1 "aarch64/debug").
Signed-off-by: Ricardo Koller
Reviewed-by: Reiji Watanabe
Reviewed-by: Eric Auger
---
 arm/pmu.c | 100 ++++++++++++++++++++++++++++++++++++------------------
 1 file changed, 67 insertions(+), 33 deletions(-)

diff --git a/arm/pmu.c b/arm/pmu.c
index 08e956d..082fb41 100644
--- a/arm/pmu.c
+++ b/arm/pmu.c
@@ -28,6 +28,7 @@
 #define PMU_PMCR_X         (1 << 4)
 #define PMU_PMCR_DP        (1 << 5)
 #define PMU_PMCR_LC        (1 << 6)
+#define PMU_PMCR_LP        (1 << 7)
 #define PMU_PMCR_N_SHIFT   11
 #define PMU_PMCR_N_MASK    0x1f
 #define PMU_PMCR_ID_SHIFT  16
@@ -57,8 +58,12 @@
 #define ALL_SET_32          0x00000000FFFFFFFFULL
 #define ALL_CLEAR           0x0000000000000000ULL
 #define PRE_OVERFLOW_32     0x00000000FFFFFFF0ULL
+#define PRE_OVERFLOW_64     0xFFFFFFFFFFFFFFF0ULL
 #define PRE_OVERFLOW2       0x00000000FFFFFFDCULL
 
+#define PRE_OVERFLOW(__overflow_at_64bits)				\
+	(__overflow_at_64bits ? PRE_OVERFLOW_64 : PRE_OVERFLOW_32)
+
 #define PMU_PPI             23
 
 struct pmu {
@@ -448,8 +453,10 @@ static bool check_overflow_prerequisites(bool overflow_at_64bits)
 static void test_basic_event_count(bool overflow_at_64bits)
 {
 	uint32_t implemented_counter_mask, non_implemented_counter_mask;
-	uint32_t counter_mask;
+	uint64_t pre_overflow = PRE_OVERFLOW(overflow_at_64bits);
+	uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
 	uint32_t events[] = {CPU_CYCLES, INST_RETIRED};
+	uint32_t counter_mask;
 
 	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
 	    !check_overflow_prerequisites(overflow_at_64bits))
@@ -471,13 +478,13 @@ static void test_basic_event_count(bool overflow_at_64bits)
 	 * clear cycle and all event counters and allow counter enablement
 	 * through PMCNTENSET. LC is RES1.
 	 */
-	set_pmcr(pmu.pmcr_ro | PMU_PMCR_LC | PMU_PMCR_C | PMU_PMCR_P);
+	set_pmcr(pmu.pmcr_ro | PMU_PMCR_LC | PMU_PMCR_C | PMU_PMCR_P | pmcr_lp);
 	isb();
-	report(get_pmcr() == (pmu.pmcr_ro | PMU_PMCR_LC), "pmcr: reset counters");
+	report(get_pmcr() == (pmu.pmcr_ro | PMU_PMCR_LC | pmcr_lp), "pmcr: reset counters");
 
 	/* Preset counter #0 to pre overflow value to trigger an overflow */
-	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
-	report(read_regn_el0(pmevcntr, 0) == PRE_OVERFLOW_32,
+	write_regn_el0(pmevcntr, 0, pre_overflow);
+	report(read_regn_el0(pmevcntr, 0) == pre_overflow,
 		"counter #0 preset to pre-overflow value");
 	report(!read_regn_el0(pmevcntr, 1), "counter #1 is 0");
 
@@ -530,6 +537,8 @@ static void test_mem_access(bool overflow_at_64bits)
 {
 	void *addr = malloc(PAGE_SIZE);
 	uint32_t events[] = {MEM_ACCESS, MEM_ACCESS};
+	uint64_t pre_overflow = PRE_OVERFLOW(overflow_at_64bits);
+	uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
 
 	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
 	    !check_overflow_prerequisites(overflow_at_64bits))
@@ -541,7 +550,7 @@ static void test_mem_access(bool overflow_at_64bits)
 	write_regn_el0(pmevtyper, 1, MEM_ACCESS | PMEVTYPER_EXCLUDE_EL0);
 	write_sysreg_s(0x3, PMCNTENSET_EL0);
 	isb();
-	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
+	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
 	report_info("counter #0 is %ld (MEM_ACCESS)", read_regn_el0(pmevcntr, 0));
 	report_info("counter #1 is %ld (MEM_ACCESS)", read_regn_el0(pmevcntr, 1));
 	/* We may measure more than 20 mem access depending on the core */
@@ -551,11 +560,11 @@ static void test_mem_access(bool overflow_at_64bits)
 
 	pmu_reset();
 
-	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
-	write_regn_el0(pmevcntr, 1, PRE_OVERFLOW_32);
+	write_regn_el0(pmevcntr, 0, pre_overflow);
+	write_regn_el0(pmevcntr, 1, pre_overflow);
 	write_sysreg_s(0x3, PMCNTENSET_EL0);
 	isb();
-	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
+	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
 	report(read_sysreg(pmovsclr_el0) == 0x3,
 	       "Ran 20 mem accesses with expected overflows on both counters");
 	report_info("cnt#0 = %ld cnt#1=%ld overflow=0x%lx",
 		    read_regn_el0(pmevcntr, 0), read_regn_el0(pmevcntr, 1),
 		    read_sysreg(pmovsclr_el0));
 }
@@ -565,8 +574,10 @@
 
 static void test_sw_incr(bool overflow_at_64bits)
 {
+	uint64_t pre_overflow = PRE_OVERFLOW(overflow_at_64bits);
+	uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
 	uint32_t events[] = {SW_INCR, SW_INCR};
-	uint64_t cntr0 = (PRE_OVERFLOW_32 + 100) & pmevcntr_mask();
+	uint64_t cntr0 = (pre_overflow + 100) & pmevcntr_mask();
 	int i;
 
 	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
@@ -580,7 +591,7 @@ static void test_sw_incr(bool overflow_at_64bits)
 
 	/* enable counters #0 and #1 */
 	write_sysreg_s(0x3, PMCNTENSET_EL0);
-	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
+	write_regn_el0(pmevcntr, 0, pre_overflow);
 	isb();
 
 	for (i = 0; i < 100; i++)
@@ -588,14 +599,14 @@ static void test_sw_incr(bool overflow_at_64bits)
 	isb();
 
 	report_info("SW_INCR counter #0 has value %ld", read_regn_el0(pmevcntr, 0));
-	report(read_regn_el0(pmevcntr, 0) == PRE_OVERFLOW_32,
+	report(read_regn_el0(pmevcntr, 0) == pre_overflow,
 		"PWSYNC does not increment if PMCR.E is unset");
 
 	pmu_reset();
 
-	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
+	write_regn_el0(pmevcntr, 0, pre_overflow);
 	write_sysreg_s(0x3, PMCNTENSET_EL0);
-	set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
+	set_pmcr(pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
 	isb();
 
 	for (i = 0; i < 100; i++)
@@ -613,6 +624,7 @@
 static void test_chained_counters(bool unused)
 {
 	uint32_t events[] = {CPU_CYCLES, CHAIN};
+	uint64_t all_set = pmevcntr_mask();
 
 	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
 		return;
@@ -643,11 +655,11 @@ static void test_chained_counters(bool unused)
 	report(read_sysreg(pmovsclr_el0) == 0x1, "overflow recorded for chained incr #2");
 
 	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
-	write_regn_el0(pmevcntr, 1, ALL_SET_32);
+	write_regn_el0(pmevcntr, 1, all_set);
 	precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E);
 
 	report_info("overflow reg = 0x%lx", read_sysreg(pmovsclr_el0));
-	report(!read_regn_el0(pmevcntr, 1), "CHAIN counter #1 wrapped");
+	report(read_regn_el0(pmevcntr, 1) == 0, "CHAIN counter #1 wrapped");
 	report(read_sysreg(pmovsclr_el0) == 0x3, "overflow on even and odd counters");
 }
@@ -855,6 +867,9 @@ static bool expect_interrupts(uint32_t bitmap)
 
 static void test_overflow_interrupt(bool overflow_at_64bits)
 {
+	uint64_t pre_overflow = PRE_OVERFLOW(overflow_at_64bits);
+	uint64_t all_set = pmevcntr_mask();
+	uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
 	uint32_t events[] = {MEM_ACCESS, SW_INCR};
 	void *addr = malloc(PAGE_SIZE);
 	int i;
@@ -873,16 +888,16 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
 	write_regn_el0(pmevtyper, 0, MEM_ACCESS | PMEVTYPER_EXCLUDE_EL0);
 	write_regn_el0(pmevtyper, 1, SW_INCR | PMEVTYPER_EXCLUDE_EL0);
 	write_sysreg_s(0x3, PMCNTENSET_EL0);
-	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
-	write_regn_el0(pmevcntr, 1, PRE_OVERFLOW_32);
+	write_regn_el0(pmevcntr, 0, pre_overflow);
+	write_regn_el0(pmevcntr, 1, pre_overflow);
 	isb();
 
 	/* interrupts are disabled (PMINTENSET_EL1 == 0) */
-	mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E);
+	mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
 	report(expect_interrupts(0), "no overflow interrupt after preset");
 
-	set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
+	set_pmcr(pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
 	isb();
 
 	for (i = 0; i < 100; i++)
@@ -897,12 +912,12 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
 
 	pmu_reset_stats();
 
-	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
-	write_regn_el0(pmevcntr, 1, PRE_OVERFLOW_32);
+	write_regn_el0(pmevcntr, 0, pre_overflow);
+	write_regn_el0(pmevcntr, 1, pre_overflow);
 	write_sysreg(ALL_SET_32, pmintenset_el1);
 	isb();
-	mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E);
+	mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
 
 	for (i = 0; i < 100; i++)
 		write_sysreg(0x3, pmswinc_el0);
@@ -911,25 +926,40 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
 	report(expect_interrupts(0x3),
 		"overflow interrupts expected on #0 and #1");
 
-	/* promote to 64-b */
+	/*
+	 * promote to 64-b:
+	 *
+	 * This only applies to the !overflow_at_64bits case, as
+	 * overflow_at_64bits doesn't implement CHAIN events. The
+	 * overflow_at_64bits case just checks that chained counters are
+	 * not incremented when PMCR.LP == 1.
+	 */
 	pmu_reset_stats();
 
 	write_regn_el0(pmevtyper, 1, CHAIN | PMEVTYPER_EXCLUDE_EL0);
-	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
+	write_regn_el0(pmevcntr, 0, pre_overflow);
 	isb();
-	mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E);
-	report(expect_interrupts(0x1),
-		"expect overflow interrupt on 32b boundary");
+	mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
+	report(expect_interrupts(0x1), "expect overflow interrupt");
 
 	/* overflow on odd counter */
 	pmu_reset_stats();
-	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
-	write_regn_el0(pmevcntr, 1, ALL_SET_32);
+	write_regn_el0(pmevcntr, 0, pre_overflow);
+	write_regn_el0(pmevcntr, 1, all_set);
 	isb();
-	mem_access_loop(addr, 400, pmu.pmcr_ro | PMU_PMCR_E);
-	report(expect_interrupts(0x3),
-		"expect overflow interrupt on even and odd counter");
+	mem_access_loop(addr, 400, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
+	if (overflow_at_64bits) {
+		report(expect_interrupts(0x1),
+		       "expect overflow interrupt on even counter");
+		report(read_regn_el0(pmevcntr, 1) == all_set,
+		       "Odd counter did not change");
+	} else {
+		report(expect_interrupts(0x3),
+		       "expect overflow interrupt on even and odd counter");
+		report(read_regn_el0(pmevcntr, 1) != all_set,
+		       "Odd counter wrapped");
+	}
 }
 #endif
@@ -1138,10 +1168,13 @@ int main(int argc, char *argv[])
 		report_prefix_pop();
 	} else if (strcmp(argv[1], "pmu-basic-event-count") == 0) {
 		run_event_test(argv[1], test_basic_event_count, false);
+		run_event_test(argv[1], test_basic_event_count, true);
 	} else if (strcmp(argv[1], "pmu-mem-access") == 0) {
 		run_event_test(argv[1], test_mem_access, false);
+		run_event_test(argv[1], test_mem_access, true);
 	} else if (strcmp(argv[1], "pmu-sw-incr") == 0) {
 		run_event_test(argv[1], test_sw_incr, false);
+		run_event_test(argv[1], test_sw_incr, true);
 	} else if (strcmp(argv[1], "pmu-chained-counters") == 0) {
 		run_event_test(argv[1], test_chained_counters, false);
 	} else if (strcmp(argv[1], "pmu-chained-sw-incr") == 0) {
@@ -1150,6 +1183,7 @@ int main(int argc, char *argv[])
 		run_event_test(argv[1], test_chain_promotion, false);
 	} else if (strcmp(argv[1], "pmu-overflow-interrupt") == 0) {
 		run_event_test(argv[1], test_overflow_interrupt, false);
+		run_event_test(argv[1], test_overflow_interrupt, true);
 	} else {
 		report_abort("Unknown sub-test '%s'", argv[1]);
 	}

From patchwork Thu Jan 26 16:53:50 2023
Date: Thu, 26 Jan 2023 16:53:50 +0000
In-Reply-To: <20230126165351.2561582-1-ricarkol@google.com>
References: <20230126165351.2561582-1-ricarkol@google.com>
Message-ID: <20230126165351.2561582-6-ricarkol@google.com>
Subject: [kvm-unit-tests PATCH v4 5/6] arm: pmu: Print counter values as hexadecimals
From: Ricardo Koller
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev, andrew.jones@linux.dev
Cc: maz@kernel.org, alexandru.elisei@arm.com, eric.auger@redhat.com, oliver.upton@linux.dev, reijiw@google.com, Ricardo Koller

The arm/pmu test prints the value of counters as %ld. Most tests start
with counters around 0 or UINT_MAX, so having something like -16 instead
of 0xffff_fff0 is not very useful. Report counter values as hexadecimals.

Reported-by: Alexandru Elisei
Signed-off-by: Ricardo Koller
Reviewed-by: Eric Auger
Reviewed-by: Oliver Upton
---
 arm/pmu.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/arm/pmu.c b/arm/pmu.c
index 082fb41..1e93ea2 100644
--- a/arm/pmu.c
+++ b/arm/pmu.c
@@ -551,8 +551,8 @@ static void test_mem_access(bool overflow_at_64bits)
 	write_sysreg_s(0x3, PMCNTENSET_EL0);
 	isb();
 	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
-	report_info("counter #0 is %ld (MEM_ACCESS)", read_regn_el0(pmevcntr, 0));
-	report_info("counter #1 is %ld (MEM_ACCESS)", read_regn_el0(pmevcntr, 1));
+	report_info("counter #0 is 0x%lx (MEM_ACCESS)", read_regn_el0(pmevcntr, 0));
+	report_info("counter #1 is 0x%lx (MEM_ACCESS)", read_regn_el0(pmevcntr, 1));
 	/* We may measure more than 20 mem access depending on the core */
 	report((read_regn_el0(pmevcntr, 0) == read_regn_el0(pmevcntr, 1)) &&
 	       (read_regn_el0(pmevcntr, 0) >= 20) && !read_sysreg(pmovsclr_el0),
@@ -567,7 +567,7 @@ static void test_mem_access(bool overflow_at_64bits)
 	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
 	report(read_sysreg(pmovsclr_el0) == 0x3,
 	       "Ran 20 mem accesses with expected overflows on both counters");
-	report_info("cnt#0 = %ld cnt#1=%ld overflow=0x%lx",
+	report_info("cnt#0=0x%lx cnt#1=0x%lx overflow=0x%lx",
 		    read_regn_el0(pmevcntr, 0), read_regn_el0(pmevcntr, 1),
 		    read_sysreg(pmovsclr_el0));
 }
@@ -598,7 +598,7 @@ static void test_sw_incr(bool overflow_at_64bits)
 		write_sysreg(0x1, pmswinc_el0);
 	isb();
 
-	report_info("SW_INCR counter #0 has value %ld", read_regn_el0(pmevcntr, 0));
+	report_info("SW_INCR counter #0 has value 0x%lx", read_regn_el0(pmevcntr, 0));
 	report(read_regn_el0(pmevcntr, 0) == pre_overflow,
 		"PWSYNC does not increment if PMCR.E is unset");
@@ -615,7 +615,7 @@ static void test_sw_incr(bool overflow_at_64bits)
 	isb();
 	report(read_regn_el0(pmevcntr, 0) == cntr0, "counter #0 after + 100 SW_INCR");
 	report(read_regn_el0(pmevcntr, 1) == 100, "counter #1 after + 100 SW_INCR");
-	report_info("counter values after 100 SW_INCR #0=%ld #1=%ld",
+	report_info("counter values after 100 SW_INCR #0=0x%lx #1=0x%lx",
 		    read_regn_el0(pmevcntr, 0), read_regn_el0(pmevcntr, 1));
 	report(read_sysreg(pmovsclr_el0) == 0x1,
 		"overflow on counter #0 after 100 SW_INCR");
@@ -691,7 +691,7 @@ static void test_chained_sw_incr(bool unused)
 	report((read_sysreg(pmovsclr_el0) == 0x1) &&
 		(read_regn_el0(pmevcntr, 1) == 1),
 		"overflow and chain counter incremented after 100 SW_INCR/CHAIN");
-	report_info("overflow=0x%lx, #0=%ld #1=%ld", read_sysreg(pmovsclr_el0),
+	report_info("overflow=0x%lx, #0=0x%lx #1=0x%lx", read_sysreg(pmovsclr_el0),
 		    read_regn_el0(pmevcntr, 0), read_regn_el0(pmevcntr, 1));
 
 	/* 64b SW_INCR and overflow on CHAIN counter*/
@@ -712,7 +712,7 @@ static void test_chained_sw_incr(bool unused)
 		(read_regn_el0(pmevcntr, 0) == cntr0) &&
 		(read_regn_el0(pmevcntr, 1) == cntr1),
 		"expected overflows and values after 100 SW_INCR/CHAIN");
-	report_info("overflow=0x%lx, #0=%ld #1=%ld", read_sysreg(pmovsclr_el0),
+	report_info("overflow=0x%lx, #0=0x%lx #1=0x%lx", read_sysreg(pmovsclr_el0),
 		    read_regn_el0(pmevcntr, 0), read_regn_el0(pmevcntr, 1));
 }
@@ -744,11 +744,11 @@ static void test_chain_promotion(bool unused)
 	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
 	report(!read_regn_el0(pmevcntr, 1) && (read_sysreg(pmovsclr_el0) == 0x1),
 		"odd counter did not increment on overflow if disabled");
-	report_info("MEM_ACCESS counter #0 has value %ld",
+	report_info("MEM_ACCESS counter #0 has value 0x%lx",
 		    read_regn_el0(pmevcntr, 0));
-	report_info("CHAIN counter #1 has value %ld",
+	report_info("CHAIN counter #1 has value 0x%lx",
 		    read_regn_el0(pmevcntr, 1));
-	report_info("overflow counter %ld", read_sysreg(pmovsclr_el0));
+	report_info("overflow counter 0x%lx", read_sysreg(pmovsclr_el0));
 
 	/* start at 0xFFFFFFDC, +20 with CHAIN enabled, +20 with CHAIN disabled */
 	pmu_reset();

From patchwork Thu Jan 26 16:53:51 2023
Date: Thu, 26 Jan 2023 16:53:51 +0000
In-Reply-To: <20230126165351.2561582-1-ricarkol@google.com>
References: <20230126165351.2561582-1-ricarkol@google.com>
Message-ID: <20230126165351.2561582-7-ricarkol@google.com>
Subject: [kvm-unit-tests PATCH v4 6/6] arm: pmu: Fix test_overflow_interrupt()
From: Ricardo Koller
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev, andrew.jones@linux.dev
Cc: maz@kernel.org, alexandru.elisei@arm.com, eric.auger@redhat.com, oliver.upton@linux.dev, reijiw@google.com, Ricardo Koller

test_overflow_interrupt() (from arm/pmu.c) has a test that only passes
because the previous test leaves behind the state needed to pass: the
overflow status register with the expected bits. The test (which should
fail on its own) does not re-enable the PMU after mem_access_loop(),
which clears the PMCR, before writing into the software increment
register.

Fix this by clearing the previous test's state (pmovsclr_el0) and by
enabling the PMU before the sw_incr test.

Fixes: 4f5ef94f3aac ("arm: pmu: Test overflow interrupts")
Reported-by: Reiji Watanabe
Signed-off-by: Ricardo Koller
Reviewed-by: Eric Auger
---
 arm/pmu.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arm/pmu.c b/arm/pmu.c
index 1e93ea2..f91b5ca 100644
--- a/arm/pmu.c
+++ b/arm/pmu.c
@@ -914,10 +914,15 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
 
 	write_regn_el0(pmevcntr, 0, pre_overflow);
 	write_regn_el0(pmevcntr, 1, pre_overflow);
+	write_sysreg(ALL_SET_32, pmovsclr_el0);
 	write_sysreg(ALL_SET_32, pmintenset_el1);
 	isb();
 
 	mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
+
+	set_pmcr(pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
+	isb();
+
 	for (i = 0; i < 100; i++)
 		write_sysreg(0x3, pmswinc_el0);