From patchwork Mon Jun 19 20:03:56 2023
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 13284899
From: Eric Auger
To: eric.auger.pro@gmail.com, eric.auger@redhat.com, kvm@vger.kernel.org,
 kvmarm@lists.linux.dev, andrew.jones@linux.dev, maz@kernel.org,
 will@kernel.org, oliver.upton@linux.dev, ricarkol@google.com,
 reijiw@google.com, alexandru.elisei@arm.com
Cc: mark.rutland@arm.com
Subject: [kvm-unit-tests PATCH v3 1/6] arm: pmu: pmu-chain-promotion: Improve debug messages
Date: Mon, 19 Jun 2023 22:03:56 +0200
Message-Id: <20230619200401.1963751-2-eric.auger@redhat.com>
In-Reply-To: <20230619200401.1963751-1-eric.auger@redhat.com>
References: <20230619200401.1963751-1-eric.auger@redhat.com>
List-ID: kvm@vger.kernel.org

The pmu-chain-promotion test is composed of several subtests. In case of
failures, the current logs are really difficult to analyze since they
look very similar and are sometimes duplicated for each subtest. Add
prefixes for each subtest and introduce a macro that prints the
registers we are most interested in, namely the first two counters and
the overflow status register.
Signed-off-by: Eric Auger
Reviewed-by: Alexandru Elisei

---

v1 -> v2:
- Added Alexandru's R-b
- s/2d/2nd
---
 arm/pmu.c | 63 ++++++++++++++++++++++++++++---------------------------
 1 file changed, 32 insertions(+), 31 deletions(-)

diff --git a/arm/pmu.c b/arm/pmu.c
index f6e95012..cc2e733e 100644
--- a/arm/pmu.c
+++ b/arm/pmu.c
@@ -715,6 +715,11 @@ static void test_chained_sw_incr(bool unused)
 	report_info("overflow=0x%lx, #0=0x%lx #1=0x%lx",
 		read_sysreg(pmovsclr_el0), read_regn_el0(pmevcntr, 0),
 		read_regn_el0(pmevcntr, 1));
 }
 
+#define PRINT_REGS(__s) \
+	report_info("%s #1=0x%lx #0=0x%lx overflow=0x%lx", __s, \
+		read_regn_el0(pmevcntr, 1), \
+		read_regn_el0(pmevcntr, 0), \
+		read_sysreg(pmovsclr_el0))
+
 static void test_chain_promotion(bool unused)
 {
@@ -725,6 +730,7 @@ static void test_chain_promotion(bool unused)
 		return;
 
 	/* Only enable CHAIN counter */
+	report_prefix_push("subtest1");
 	pmu_reset();
 	write_regn_el0(pmevtyper, 0, MEM_ACCESS | PMEVTYPER_EXCLUDE_EL0);
 	write_regn_el0(pmevtyper, 1, CHAIN | PMEVTYPER_EXCLUDE_EL0);
@@ -732,83 +738,81 @@
 	isb();
 
 	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
+	PRINT_REGS("post");
 	report(!read_regn_el0(pmevcntr, 0),
 		"chain counter not counting if even counter is disabled");
+	report_prefix_pop();
 
 	/* Only enable even counter */
+	report_prefix_push("subtest2");
 	pmu_reset();
 	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
 	write_sysreg_s(0x1, PMCNTENSET_EL0);
 	isb();
 
 	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
+	PRINT_REGS("post");
 	report(!read_regn_el0(pmevcntr, 1) && (read_sysreg(pmovsclr_el0) == 0x1),
 		"odd counter did not increment on overflow if disabled");
-	report_info("MEM_ACCESS counter #0 has value 0x%lx",
-		read_regn_el0(pmevcntr, 0));
-	report_info("CHAIN counter #1 has value 0x%lx",
-		read_regn_el0(pmevcntr, 1));
-	report_info("overflow counter 0x%lx", read_sysreg(pmovsclr_el0));
+	report_prefix_pop();
 
 	/* start at 0xFFFFFFDC, +20 with CHAIN enabled, +20 with CHAIN disabled */
+	report_prefix_push("subtest3");
 	pmu_reset();
 	write_sysreg_s(0x3, PMCNTENSET_EL0);
 	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW2_32);
 	isb();
+	PRINT_REGS("init");
 
 	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
-	report_info("MEM_ACCESS counter #0 has value 0x%lx",
-		read_regn_el0(pmevcntr, 0));
+	PRINT_REGS("After 1st loop");
 
 	/* disable the CHAIN event */
 	write_sysreg_s(0x2, PMCNTENCLR_EL0);
 	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
-	report_info("MEM_ACCESS counter #0 has value 0x%lx",
-		read_regn_el0(pmevcntr, 0));
+	PRINT_REGS("After 2nd loop");
 	report(read_sysreg(pmovsclr_el0) == 0x1,
 		"should have triggered an overflow on #0");
 	report(!read_regn_el0(pmevcntr, 1),
 		"CHAIN counter #1 shouldn't have incremented");
+	report_prefix_pop();
 
 	/* start at 0xFFFFFFDC, +20 with CHAIN disabled, +20 with CHAIN enabled */
+	report_prefix_push("subtest4");
 	pmu_reset();
 	write_sysreg_s(0x1, PMCNTENSET_EL0);
 	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW2_32);
 	isb();
-	report_info("counter #0 = 0x%lx, counter #1 = 0x%lx overflow=0x%lx",
-		read_regn_el0(pmevcntr, 0), read_regn_el0(pmevcntr, 1),
-		read_sysreg(pmovsclr_el0));
+	PRINT_REGS("init");
 
 	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
-	report_info("MEM_ACCESS counter #0 has value 0x%lx",
-		read_regn_el0(pmevcntr, 0));
+	PRINT_REGS("After 1st loop");
 
 	/* enable the CHAIN event */
 	write_sysreg_s(0x3, PMCNTENSET_EL0);
 	isb();
 	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
-	report_info("MEM_ACCESS counter #0 has value 0x%lx",
-		read_regn_el0(pmevcntr, 0));
+
+	PRINT_REGS("After 2nd loop");
 	report((read_regn_el0(pmevcntr, 1) == 1) &&
 		(read_sysreg(pmovsclr_el0) == 0x1),
 		"CHAIN counter enabled: CHAIN counter was incremented and overflow");
-
-	report_info("CHAIN counter #1 = 0x%lx, overflow=0x%lx",
-		read_regn_el0(pmevcntr, 1), read_sysreg(pmovsclr_el0));
+	report_prefix_pop();
 
 	/* start as MEM_ACCESS/CPU_CYCLES and move to CHAIN/MEM_ACCESS */
+	report_prefix_push("subtest5");
 	pmu_reset();
 	write_regn_el0(pmevtyper, 0, MEM_ACCESS | PMEVTYPER_EXCLUDE_EL0);
 	write_regn_el0(pmevtyper, 1, CPU_CYCLES | PMEVTYPER_EXCLUDE_EL0);
 	write_sysreg_s(0x3, PMCNTENSET_EL0);
 	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW2_32);
 	isb();
+	PRINT_REGS("init");
 
 	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
-	report_info("MEM_ACCESS counter #0 has value 0x%lx",
-		read_regn_el0(pmevcntr, 0));
+	PRINT_REGS("After 1st loop");
 
 	/* 0 becomes CHAINED */
 	write_sysreg_s(0x0, PMCNTENSET_EL0);
@@ -817,37 +821,34 @@
 	write_regn_el0(pmevcntr, 1, 0x0);
 
 	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
-	report_info("MEM_ACCESS counter #0 has value 0x%lx",
-		read_regn_el0(pmevcntr, 0));
+	PRINT_REGS("After 2nd loop");
 	report((read_regn_el0(pmevcntr, 1) == 1) &&
 		(read_sysreg(pmovsclr_el0) == 0x1),
 		"32b->64b: CHAIN counter incremented and overflow");
-
-	report_info("CHAIN counter #1 = 0x%lx, overflow=0x%lx",
-		read_regn_el0(pmevcntr, 1), read_sysreg(pmovsclr_el0));
+	report_prefix_pop();
 
 	/* start as CHAIN/MEM_ACCESS and move to MEM_ACCESS/CPU_CYCLES */
+	report_prefix_push("subtest6");
 	pmu_reset();
 	write_regn_el0(pmevtyper, 0, MEM_ACCESS | PMEVTYPER_EXCLUDE_EL0);
 	write_regn_el0(pmevtyper, 1, CHAIN | PMEVTYPER_EXCLUDE_EL0);
 	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW2_32);
 	write_sysreg_s(0x3, PMCNTENSET_EL0);
+	PRINT_REGS("init");
 
 	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
-	report_info("counter #0=0x%lx, counter #1=0x%lx",
-		read_regn_el0(pmevcntr, 0), read_regn_el0(pmevcntr, 1));
+	PRINT_REGS("After 1st loop");
 
 	write_sysreg_s(0x0, PMCNTENSET_EL0);
 	write_regn_el0(pmevtyper, 1, CPU_CYCLES | PMEVTYPER_EXCLUDE_EL0);
 	write_sysreg_s(0x3, PMCNTENSET_EL0);
 
 	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
+	PRINT_REGS("After 2nd loop");
 	report(read_sysreg(pmovsclr_el0) == 1,
 		"overflow is expected on counter 0");
-	report_info("counter #0=0x%lx, counter #1=0x%lx overflow=0x%lx",
-		read_regn_el0(pmevcntr, 0), read_regn_el0(pmevcntr, 1),
-		read_sysreg(pmovsclr_el0));
+	report_prefix_pop();
 }
 
 static bool expect_interrupts(uint32_t bitmap)

From patchwork Mon Jun 19 20:03:57 2023
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 13284900
From: Eric Auger
To: eric.auger.pro@gmail.com, eric.auger@redhat.com, kvm@vger.kernel.org,
 kvmarm@lists.linux.dev, andrew.jones@linux.dev, maz@kernel.org,
 will@kernel.org, oliver.upton@linux.dev, ricarkol@google.com,
 reijiw@google.com, alexandru.elisei@arm.com
Cc: mark.rutland@arm.com
Subject: [kvm-unit-tests PATCH v3 2/6] arm: pmu: pmu-chain-promotion: Introduce defines for count and margin values
Date: Mon, 19 Jun 2023 22:03:57 +0200
Message-Id: <20230619200401.1963751-3-eric.auger@redhat.com>
In-Reply-To: <20230619200401.1963751-1-eric.auger@redhat.com>
References: <20230619200401.1963751-1-eric.auger@redhat.com>
List-ID: kvm@vger.kernel.org

The pmu-chain-promotion test is composed of separate subtests. Some of
them apply settings for a first run of MEM_ACCESS loop iterations,
change the settings, and then run another MEM_ACCESS loop. The
PRE_OVERFLOW2 MEM_ACCESS counter init value is chosen so that the first
loop does not overflow and the second loop does. At the moment the
MEM_ACCESS count is hardcoded to 20 and PRE_OVERFLOW2 is set to
UINT32_MAX - 20 - 15, where 15 acts as a margin. Introduce defines for
the count and the margin so that it becomes easier to change them.
Signed-off-by: Eric Auger
Reviewed-by: Alexandru Elisei

---

v1 -> v2:
- 2d -> 2nd and use @COUNT in comment
- Added Alexandru's R-b
---
 arm/pmu.c | 35 +++++++++++++++++++++--------------
 1 file changed, 21 insertions(+), 14 deletions(-)

diff --git a/arm/pmu.c b/arm/pmu.c
index cc2e733e..51c0fe80 100644
--- a/arm/pmu.c
+++ b/arm/pmu.c
@@ -55,11 +55,18 @@
 #define EXT_COMMON_EVENTS_LOW	0x4000
 #define EXT_COMMON_EVENTS_HIGH	0x403F
 
-#define ALL_SET_32	0x00000000FFFFFFFFULL
+#define ALL_SET_32		0x00000000FFFFFFFFULL
 #define ALL_CLEAR		0x0000000000000000ULL
 #define PRE_OVERFLOW_32		0x00000000FFFFFFF0ULL
-#define PRE_OVERFLOW2_32	0x00000000FFFFFFDCULL
 #define PRE_OVERFLOW_64		0xFFFFFFFFFFFFFFF0ULL
+#define COUNT 20
+#define MARGIN 15
+/*
+ * PRE_OVERFLOW2 is set so that 1st @COUNT iterations do not
+ * produce 32b overflow and 2nd @COUNT iterations do. To accommodate
+ * for some observed variability we take into account a given @MARGIN
+ */
+#define PRE_OVERFLOW2_32	(ALL_SET_32 - COUNT - MARGIN)
 
 #define PRE_OVERFLOW(__overflow_at_64bits)	\
 	(__overflow_at_64bits ? PRE_OVERFLOW_64 : PRE_OVERFLOW_32)
@@ -737,7 +744,7 @@ static void test_chain_promotion(bool unused)
 	write_sysreg_s(0x2, PMCNTENSET_EL0);
 	isb();
 
-	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
+	mem_access_loop(addr, COUNT, pmu.pmcr_ro | PMU_PMCR_E);
 	PRINT_REGS("post");
 	report(!read_regn_el0(pmevcntr, 0),
 		"chain counter not counting if even counter is disabled");
@@ -750,13 +757,13 @@ static void test_chain_promotion(bool unused)
 	write_sysreg_s(0x1, PMCNTENSET_EL0);
 	isb();
 
-	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
+	mem_access_loop(addr, COUNT, pmu.pmcr_ro | PMU_PMCR_E);
 	PRINT_REGS("post");
 	report(!read_regn_el0(pmevcntr, 1) && (read_sysreg(pmovsclr_el0) == 0x1),
 		"odd counter did not increment on overflow if disabled");
 	report_prefix_pop();
 
-	/* start at 0xFFFFFFDC, +20 with CHAIN enabled, +20 with CHAIN disabled */
+	/* 1st COUNT with CHAIN enabled, next COUNT with CHAIN disabled */
 	report_prefix_push("subtest3");
 	pmu_reset();
 	write_sysreg_s(0x3, PMCNTENSET_EL0);
@@ -764,12 +771,12 @@ static void test_chain_promotion(bool unused)
 	isb();
 	PRINT_REGS("init");
 
-	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
+	mem_access_loop(addr, COUNT, pmu.pmcr_ro | PMU_PMCR_E);
 	PRINT_REGS("After 1st loop");
 
 	/* disable the CHAIN event */
 	write_sysreg_s(0x2, PMCNTENCLR_EL0);
-	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
+	mem_access_loop(addr, COUNT, pmu.pmcr_ro | PMU_PMCR_E);
 	PRINT_REGS("After 2nd loop");
 	report(read_sysreg(pmovsclr_el0) == 0x1,
 		"should have triggered an overflow on #0");
@@ -777,7 +784,7 @@ static void test_chain_promotion(bool unused)
 		"CHAIN counter #1 shouldn't have incremented");
 	report_prefix_pop();
 
-	/* start at 0xFFFFFFDC, +20 with CHAIN disabled, +20 with CHAIN enabled */
+	/* 1st COUNT with CHAIN disabled, next COUNT with CHAIN enabled */
 	report_prefix_push("subtest4");
 	pmu_reset();
 
@@ -786,13 +793,13 @@ static void test_chain_promotion(bool unused)
 	isb();
 	PRINT_REGS("init");
 
-	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
+	mem_access_loop(addr, COUNT, pmu.pmcr_ro | PMU_PMCR_E);
 	PRINT_REGS("After 1st loop");
 
 	/* enable the CHAIN event */
 	write_sysreg_s(0x3, PMCNTENSET_EL0);
 	isb();
-	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
+	mem_access_loop(addr, COUNT, pmu.pmcr_ro | PMU_PMCR_E);
 
 	PRINT_REGS("After 2nd loop");
@@ -811,7 +818,7 @@ static void test_chain_promotion(bool unused)
 	isb();
 	PRINT_REGS("init");
 
-	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
+	mem_access_loop(addr, COUNT, pmu.pmcr_ro | PMU_PMCR_E);
 	PRINT_REGS("After 1st loop");
 
 	/* 0 becomes CHAINED */
@@ -820,7 +827,7 @@ static void test_chain_promotion(bool unused)
 	write_sysreg_s(0x3, PMCNTENSET_EL0);
 	write_regn_el0(pmevcntr, 1, 0x0);
 
-	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
+	mem_access_loop(addr, COUNT, pmu.pmcr_ro | PMU_PMCR_E);
 	PRINT_REGS("After 2nd loop");
 
 	report((read_regn_el0(pmevcntr, 1) == 1) &&
@@ -837,14 +844,14 @@ static void test_chain_promotion(bool unused)
 	write_sysreg_s(0x3, PMCNTENSET_EL0);
 	PRINT_REGS("init");
 
-	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
+	mem_access_loop(addr, COUNT, pmu.pmcr_ro | PMU_PMCR_E);
 	PRINT_REGS("After 1st loop");
 
 	write_sysreg_s(0x0, PMCNTENSET_EL0);
 	write_regn_el0(pmevtyper, 1, CPU_CYCLES | PMEVTYPER_EXCLUDE_EL0);
 	write_sysreg_s(0x3, PMCNTENSET_EL0);
 
-	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
+	mem_access_loop(addr, COUNT, pmu.pmcr_ro | PMU_PMCR_E);
 	PRINT_REGS("After 2nd loop");
 	report(read_sysreg(pmovsclr_el0) == 1,
 		"overflow is expected on counter 0");

From patchwork Mon Jun 19 20:03:58 2023
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 13284901
From: Eric Auger
To: eric.auger.pro@gmail.com, eric.auger@redhat.com, kvm@vger.kernel.org,
 kvmarm@lists.linux.dev, andrew.jones@linux.dev, maz@kernel.org,
 will@kernel.org, oliver.upton@linux.dev, ricarkol@google.com,
 reijiw@google.com, alexandru.elisei@arm.com
Cc: mark.rutland@arm.com
Subject: [kvm-unit-tests PATCH v3 3/6] arm: pmu: Add extra DSB barriers in the mem_access loop
Date: Mon, 19 Jun 2023 22:03:58 +0200
Message-Id: <20230619200401.1963751-4-eric.auger@redhat.com>
In-Reply-To: <20230619200401.1963751-1-eric.auger@redhat.com>
References: <20230619200401.1963751-1-eric.auger@redhat.com>
List-ID: kvm@vger.kernel.org

The mem_access loop currently features only ISB barriers. However, the
loop counts the number of accesses to memory, and an ISB does not
guarantee that the PE cannot reorder memory accesses. Let's add a DSB
ISH before the write to PMCR_EL0 that enables the PMU, to make sure any
previous memory accesses aren't counted in the loop; another one after
the PMU gets enabled (to make sure loop memory accesses cannot be
reordered before the PMU gets enabled); and a last one after the last
iteration, before disabling the PMU.
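Put together, the three barriers bracket the measured window like this (aarch64-only, a consolidated view of the patched loop; the archived diff truncates the operand and clobber lists, so the ones shown here are my reconstruction and should be treated as an assumption):

```asm
	dsb	ish			// drain prior memory accesses
	msr	pmcr_el0, x_pmcr	// enable the PMU
	isb
	dsb	ish			// keep the loop's loads after the enable
	mov	x10, x_loop
1:	sub	x10, x10, #1
	ldr	x9, [x_addr]		// the counted MEM_ACCESS events
	cmp	x10, #0x0
	b.gt	1b
	dsb	ish			// retire the last load before disabling
	msr	pmcr_el0, xzr		// disable the PMU
	isb
```

The ISBs only synchronize the instruction stream (so the PMCR write takes effect); the DSBs are what pin the loads inside the enabled window.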
Signed-off-by: Eric Auger
Suggested-by: Alexandru Elisei
Reviewed-by: Alexandru Elisei

---

v2 -> v3:
- Added Alexandru's R-b
v1 -> v2:
- added yet another DSB after PMU enabled as suggested by Alexandru

This was discussed in
https://lore.kernel.org/all/YzxmHpV2rpfaUdWi@monolith.localdoman/
---
 arm/pmu.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arm/pmu.c b/arm/pmu.c
index 51c0fe80..74dd4c10 100644
--- a/arm/pmu.c
+++ b/arm/pmu.c
@@ -301,13 +301,16 @@ static void mem_access_loop(void *addr, long loop, uint32_t pmcr)
 {
 	uint64_t pmcr64 = pmcr;
 	asm volatile(
+	"       dsb     ish\n"
 	"       msr     pmcr_el0, %[pmcr]\n"
 	"       isb\n"
+	"       dsb     ish\n"
 	"       mov     x10, %[loop]\n"
 	"1:     sub     x10, x10, #1\n"
 	"       ldr     x9, [%[addr]]\n"
 	"       cmp     x10, #0x0\n"
 	"       b.gt    1b\n"
+	"       dsb     ish\n"
 	"       msr     pmcr_el0, xzr\n"
 	"       isb\n"
 	:

From patchwork Mon Jun 19 20:03:59 2023
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 13284903
From: Eric Auger
To: eric.auger.pro@gmail.com, eric.auger@redhat.com, kvm@vger.kernel.org,
 kvmarm@lists.linux.dev, andrew.jones@linux.dev, maz@kernel.org,
 will@kernel.org, oliver.upton@linux.dev, ricarkol@google.com,
 reijiw@google.com, alexandru.elisei@arm.com
Cc: mark.rutland@arm.com
Subject: [kvm-unit-tests PATCH v3 4/6] arm: pmu: Fix chain counter enable/disable sequences
Date: Mon, 19 Jun 2023 22:03:59 +0200
Message-Id: <20230619200401.1963751-5-eric.auger@redhat.com>
In-Reply-To: <20230619200401.1963751-1-eric.auger@redhat.com>
References: <20230619200401.1963751-1-eric.auger@redhat.com>
List-ID: kvm@vger.kernel.org

Some ARM ARM DDI 0487 revisions say that disabling/enabling a pair of
counters paired by a CHAIN event should follow a given sequence:

- Enable the high counter first, isb, enable the low counter
- Disable the low counter first, isb, disable the high counter

This was the case in revision Fc. However, this is no longer written in
subsequent revisions such as Ia. Anyway, just in case, and because it
also makes the code a little bit simpler, introduce two helpers to
enable/disable chain counters that execute those sequences, and replace
the existing PMCNTENCLR/ENSET calls (at least this cannot do any harm).

Also fix two write_sysreg_s(0x0, PMCNTENSET_EL0) calls in subtests 5 and
6 and replace them with PMCNTENCLR writes, since writing 0 to
PMCNTENSET_EL0 has no effect.

While at it, also remove a useless pmevtyper setting in
test_chained_sw_incr() since the type of the counter is not reset by
pmu_reset().

Signed-off-by: Eric Auger
Reviewed-by: Alexandru Elisei

---

v2 -> v3:
- use enable_chain_counter() in test_chain_promotion() and
  test_chained_sw_incr() as well. This also fixes the missing ISB
  reported by Alexandru.
- Also remove a useless pmevtyper setting in test_chained_sw_incr()
v1 -> v2:
- fix the enable_chain_counter()/disable_chain_counter() sequence,
  ie. swap n + 1 / n as reported by Alexandru.
- fix another comment using the 'low' terminology
---
 arm/pmu.c | 48 ++++++++++++++++++++++++++++++++----------------
 1 file changed, 32 insertions(+), 16 deletions(-)

diff --git a/arm/pmu.c b/arm/pmu.c
index 74dd4c10..0995a249 100644
--- a/arm/pmu.c
+++ b/arm/pmu.c
@@ -631,6 +631,22 @@ static void test_sw_incr(bool overflow_at_64bits)
 		"overflow on counter #0 after 100 SW_INCR");
 }
 
+static void enable_chain_counter(int even)
+{
+	write_sysreg_s(BIT(even + 1), PMCNTENSET_EL0); /* Enable the high counter first */
+	isb();
+	write_sysreg_s(BIT(even), PMCNTENSET_EL0); /* Enable the low counter */
+	isb();
+}
+
+static void disable_chain_counter(int even)
+{
+	write_sysreg_s(BIT(even), PMCNTENCLR_EL0); /* Disable the low counter first */
+	isb();
+	write_sysreg_s(BIT(even + 1), PMCNTENCLR_EL0); /* Disable the high counter */
+	isb();
+}
+
 static void test_chained_counters(bool unused)
 {
 	uint32_t events[] = {CPU_CYCLES, CHAIN};
@@ -643,9 +659,8 @@ static void test_chained_counters(bool unused)
 	write_regn_el0(pmevtyper, 0, CPU_CYCLES | PMEVTYPER_EXCLUDE_EL0);
 	write_regn_el0(pmevtyper, 1, CHAIN | PMEVTYPER_EXCLUDE_EL0);
-	/* enable counters #0 and #1 */
-	write_sysreg_s(0x3, PMCNTENSET_EL0);
 	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
+	enable_chain_counter(0);
 
 	precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E);
 
@@ -655,10 +670,10 @@ static void test_chained_counters(bool unused)
 	/* test 64b overflow */
 
 	pmu_reset();
-	write_sysreg_s(0x3, PMCNTENSET_EL0);
 
 	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
 	write_regn_el0(pmevcntr, 1, 0x1);
+	enable_chain_counter(0);
 	precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E);
 	report_info("overflow reg = 0x%lx", read_sysreg(pmovsclr_el0));
 	report(read_regn_el0(pmevcntr, 1) == 2, "CHAIN counter #1 set to 2");
@@ -687,8 +702,7 @@ static void test_chained_sw_incr(bool unused)
 	write_regn_el0(pmevtyper, 0, SW_INCR | PMEVTYPER_EXCLUDE_EL0);
 	write_regn_el0(pmevtyper, 1, CHAIN | PMEVTYPER_EXCLUDE_EL0);
-	/* enable counters #0 and #1 */
-	write_sysreg_s(0x3, PMCNTENSET_EL0);
+	enable_chain_counter(0);
 	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
 	set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
@@ -707,10 +721,9 @@ static void test_chained_sw_incr(bool unused)
 	/* 64b SW_INCR and overflow on CHAIN counter*/
 	pmu_reset();
 
-	write_regn_el0(pmevtyper, 1, events[1] | PMEVTYPER_EXCLUDE_EL0);
 	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
 	write_regn_el0(pmevcntr, 1, ALL_SET_32);
-	write_sysreg_s(0x3, PMCNTENSET_EL0);
+	enable_chain_counter(0);
 	set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
 	isb();
@@ -769,16 +782,17 @@ static void test_chain_promotion(bool unused)
 	/* 1st COUNT with CHAIN enabled, next COUNT with CHAIN disabled */
 	report_prefix_push("subtest3");
 	pmu_reset();
-	write_sysreg_s(0x3, PMCNTENSET_EL0);
 	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW2_32);
-	isb();
+	enable_chain_counter(0);
 	PRINT_REGS("init");
 
 	mem_access_loop(addr, COUNT, pmu.pmcr_ro | PMU_PMCR_E);
 	PRINT_REGS("After 1st loop");
 
 	/* disable the CHAIN event */
-	write_sysreg_s(0x2, PMCNTENCLR_EL0);
+	disable_chain_counter(0);
+	write_sysreg_s(0x1, PMCNTENSET_EL0); /* Enable the low counter */
+	isb();
 	mem_access_loop(addr, COUNT, pmu.pmcr_ro | PMU_PMCR_E);
 	PRINT_REGS("After 2nd loop");
 	report(read_sysreg(pmovsclr_el0) == 0x1,
@@ -799,9 +813,11 @@ static void test_chain_promotion(bool unused)
 	mem_access_loop(addr, COUNT, pmu.pmcr_ro | PMU_PMCR_E);
 	PRINT_REGS("After 1st loop");
 
-	/* enable the CHAIN event */
-	write_sysreg_s(0x3, PMCNTENSET_EL0);
+	/* Disable the low counter first and enable the chain counter */
+	write_sysreg_s(0x1, PMCNTENCLR_EL0);
 	isb();
+	enable_chain_counter(0);
+
 	mem_access_loop(addr, COUNT, pmu.pmcr_ro | PMU_PMCR_E);
 
 	PRINT_REGS("After 2nd loop");
@@ -825,10 +841,10 @@ static void test_chain_promotion(bool unused)
 	PRINT_REGS("After 1st loop");
 
 	/* 0 becomes CHAINED */
-	write_sysreg_s(0x0, PMCNTENSET_EL0);
+	write_sysreg_s(0x3, PMCNTENCLR_EL0);
 	write_regn_el0(pmevtyper, 1, CHAIN | PMEVTYPER_EXCLUDE_EL0);
-	write_sysreg_s(0x3, PMCNTENSET_EL0);
 	write_regn_el0(pmevcntr, 1, 0x0);
+	enable_chain_counter(0);
 
 	mem_access_loop(addr, COUNT, pmu.pmcr_ro | PMU_PMCR_E);
 	PRINT_REGS("After 2nd loop");
@@ -844,13 +860,13 @@ static void test_chain_promotion(bool unused)
 	write_regn_el0(pmevtyper, 0, MEM_ACCESS | PMEVTYPER_EXCLUDE_EL0);
 	write_regn_el0(pmevtyper, 1, CHAIN | PMEVTYPER_EXCLUDE_EL0);
 	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW2_32);
-	write_sysreg_s(0x3, PMCNTENSET_EL0);
+	enable_chain_counter(0);
 	PRINT_REGS("init");
 
 	mem_access_loop(addr, COUNT, pmu.pmcr_ro | PMU_PMCR_E);
 	PRINT_REGS("After 1st loop");
 
-	write_sysreg_s(0x0, PMCNTENSET_EL0);
+	disable_chain_counter(0);
 	write_regn_el0(pmevtyper, 1, CPU_CYCLES | PMEVTYPER_EXCLUDE_EL0);
 	write_sysreg_s(0x3, PMCNTENSET_EL0);

From patchwork Mon Jun 19 20:04:00 2023
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 13284902
content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=El7ZbFEFYO2h3mXEecy2DkF2h1Jjv/TFBhVrLMkx06Q=; b=gvkWtVl398LagnZ8yDMNB0ViRhW4bOcLw1UrP/4h+ovVX4gKN7n307waCXzPQshn5cwzTQ T2OLpyUG/CLI4lPQLPghYlgGsS0r0h1TsXj3Y1xv2yPwkhhoiWB3Ulvltvo4Kox2auiW/o 8M2y9G7bw/9zqu8TFDtE94SBBvxLT5o= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-586-kLXr9zHyPuWivPQiDJS8Rg-1; Mon, 19 Jun 2023 16:04:21 -0400 X-MC-Unique: kLXr9zHyPuWivPQiDJS8Rg-1 Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.rdu2.redhat.com [10.11.54.8]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 0DB00185A78E; Mon, 19 Jun 2023 20:04:21 +0000 (UTC) Received: from laptop.redhat.com (unknown [10.39.194.211]) by smtp.corp.redhat.com (Postfix) with ESMTP id 3346FC1603B; Mon, 19 Jun 2023 20:04:16 +0000 (UTC) From: Eric Auger To: eric.auger.pro@gmail.com, eric.auger@redhat.com, kvm@vger.kernel.org, kvmarm@lists.linux.dev, andrew.jones@linux.dev, maz@kernel.org, will@kernel.org, oliver.upton@linux.dev, ricarkol@google.com, reijiw@google.com, alexandru.elisei@arm.com Cc: mark.rutland@arm.com Subject: [kvm-unit-tests PATCH v3 5/6] arm: pmu: Add pmu-mem-access-reliability test Date: Mon, 19 Jun 2023 22:04:00 +0200 Message-Id: <20230619200401.1963751-6-eric.auger@redhat.com> In-Reply-To: <20230619200401.1963751-1-eric.auger@redhat.com> References: <20230619200401.1963751-1-eric.auger@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.8 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Add a new basic test that runs MEM_ACCESS loop over 100 iterations and make sure the number of measured MEM_ACCESS never overflows the margin. 
Some other PMU tests rely on this pattern, and if the MEM_ACCESS
measurement is not reliable, it is better to report it upfront rather
than confuse the user further.

Without the subsequent patch, this typically fails on ThunderX2 with
the following logs:

INFO: pmu: pmu-mem-access-reliability: 32-bit overflows: overflow=1 min=21 max=41 COUNT=20 MARGIN=15
FAIL: pmu: pmu-mem-access-reliability: 32-bit overflows: mem_access count is reliable

Signed-off-by: Eric Auger
Reviewed-by: Alexandru Elisei
Tested-by: Alexandru Elisei

---
v2 -> v3:
- rename variables as suggested by Alexandru, rework the traces
  accordingly. Use all_set.
v1 -> v2:
- use mem-access instead of memaccess as suggested by Mark
- simplify the logic and add comments in the test loop
---
 arm/pmu.c         | 59 +++++++++++++++++++++++++++++++++++++++++++++++
 arm/unittests.cfg |  6 +++++
 2 files changed, 65 insertions(+)

diff --git a/arm/pmu.c b/arm/pmu.c
index 0995a249..491d2958 100644
--- a/arm/pmu.c
+++ b/arm/pmu.c
@@ -56,6 +56,11 @@
 #define EXT_COMMON_EVENTS_HIGH	0x403F

 #define ALL_SET_32		0x00000000FFFFFFFFULL
+#define ALL_SET_64		0xFFFFFFFFFFFFFFFFULL
+
+#define ALL_SET(__overflow_at_64bits) \
+	(__overflow_at_64bits ? ALL_SET_64 : ALL_SET_32)
+
 #define ALL_CLEAR		0x0000000000000000ULL
 #define PRE_OVERFLOW_32		0x00000000FFFFFFF0ULL
 #define PRE_OVERFLOW_64		0xFFFFFFFFFFFFFFF0ULL
@@ -67,6 +72,10 @@
  * for some observed variability we take into account a given @MARGIN
  */
 #define PRE_OVERFLOW2_32	(ALL_SET_32 - COUNT - MARGIN)
+#define PRE_OVERFLOW2_64	(ALL_SET_64 - COUNT - MARGIN)
+
+#define PRE_OVERFLOW2(__overflow_at_64bits) \
+	(__overflow_at_64bits ? PRE_OVERFLOW2_64 : PRE_OVERFLOW2_32)

 #define PRE_OVERFLOW(__overflow_at_64bits) \
 	(__overflow_at_64bits ? PRE_OVERFLOW_64 : PRE_OVERFLOW_32)
@@ -744,6 +753,53 @@ static void test_chained_sw_incr(bool unused)
 		    read_regn_el0(pmevcntr, 0), \
 		    read_sysreg(pmovsclr_el0))

+/*
+ * This test checks that a mem access loop featuring COUNT accesses
+ * does not overflow with an init value of PRE_OVERFLOW2. It also
+ * records the min/max access count to see how much the counting
+ * is (un)reliable
+ */
+static void test_mem_access_reliability(bool overflow_at_64bits)
+{
+	uint32_t events[] = {MEM_ACCESS};
+	void *addr = malloc(PAGE_SIZE);
+	uint64_t cntr_val, num_events, max = 0, min = pmevcntr_mask();
+	uint64_t pre_overflow2 = PRE_OVERFLOW2(overflow_at_64bits);
+	uint64_t all_set = ALL_SET(overflow_at_64bits);
+	uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
+	bool overflow = false;
+
+	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
+	    !check_overflow_prerequisites(overflow_at_64bits))
+		return;
+
+	pmu_reset();
+	write_regn_el0(pmevtyper, 0, MEM_ACCESS | PMEVTYPER_EXCLUDE_EL0);
+	for (int i = 0; i < 100; i++) {
+		pmu_reset();
+		write_regn_el0(pmevcntr, 0, pre_overflow2);
+		write_sysreg_s(0x1, PMCNTENSET_EL0);
+		isb();
+		mem_access_loop(addr, COUNT, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
+		cntr_val = read_regn_el0(pmevcntr, 0);
+		if (cntr_val >= pre_overflow2) {
+			num_events = cntr_val - pre_overflow2;
+		} else {
+			/* unexpected counter overflow */
+			num_events = cntr_val + all_set - pre_overflow2;
+			overflow = true;
+			report_info("iter=%d num_events=%ld min=%ld max=%ld overflow!!!",
+				    i, num_events, min, max);
+		}
+		/* record extreme value */
+		max = MAX(num_events, max);
+		min = MIN(num_events, min);
+	}
+	report_info("overflow=%d min=%ld max=%ld expected=%d acceptable margin=%d",
+		    overflow, min, max, COUNT, MARGIN);
+	report(!overflow, "mem_access count is reliable");
+}
+
 static void test_chain_promotion(bool unused)
 {
 	uint32_t events[] = {MEM_ACCESS, CHAIN};
@@ -1201,6 +1257,9 @@ int main(int argc, char *argv[])
 	} else if (strcmp(argv[1], "pmu-basic-event-count") == 0) {
 		run_event_test(argv[1], test_basic_event_count, false);
 		run_event_test(argv[1], test_basic_event_count, true);
+	} else if (strcmp(argv[1], "pmu-mem-access-reliability") == 0) {
+		run_event_test(argv[1], test_mem_access_reliability, false);
+		run_event_test(argv[1], test_mem_access_reliability, true);
 	} else if (strcmp(argv[1], "pmu-mem-access") == 0) {
 		run_event_test(argv[1], test_mem_access, false);
 		run_event_test(argv[1], test_mem_access, true);
diff --git a/arm/unittests.cfg b/arm/unittests.cfg
index 5e67b558..fe601cbb 100644
--- a/arm/unittests.cfg
+++ b/arm/unittests.cfg
@@ -90,6 +90,12 @@ groups = pmu
 arch = arm64
 extra_params = -append 'pmu-mem-access'

+[pmu-mem-access-reliability]
+file = pmu.flat
+groups = pmu
+arch = arm64
+extra_params = -append 'pmu-mem-access-reliability'
+
 [pmu-sw-incr]
 file = pmu.flat
 groups = pmu

From patchwork Mon Jun 19 20:04:01 2023
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 13284904
From: Eric Auger
Subject: [kvm-unit-tests PATCH v3 6/6] arm: pmu-chain-promotion: Increase the count and margin values
Date: Mon, 19 Jun 2023 22:04:01 +0200
Message-Id: <20230619200401.1963751-7-eric.auger@redhat.com>
In-Reply-To: <20230619200401.1963751-1-eric.auger@redhat.com>
References: <20230619200401.1963751-1-eric.auger@redhat.com>

Let's increase the mem_access loop count by defining COUNT=250
(instead of 20) and define a more reasonable margin (100 instead of
15 previously), giving a better chance to accommodate HW
implementation variance. The values were chosen arbitrarily, erring
on the high side.

These values fix the random failures observed on ThunderX2 machines.

Signed-off-by: Eric Auger
Tested-by: Mark Rutland

---
 arm/pmu.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arm/pmu.c b/arm/pmu.c
index 491d2958..63822d19 100644
--- a/arm/pmu.c
+++ b/arm/pmu.c
@@ -64,8 +64,8 @@
 #define ALL_CLEAR		0x0000000000000000ULL
 #define PRE_OVERFLOW_32		0x00000000FFFFFFF0ULL
 #define PRE_OVERFLOW_64		0xFFFFFFFFFFFFFFF0ULL
-#define COUNT 20
-#define MARGIN 15
+#define COUNT 250
+#define MARGIN 100

 /*
  * PRE_OVERFLOW2 is set so that 1st @COUNT iterations do not
  * produce 32b overflow and 2nd @COUNT iterations do. To accommodate