From patchwork Mon Dec 16 20:47:48 2019
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 11295221
From: Eric Auger
To: eric.auger.pro@gmail.com, eric.auger@redhat.com, maz@kernel.org,
 kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org, qemu-devel@nongnu.org,
 qemu-arm@nongnu.org
Cc: drjones@redhat.com, andrew.murray@arm.com, andre.przywara@arm.com,
 peter.maydell@linaro.org, alexandru.elisei@arm.com
Subject: [kvm-unit-tests PATCH 01/10] arm64: Provide read/write_sysreg_s
Date: Mon, 16 Dec 2019 21:47:48 +0100
Message-Id: <20191216204757.4020-2-eric.auger@redhat.com>
In-Reply-To: <20191216204757.4020-1-eric.auger@redhat.com>
References: <20191216204757.4020-1-eric.auger@redhat.com>

From: Andrew Jones

Sometimes we need to test access to system registers which are missing
assembler mnemonics.
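For readers new to these helpers, a minimal usage sketch (the register
encodings below are the ones the later PMU patches in this series use; the
sys_reg() encoding macro is assumed to come from the same sysreg.h header):

/* Encode the registers by number, as the PMU patches below do. */
#define PMCEID1_EL0	sys_reg(11, 3, 9, 12, 7)
#define PMCNTENCLR_EL0	sys_reg(11, 3, 9, 12, 2)

static inline u64 read_pmceid1(void)
{
	/* expands to an mrs_s on the numerically encoded register */
	return read_sysreg_s(PMCEID1_EL0);
}

static inline void disable_all_event_counters(void)
{
	/* expands to an msr_s on the numerically encoded register */
	write_sysreg_s(0xFFFFFFFF, PMCNTENCLR_EL0);
}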
Signed-off-by: Andrew Jones
Reviewed-by: Alexandru Elisei
---
 lib/arm64/asm/sysreg.h | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/lib/arm64/asm/sysreg.h b/lib/arm64/asm/sysreg.h
index a03830b..a45eebd 100644
--- a/lib/arm64/asm/sysreg.h
+++ b/lib/arm64/asm/sysreg.h
@@ -38,6 +38,17 @@
 	asm volatile("msr " xstr(r) ", %x0" : : "rZ" (__val)); \
 } while (0)
 
+#define read_sysreg_s(r) ({					\
+	u64 __val;						\
+	asm volatile("mrs_s %0, " xstr(r) : "=r" (__val));	\
+	__val;							\
+})
+
+#define write_sysreg_s(v, r) do {				\
+	u64 __val = (u64)v;					\
+	asm volatile("msr_s " xstr(r) ", %x0" : : "rZ" (__val));\
+} while (0)
+
 asm(
 "	.irp	num,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30\n"
 "	.equ	.L__reg_num_x\\num, \\num\n"

From patchwork Mon Dec 16 20:47:49 2019
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 11295225
From: Eric Auger
To: eric.auger.pro@gmail.com, eric.auger@redhat.com, maz@kernel.org,
 kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org, qemu-devel@nongnu.org,
 qemu-arm@nongnu.org
Cc: drjones@redhat.com, andrew.murray@arm.com, andre.przywara@arm.com,
 peter.maydell@linaro.org, alexandru.elisei@arm.com
Subject: [kvm-unit-tests PATCH 02/10] arm: pmu: Let pmu tests take a sub-test parameter
Date: Mon, 16 Dec 2019 21:47:49 +0100
Message-Id: <20191216204757.4020-3-eric.auger@redhat.com>
In-Reply-To: <20191216204757.4020-1-eric.auger@redhat.com> References: <20191216204757.4020-1-eric.auger@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org As we intend to introduce more PMU tests, let's add a sub-test parameter that will allow to categorize them. Existing tests are in the cycle-counter category. Signed-off-by: Eric Auger Reviewed-by: Andre Przywara --- arm/pmu.c | 24 +++++++++++++++--------- arm/unittests.cfg | 7 ++++--- 2 files changed, 19 insertions(+), 12 deletions(-) diff --git a/arm/pmu.c b/arm/pmu.c index d5a03a6..e5e012d 100644 --- a/arm/pmu.c +++ b/arm/pmu.c @@ -287,22 +287,28 @@ int main(int argc, char *argv[]) { int cpi = 0; - if (argc > 1) - cpi = atol(argv[1]); - if (!pmu_probe()) { printf("No PMU found, test skipped...\n"); return report_summary(); } + if (argc < 2) + report_abort("no test specified"); + report_prefix_push("pmu"); - report(check_pmcr(), "Control register"); - report(check_cycles_increase(), - "Monotonically increasing cycle count"); - report(check_cpi(cpi), "Cycle/instruction ratio"); - - pmccntr64_test(); + if (strcmp(argv[1], "cycle-counter") == 0) { + report_prefix_push(argv[1]); + if (argc > 2) + cpi = atol(argv[2]); + report(check_pmcr(), "Control register"); + report(check_cycles_increase(), + "Monotonically increasing cycle count"); + report(check_cpi(cpi), "Cycle/instruction ratio"); + pmccntr64_test(); + } else { + report_abort("Unknown sub-test '%s'", argv[1]); + } return report_summary(); } diff --git a/arm/unittests.cfg b/arm/unittests.cfg index daeb5a0..79f0d7a 100644 --- a/arm/unittests.cfg +++ b/arm/unittests.cfg @@ -61,21 +61,22 @@ file = pci-test.flat groups = pci # Test PMU support -[pmu] +[pmu-cycle-counter] file = pmu.flat groups = pmu +extra_params = -append 'cycle-counter 0' # Test PMU support (TCG) with -icount IPC=1 #[pmu-tcg-icount-1] #file = pmu.flat -#extra_params = -icount 0 -append '1' +#extra_params = -icount 0 -append 'cycle-counter 1' #groups = pmu #accel = tcg # Test PMU support (TCG) with -icount IPC=256 #[pmu-tcg-icount-256] #file = pmu.flat -#extra_params = -icount 8 -append '256' +#extra_params = -icount 8 -append 'cycle-counter 256' #groups = pmu #accel = tcg From patchwork Mon Dec 16 20:47:50 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Auger X-Patchwork-Id: 11295223 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 59D701580 for ; Mon, 16 Dec 2019 20:48:24 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 3755A218AC for ; Mon, 16 Dec 2019 20:48:24 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="FzPSZZR3" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727531AbfLPUsX (ORCPT ); Mon, 16 Dec 2019 15:48:23 -0500 Received: from us-smtp-delivery-1.mimecast.com ([207.211.31.120]:41054 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1727539AbfLPUsW (ORCPT ); Mon, 16 Dec 2019 15:48:22 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1576529302; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: 
From: Eric Auger
To: eric.auger.pro@gmail.com, eric.auger@redhat.com, maz@kernel.org,
 kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org, qemu-devel@nongnu.org,
 qemu-arm@nongnu.org
Cc: drjones@redhat.com, andrew.murray@arm.com, andre.przywara@arm.com,
 peter.maydell@linaro.org, alexandru.elisei@arm.com
Subject: [kvm-unit-tests PATCH 03/10] arm: pmu: Add a pmu struct
Date: Mon, 16 Dec 2019 21:47:50 +0100
Message-Id: <20191216204757.4020-4-eric.auger@redhat.com>
In-Reply-To: <20191216204757.4020-1-eric.auger@redhat.com>
References: <20191216204757.4020-1-eric.auger@redhat.com>

This struct stores information potentially used by all tests, such as the
PMU version, the read-only part of the PMCR, the number of implemented
event counters, and so on.
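As a sketch of how later patches in the series consume this state (the
helper names below are illustrative, not part of the patch): keeping the
read-only PMCR bits in pmu.pmcr_ro lets a test OR in only the control bits
it actually wants to toggle.

/* Illustrative only: set_pmcr(), isb() and PMU_PMCR_* already exist. */
static void pmu_start_cycle_counting(void)
{
	set_pmcr(pmu.pmcr_ro | PMU_PMCR_LC | PMU_PMCR_C | PMU_PMCR_E);
	isb();
}

static void pmu_stop_counting(void)
{
	/* restore only the read-only/ID bits, clearing PMCR.E */
	set_pmcr(pmu.pmcr_ro);
	isb();
}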
Signed-off-by: Eric Auger Reviewed-by: Andre Przywara --- arm/pmu.c | 30 +++++++++++++++++++++++++----- 1 file changed, 25 insertions(+), 5 deletions(-) diff --git a/arm/pmu.c b/arm/pmu.c index e5e012d..d24857e 100644 --- a/arm/pmu.c +++ b/arm/pmu.c @@ -33,7 +33,14 @@ #define NR_SAMPLES 10 -static unsigned int pmu_version; +struct pmu { + unsigned int version; + unsigned int nb_implemented_counters; + uint32_t pmcr_ro; +}; + +static struct pmu pmu; + #if defined(__arm__) #define ID_DFR0_PERFMON_SHIFT 24 #define ID_DFR0_PERFMON_MASK 0xf @@ -265,7 +272,7 @@ static bool check_cpi(int cpi) static void pmccntr64_test(void) { #ifdef __arm__ - if (pmu_version == 0x3) { + if (pmu.version == 0x3) { if (ERRATA(9e3f7a296940)) { write_sysreg(0xdead, PMCCNTR64); report(read_sysreg(PMCCNTR64) == 0xdead, "pmccntr64"); @@ -278,9 +285,22 @@ static void pmccntr64_test(void) /* Return FALSE if no PMU found, otherwise return TRUE */ static bool pmu_probe(void) { - pmu_version = get_pmu_version(); - report_info("PMU version: %d", pmu_version); - return pmu_version != 0 && pmu_version != 0xf; + uint32_t pmcr; + + pmu.version = get_pmu_version(); + report_info("PMU version: %d", pmu.version); + + if (pmu.version == 0 || pmu.version == 0xF) + return false; + + pmcr = get_pmcr(); + pmu.pmcr_ro = pmcr & 0xFFFFFF80; + pmu.nb_implemented_counters = + (pmcr >> PMU_PMCR_N_SHIFT) & PMU_PMCR_N_MASK; + report_info("Implements %d event counters", + pmu.nb_implemented_counters); + + return true; } int main(int argc, char *argv[]) From patchwork Mon Dec 16 20:47:51 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Auger X-Patchwork-Id: 11295227 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 584DE138D for ; Mon, 16 Dec 2019 20:48:28 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 3730124676 for ; Mon, 16 Dec 2019 20:48:28 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="EWvssBwA" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727580AbfLPUs1 (ORCPT ); Mon, 16 Dec 2019 15:48:27 -0500 Received: from us-smtp-1.mimecast.com ([207.211.31.81]:43318 "EHLO us-smtp-delivery-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1727501AbfLPUs0 (ORCPT ); Mon, 16 Dec 2019 15:48:26 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1576529305; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=RjCVoMYbyhxjOhuGwZyyYSLknIArKPWpJID9BdwLtBQ=; b=EWvssBwAfF3kiNCaCHP/LtVmyytVcJOFoi/rO5523lPY4xaj3ZKizHEEQUL7S+OxXHY/nB K0fmYedCWCmsvpc1/qV3CzmJaUClivNmrG9PUVn+sAKXW4LxltWrJ/l2s60BTdfmt+X3O/ gyOFNgVJulAaUJ5olHlrTcBUgT+5z3w= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-129-mAQz2_z2MAmqowES4roMzw-1; Mon, 16 Dec 2019 15:48:24 -0500 X-MC-Unique: mAQz2_z2MAmqowES4roMzw-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com [10.5.11.14]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by 
From: Eric Auger
To: eric.auger.pro@gmail.com, eric.auger@redhat.com, maz@kernel.org,
 kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org, qemu-devel@nongnu.org,
 qemu-arm@nongnu.org
Cc: drjones@redhat.com, andrew.murray@arm.com, andre.przywara@arm.com,
 peter.maydell@linaro.org, alexandru.elisei@arm.com
Subject: [kvm-unit-tests PATCH 04/10] arm: pmu: Check Required Event Support
Date: Mon, 16 Dec 2019 21:47:51 +0100
Message-Id: <20191216204757.4020-5-eric.auger@redhat.com>
In-Reply-To: <20191216204757.4020-1-eric.auger@redhat.com>
References: <20191216204757.4020-1-eric.auger@redhat.com>

If event counters are implemented, check that the common events required
by PMUv3 are implemented. Some are unconditionally required (SW_INCR,
CPU_CYCLES, and either INST_RETIRED or INST_SPEC). Others are only
required if the implementation implements certain other features. Check
those which are unconditionally required.

This test currently fails on TCG as neither INST_RETIRED nor INST_SPEC
is supported.

Signed-off-by: Eric Auger

---
v1 -> v2:
- add a comment to explain the PMCEID0/1 splits
---
 arm/pmu.c         | 71 +++++++++++++++++++++++++++++++++++++++++++++++
 arm/unittests.cfg |  6 ++++
 2 files changed, 77 insertions(+)

diff --git a/arm/pmu.c b/arm/pmu.c
index d24857e..d88ef22 100644
--- a/arm/pmu.c
+++ b/arm/pmu.c
@@ -101,6 +101,10 @@ static inline void precise_instrs_loop(int loop, uint32_t pmcr)
 	: [pmcr] "r" (pmcr), [z] "r" (0)
 	: "cc");
 }
+
+/* event counter tests only implemented for aarch64 */
+static void test_event_introspection(void) {}
+
 #elif defined(__aarch64__)
 #define ID_AA64DFR0_PERFMON_SHIFT 8
 #define ID_AA64DFR0_PERFMON_MASK  0xf
@@ -139,6 +143,70 @@ static inline void precise_instrs_loop(int loop, uint32_t pmcr)
 	: [pmcr] "r" (pmcr)
 	: "cc");
 }
+
+#define PMCEID1_EL0 sys_reg(11, 3, 9, 12, 7)
+
+static bool is_event_supported(uint32_t n, bool warn)
+{
+	uint64_t pmceid0 = read_sysreg(pmceid0_el0);
+	uint64_t pmceid1 = read_sysreg_s(PMCEID1_EL0);
+	bool supported;
+	uint32_t reg;
+
+	/*
+	 * The low 32 bits of PMCEID0/1 respectively describe
+	 * event support for events 0-31/32-63. Their high
+	 * 32 bits describe support for extended events
+	 * starting at 0x4000, using the same split.
+ */ + if (n >= 0x0 && n <= 0x1F) + reg = pmceid0 & 0xFFFFFFFF; + else if (n >= 0x4000 && n <= 0x401F) + reg = pmceid0 >> 32; + else if (n >= 0x20 && n <= 0x3F) + reg = pmceid1 & 0xFFFFFFFF; + else if (n >= 0x4020 && n <= 0x403F) + reg = pmceid1 >> 32; + else + abort(); + + supported = reg & (1 << n); + if (!supported && warn) + report_info("event %d is not supported", n); + return supported; +} + +static void test_event_introspection(void) +{ + bool required_events; + + if (!pmu.nb_implemented_counters) { + report_skip("No event counter, skip ..."); + return; + } + + /* PMUv3 requires an implementation includes some common events */ + required_events = is_event_supported(0x0, true) /* SW_INCR */ && + is_event_supported(0x11, true) /* CPU_CYCLES */ && + (is_event_supported(0x8, true) /* INST_RETIRED */ || + is_event_supported(0x1B, true) /* INST_PREC */); + + if (pmu.version == 0x4) { + /* ARMv8.1 PMU: STALL_FRONTEND and STALL_BACKEND are required */ + required_events = required_events || + is_event_supported(0x23, true) || + is_event_supported(0x24, true); + } + + /* + * L1D_CACHE_REFILL(0x3) and L1D_CACHE(0x4) are only required if + * L1 data / unified cache. BR_MIS_PRED(0x10), BR_PRED(0x12) are only + * required if program-flow prediction is implemented. + */ + + report(required_events, "Check required events are implemented"); +} + #endif /* @@ -326,6 +394,9 @@ int main(int argc, char *argv[]) "Monotonically increasing cycle count"); report(check_cpi(cpi), "Cycle/instruction ratio"); pmccntr64_test(); + } else if (strcmp(argv[1], "event-introspection") == 0) { + report_prefix_push(argv[1]); + test_event_introspection(); } else { report_abort("Unknown sub-test '%s'", argv[1]); } diff --git a/arm/unittests.cfg b/arm/unittests.cfg index 79f0d7a..4433ef3 100644 --- a/arm/unittests.cfg +++ b/arm/unittests.cfg @@ -66,6 +66,12 @@ file = pmu.flat groups = pmu extra_params = -append 'cycle-counter 0' +[pmu-event-introspection] +file = pmu.flat +groups = pmu +arch = arm64 +extra_params = -append 'event-introspection' + # Test PMU support (TCG) with -icount IPC=1 #[pmu-tcg-icount-1] #file = pmu.flat From patchwork Mon Dec 16 20:47:52 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Auger X-Patchwork-Id: 11295233 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 8E6F117F0 for ; Mon, 16 Dec 2019 20:48:39 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 626CA2465E for ; Mon, 16 Dec 2019 20:48:39 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="Njup6Y9f" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727607AbfLPUsi (ORCPT ); Mon, 16 Dec 2019 15:48:38 -0500 Received: from us-smtp-2.mimecast.com ([207.211.31.81]:46418 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726445AbfLPUsi (ORCPT ); Mon, 16 Dec 2019 15:48:38 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1576529317; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=qqgi4yXzByAocEYIrdhE6bBEIHbw5QcBl06XsTZ132U=; 
From: Eric Auger
To: eric.auger.pro@gmail.com, eric.auger@redhat.com, maz@kernel.org,
 kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org, qemu-devel@nongnu.org,
 qemu-arm@nongnu.org
Cc: drjones@redhat.com, andrew.murray@arm.com, andre.przywara@arm.com,
 peter.maydell@linaro.org, alexandru.elisei@arm.com
Subject: [kvm-unit-tests PATCH 05/10] arm: pmu: Basic event counter Tests
Date: Mon, 16 Dec 2019 21:47:52 +0100
Message-Id: <20191216204757.4020-6-eric.auger@redhat.com>
In-Reply-To: <20191216204757.4020-1-eric.auger@redhat.com>
References: <20191216204757.4020-1-eric.auger@redhat.com>

Add the following tests:
- event-counter-config: test event counter configuration.
- basic-event-count: program counters #0 and #1 to count two required
  events (CPU_CYCLES and INST_RETIRED respectively). Counter #0 is preset
  close enough to the 32-bit overflow limit that we can check the overflow
  bit is set after the execution of the asm loop.
- mem-access: count the MEM_ACCESS event on counters #0 and #1, with and
  without 32-bit overflow.
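A minimal sketch of the overflow technique the basic-event-count test
relies on, assuming the preset value used by the test (the helper names
are illustrative):

/*
 * Preset the counter a few events short of the 32-bit boundary; a short
 * measured loop is then enough to wrap it and latch the matching bit in
 * PMOVSCLR_EL0. 0xFFFFFFF0 is the preset the test below uses.
 */
static void preset_counter0_near_32bit_overflow(void)
{
	write_regn(pmevcntr, 0, 0xFFFFFFF0);	/* 16 events from wrapping */
}

static bool counter0_overflowed(void)
{
	return read_sysreg(pmovsclr_el0) & 0x1;
}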
Signed-off-by: Eric Auger --- arm/pmu.c | 261 ++++++++++++++++++++++++++++++++++++++++++++++ arm/unittests.cfg | 18 ++++ 2 files changed, 279 insertions(+) diff --git a/arm/pmu.c b/arm/pmu.c index d88ef22..139dae3 100644 --- a/arm/pmu.c +++ b/arm/pmu.c @@ -18,9 +18,15 @@ #include "asm/barrier.h" #include "asm/sysreg.h" #include "asm/processor.h" +#include +#include #define PMU_PMCR_E (1 << 0) +#define PMU_PMCR_P (1 << 1) #define PMU_PMCR_C (1 << 2) +#define PMU_PMCR_D (1 << 3) +#define PMU_PMCR_X (1 << 4) +#define PMU_PMCR_DP (1 << 5) #define PMU_PMCR_LC (1 << 6) #define PMU_PMCR_N_SHIFT 11 #define PMU_PMCR_N_MASK 0x1f @@ -104,6 +110,9 @@ static inline void precise_instrs_loop(int loop, uint32_t pmcr) /* event counter tests only implemented for aarch64 */ static void test_event_introspection(void) {} +static void test_event_counter_config(void) {} +static void test_basic_event_count(void) {} +static void test_mem_access(void) {} #elif defined(__aarch64__) #define ID_AA64DFR0_PERFMON_SHIFT 8 @@ -145,6 +154,32 @@ static inline void precise_instrs_loop(int loop, uint32_t pmcr) } #define PMCEID1_EL0 sys_reg(11, 3, 9, 12, 7) +#define PMCNTENSET_EL0 sys_reg(11, 3, 9, 12, 1) +#define PMCNTENCLR_EL0 sys_reg(11, 3, 9, 12, 2) + +#define PMEVTYPER_EXCLUDE_EL1 (1 << 31) +#define PMEVTYPER_EXCLUDE_EL0 (1 << 30) + +#define regn_el0(__reg, __n) __reg ## __n ## _el0 +#define write_regn(__reg, __n, __val) \ + write_sysreg((__val), __reg ## __n ## _el0) + +#define read_regn(__reg, __n) \ + read_sysreg(__reg ## __n ## _el0) + +#define print_pmevtyper(__s, __n) do { \ + uint32_t val; \ + val = read_regn(pmevtyper, __n);\ + report_info("%s pmevtyper%d=0x%x, eventcount=0x%x (p=%ld, u=%ld nsk=%ld, nsu=%ld, nsh=%ld m=%ld, mt=%ld)", \ + (__s), (__n), val, val & 0xFFFF, \ + (BIT_MASK(31) & val) >> 31, \ + (BIT_MASK(30) & val) >> 30, \ + (BIT_MASK(29) & val) >> 29, \ + (BIT_MASK(28) & val) >> 28, \ + (BIT_MASK(27) & val) >> 27, \ + (BIT_MASK(26) & val) >> 26, \ + (BIT_MASK(25) & val) >> 25); \ + } while (0) static bool is_event_supported(uint32_t n, bool warn) { @@ -207,6 +242,223 @@ static void test_event_introspection(void) report(required_events, "Check required events are implemented"); } +static inline void mem_access_loop(void *addr, int loop, uint32_t pmcr) +{ +asm volatile( + " msr pmcr_el0, %[pmcr]\n" + " isb\n" + " mov x10, %[loop]\n" + "1: sub x10, x10, #1\n" + " mov x8, %[addr]\n" + " ldr x9, [x8]\n" + " cmp x10, #0x0\n" + " b.gt 1b\n" + " msr pmcr_el0, xzr\n" + " isb\n" + : + : [addr] "r" (addr), [pmcr] "r" (pmcr), [loop] "r" (loop) + : ); +} + + +static void pmu_reset(void) +{ + /* reset all counters, counting disabled at PMCR level*/ + set_pmcr(pmu.pmcr_ro | PMU_PMCR_LC | PMU_PMCR_C | PMU_PMCR_P); + /* Disable all counters */ + write_sysreg_s(0xFFFFFFFF, PMCNTENCLR_EL0); + /* clear overflow reg */ + write_sysreg(0xFFFFFFFF, pmovsclr_el0); + /* disable overflow interrupts on all counters */ + write_sysreg(0xFFFFFFFF, pmintenclr_el1); + isb(); +} + +static void test_event_counter_config(void) +{ + int i; + + if (!pmu.nb_implemented_counters) { + report_skip("No event counter, skip ..."); + return; + } + + pmu_reset(); + + /* + * Test setting through PMESELR/PMXEVTYPER and PMEVTYPERn read, + * select counter 0 + */ + write_sysreg(1, PMSELR_EL0); + /* program this counter to count unsupported event */ + write_sysreg(0xEA, PMXEVTYPER_EL0); + write_sysreg(0xdeadbeef, PMXEVCNTR_EL0); + report((read_regn(pmevtyper, 1) & 0xFFF) == 0xEA, + "PMESELR/PMXEVTYPER/PMEVTYPERn"); + report((read_regn(pmevcntr, 1) == 
0xdeadbeef), + "PMESELR/PMXEVCNTR/PMEVCNTRn"); + + /* try configure an unsupported event within the range [0x0, 0x3F] */ + for (i = 0; i <= 0x3F; i++) { + if (!is_event_supported(i, false)) + goto test_unsupported; + } + report_skip("pmevtyper: all events within [0x0, 0x3F] are supported"); + +test_unsupported: + /* select counter 0 */ + write_sysreg(0, PMSELR_EL0); + /* program this counter to count unsupported event */ + write_sysreg(i, PMXEVCNTR_EL0); + /* read the counter value */ + read_sysreg(PMXEVCNTR_EL0); + report(read_sysreg(PMXEVCNTR_EL0) == i, + "read of a counter programmed with unsupported event"); + +} + +static bool satisfy_prerequisites(uint32_t *events, unsigned int nb_events) +{ + int i; + + if (pmu.nb_implemented_counters < nb_events) { + report_skip("Skip test as number of counters is too small (%d)", + pmu.nb_implemented_counters); + return false; + } + + for (i = 0; i < nb_events; i++) { + if (!is_event_supported(events[i], false)) { + report_skip("Skip test as event %d is not supported", + events[i]); + return false; + } + } + return true; +} + +static void test_basic_event_count(void) +{ + uint32_t implemented_counter_mask, non_implemented_counter_mask; + uint32_t counter_mask; + uint32_t events[] = { + 0x11, /* CPU_CYCLES */ + 0x8, /* INST_RETIRED */ + }; + + if (!satisfy_prerequisites(events, ARRAY_SIZE(events))) + return; + + implemented_counter_mask = (1 << pmu.nb_implemented_counters) - 1; + non_implemented_counter_mask = ~((1 << 31) | implemented_counter_mask); + counter_mask = implemented_counter_mask | non_implemented_counter_mask; + + write_regn(pmevtyper, 0, events[0] | PMEVTYPER_EXCLUDE_EL0); + write_regn(pmevtyper, 1, events[1] | PMEVTYPER_EXCLUDE_EL0); + + /* disable all counters */ + write_sysreg_s(0xFFFFFFFF, PMCNTENCLR_EL0); + report(!read_sysreg_s(PMCNTENCLR_EL0) && !read_sysreg_s(PMCNTENSET_EL0), + "pmcntenclr: disable all counters"); + + /* + * clear cycle and all event counters and allow counter enablement + * through PMCNTENSET. LC is RES1. + */ + set_pmcr(pmu.pmcr_ro | PMU_PMCR_LC | PMU_PMCR_C | PMU_PMCR_P); + isb(); + report(get_pmcr() == (pmu.pmcr_ro | PMU_PMCR_LC), "pmcr: reset counters"); + + /* Preset counter #0 to 0xFFFFFFF0 to trigger an overflow interrupt */ + write_regn(pmevcntr, 0, 0xFFFFFFF0); + report(read_regn(pmevcntr, 0) == 0xFFFFFFF0, + "counter #0 preset to 0xFFFFFFF0"); + report(!read_regn(pmevcntr, 1), "counter #1 is 0"); + + /* + * Enable all implemented counters and also attempt to enable + * not supported counters. 
Counting still is disabled by !PMCR.E + */ + write_sysreg_s(counter_mask, PMCNTENSET_EL0); + + /* check only those implemented are enabled */ + report((read_sysreg_s(PMCNTENSET_EL0) == read_sysreg_s(PMCNTENCLR_EL0)) && + (read_sysreg_s(PMCNTENSET_EL0) == implemented_counter_mask), + "pmcntenset: enabled implemented_counters"); + + /* Disable all counters but counters #0 and #1 */ + write_sysreg_s(~0x3, PMCNTENCLR_EL0); + report((read_sysreg_s(PMCNTENSET_EL0) == read_sysreg_s(PMCNTENCLR_EL0)) && + (read_sysreg_s(PMCNTENSET_EL0) == 0x3), + "pmcntenset: just enabled #0 and #1"); + + /* clear overflow register */ + write_sysreg(0xFFFFFFFF, pmovsclr_el0); + report(!read_sysreg(pmovsclr_el0), "check overflow reg is 0"); + + /* disable overflow interrupts on all counters*/ + write_sysreg(0xFFFFFFFF, pmintenclr_el1); + report(!read_sysreg(pmintenclr_el1), + "pmintenclr_el1=0, all interrupts disabled"); + + /* enable overflow interrupts on all event counters */ + write_sysreg(implemented_counter_mask | non_implemented_counter_mask, + pmintenset_el1); + report(read_sysreg(pmintenset_el1) == implemented_counter_mask, + "overflow interrupts enabled on all implemented counters"); + + /* Set PMCR.E, execute asm code and unset PMCR.E */ + precise_instrs_loop(20, pmu.pmcr_ro | PMU_PMCR_E); + + report_info("counter #0 is 0x%lx (CPU_CYCLES)", + read_regn(pmevcntr, 0)); + report_info("counter #1 is 0x%lx (INST_RETIRED)", + read_regn(pmevcntr, 1)); + + report_info("overflow reg = 0x%lx", read_sysreg(pmovsclr_el0)); + report(read_sysreg(pmovsclr_el0) & 0x1, + "check overflow happened on #0 only"); +} + +static void test_mem_access(void) +{ + void *addr = malloc(PAGE_SIZE); + uint32_t events[] = { + 0x13, /* MEM_ACCESS */ + 0x13, /* MEM_ACCESS */ + }; + + if (!satisfy_prerequisites(events, ARRAY_SIZE(events))) + return; + + pmu_reset(); + + write_regn(pmevtyper, 0, events[0] | PMEVTYPER_EXCLUDE_EL0); + write_regn(pmevtyper, 1, events[1] | PMEVTYPER_EXCLUDE_EL0); + write_sysreg_s(0x3, PMCNTENSET_EL0); + isb(); + mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E); + report_info("counter #0 is %ld (MEM_ACCESS)", read_regn(pmevcntr, 0)); + report_info("counter #1 is %ld (MEM_ACCESS)", read_regn(pmevcntr, 1)); + /* We may not measure exactly 20 mem access. 
Depends on the platform */ + report((read_regn(pmevcntr, 0) == read_regn(pmevcntr, 1)) && + (read_regn(pmevcntr, 0) >= 20) && !read_sysreg(pmovsclr_el0), + "Ran 20 mem accesses"); + + pmu_reset(); + + write_regn(pmevcntr, 0, 0xFFFFFFFA); + write_regn(pmevcntr, 1, 0xFFFFFFF0); + write_sysreg_s(0x3, PMCNTENSET_EL0); + isb(); + mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E); + report(read_sysreg(pmovsclr_el0) == 0x3, + "Ran 20 mem accesses with expected overflows on both counters"); + report_info("cnt#0 = %ld cnt#1=%ld overflow=0x%lx", + read_regn(pmevcntr, 0), read_regn(pmevcntr, 1), + read_sysreg(pmovsclr_el0)); +} + #endif /* @@ -397,6 +649,15 @@ int main(int argc, char *argv[]) } else if (strcmp(argv[1], "event-introspection") == 0) { report_prefix_push(argv[1]); test_event_introspection(); + } else if (strcmp(argv[1], "event-counter-config") == 0) { + report_prefix_push(argv[1]); + test_event_counter_config(); + } else if (strcmp(argv[1], "basic-event-count") == 0) { + report_prefix_push(argv[1]); + test_basic_event_count(); + } else if (strcmp(argv[1], "mem-access") == 0) { + report_prefix_push(argv[1]); + test_mem_access(); } else { report_abort("Unknown sub-test '%s'", argv[1]); } diff --git a/arm/unittests.cfg b/arm/unittests.cfg index 4433ef3..7a59403 100644 --- a/arm/unittests.cfg +++ b/arm/unittests.cfg @@ -72,6 +72,24 @@ groups = pmu arch = arm64 extra_params = -append 'event-introspection' +[pmu-event-counter-config] +file = pmu.flat +groups = pmu +arch = arm64 +extra_params = -append 'event-counter-config' + +[pmu-basic-event-count] +file = pmu.flat +groups = pmu +arch = arm64 +extra_params = -append 'basic-event-count' + +[pmu-mem-access] +file = pmu.flat +groups = pmu +arch = arm64 +extra_params = -append 'mem-access' + # Test PMU support (TCG) with -icount IPC=1 #[pmu-tcg-icount-1] #file = pmu.flat From patchwork Mon Dec 16 20:47:53 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Auger X-Patchwork-Id: 11295229 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 784F4138D for ; Mon, 16 Dec 2019 20:48:35 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 4C6D72465E for ; Mon, 16 Dec 2019 20:48:35 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="a7B9y4NZ" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727594AbfLPUse (ORCPT ); Mon, 16 Dec 2019 15:48:34 -0500 Received: from us-smtp-1.mimecast.com ([207.211.31.81]:43995 "EHLO us-smtp-delivery-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1727501AbfLPUse (ORCPT ); Mon, 16 Dec 2019 15:48:34 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1576529312; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=tjYVs+BNNU240LHN8Sh/MBUq5QkpqVGcrM9TYOWuLzo=; b=a7B9y4NZVrX1mWetk7GyO+COcOscsiQ9+6FLayeVndni+q/qJZFvBYWpzqCtLFrQF0xTAE jawjG4swE4JilDrsU8MN+z5e8EfsBjlA5Q53val0bCnDKBAOzXWkukdgX36xS485z8g2M6 Gc6RD8JFBX3EtIl4efelECJZGw59e0M= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by 
relay.mimecast.com with ESMTP id us-mta-401-N7zJNoN6MZy9JS5P6-AjUQ-1; Mon, 16 Dec 2019 15:48:31 -0500 X-MC-Unique: N7zJNoN6MZy9JS5P6-AjUQ-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com [10.5.11.14]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id CEEF78024EB; Mon, 16 Dec 2019 20:48:29 +0000 (UTC) Received: from laptop.redhat.com (ovpn-116-117.ams2.redhat.com [10.36.116.117]) by smtp.corp.redhat.com (Postfix) with ESMTP id 251135D9C9; Mon, 16 Dec 2019 20:48:26 +0000 (UTC) From: Eric Auger To: eric.auger.pro@gmail.com, eric.auger@redhat.com, maz@kernel.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org, qemu-devel@nongnu.org, qemu-arm@nongnu.org Cc: drjones@redhat.com, andrew.murray@arm.com, andre.przywara@arm.com, peter.maydell@linaro.org, alexandru.elisei@arm.com Subject: [kvm-unit-tests PATCH 06/10] arm: pmu: Test chained counter Date: Mon, 16 Dec 2019 21:47:53 +0100 Message-Id: <20191216204757.4020-7-eric.auger@redhat.com> In-Reply-To: <20191216204757.4020-1-eric.auger@redhat.com> References: <20191216204757.4020-1-eric.auger@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Add 2 tests exercising chained counters. The first one uses CPU_CYCLES and the second one uses SW_INCR. Signed-off-by: Eric Auger --- arm/pmu.c | 128 ++++++++++++++++++++++++++++++++++++++++++++++ arm/unittests.cfg | 12 +++++ 2 files changed, 140 insertions(+) diff --git a/arm/pmu.c b/arm/pmu.c index 139dae3..ad98771 100644 --- a/arm/pmu.c +++ b/arm/pmu.c @@ -113,6 +113,8 @@ static void test_event_introspection(void) {} static void test_event_counter_config(void) {} static void test_basic_event_count(void) {} static void test_mem_access(void) {} +static void test_chained_counters(void) {} +static void test_chained_sw_incr(void) {} #elif defined(__aarch64__) #define ID_AA64DFR0_PERFMON_SHIFT 8 @@ -459,6 +461,126 @@ static void test_mem_access(void) read_sysreg(pmovsclr_el0)); } +static void test_chained_counters(void) +{ + uint32_t events[] = { 0x11 /* CPU_CYCLES */, 0x1E /* CHAIN */}; + + if (!satisfy_prerequisites(events, ARRAY_SIZE(events))) + return; + + pmu_reset(); + + write_regn(pmevtyper, 0, events[0] | PMEVTYPER_EXCLUDE_EL0); + write_regn(pmevtyper, 1, events[1] | PMEVTYPER_EXCLUDE_EL0); + /* enable counters #0 and #1 */ + write_sysreg_s(0x3, PMCNTENSET_EL0); + /* preset counter #0 at 0xFFFFFFF0 */ + write_regn(pmevcntr, 0, 0xFFFFFFF0); + + precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E); + + report(read_regn(pmevcntr, 1) == 1, "CHAIN counter #1 incremented"); + report(!read_sysreg(pmovsclr_el0), "check no overflow is recorded"); + + /* test 64b overflow */ + + pmu_reset(); + write_sysreg_s(0x3, PMCNTENSET_EL0); + + write_regn(pmevcntr, 0, 0xFFFFFFF0); + write_regn(pmevcntr, 1, 0x1); + precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E); + report_info("overflow reg = 0x%lx", read_sysreg(pmovsclr_el0)); + report(read_regn(pmevcntr, 1) == 2, "CHAIN counter #1 incremented"); + report(!read_sysreg(pmovsclr_el0), "check no overflow is recorded"); + + write_regn(pmevcntr, 0, 0xFFFFFFF0); + write_regn(pmevcntr, 1, 0xFFFFFFFF); + + precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E); + report_info("overflow reg = 0x%lx", read_sysreg(pmovsclr_el0)); + report(!read_regn(pmevcntr, 1), "CHAIN counter #1 wrapped"); + report(read_sysreg(pmovsclr_el0) == 0x2, + "check no 
overflow is recorded"); +} + +static void test_chained_sw_incr(void) +{ + uint32_t events[] = { 0x0 /* SW_INCR */, 0x0 /* SW_INCR */}; + int i; + + if (!satisfy_prerequisites(events, ARRAY_SIZE(events))) + return; + + pmu_reset(); + + write_regn(pmevtyper, 0, events[0] | PMEVTYPER_EXCLUDE_EL0); + write_regn(pmevtyper, 1, events[1] | PMEVTYPER_EXCLUDE_EL0); + /* enable counters #0 and #1 */ + write_sysreg_s(0x3, PMCNTENSET_EL0); + + /* preset counter #0 at 0xFFFFFFF0 */ + write_regn(pmevcntr, 0, 0xFFFFFFF0); + + for (i = 0; i < 100; i++) + write_sysreg(0x1, pmswinc_el0); + + report_info("SW_INCR counter #0 has value %ld", read_regn(pmevcntr, 0)); + report(read_regn(pmevcntr, 0) == 0xFFFFFFF0, + "PWSYNC does not increment if PMCR.E is unset"); + + pmu_reset(); + + write_regn(pmevcntr, 0, 0xFFFFFFF0); + write_sysreg_s(0x3, PMCNTENSET_EL0); + set_pmcr(pmu.pmcr_ro | PMU_PMCR_E); + + for (i = 0; i < 100; i++) + write_sysreg(0x3, pmswinc_el0); + + report(read_regn(pmevcntr, 0) == 84, "counter #1 after + 100 SW_INCR"); + report(read_regn(pmevcntr, 1) == 100, + "counter #0 after + 100 SW_INCR"); + report_info("counter values after 100 SW_INCR #0=%ld #1=%ld", + read_regn(pmevcntr, 0), read_regn(pmevcntr, 1)); + report(read_sysreg(pmovsclr_el0) == 0x1, + "overflow reg after 100 SW_INCR"); + + /* 64b SW_INCR */ + pmu_reset(); + + events[1] = 0x1E /* CHAIN */; + write_regn(pmevtyper, 1, events[1] | PMEVTYPER_EXCLUDE_EL0); + write_regn(pmevcntr, 0, 0xFFFFFFF0); + write_sysreg_s(0x3, PMCNTENSET_EL0); + set_pmcr(pmu.pmcr_ro | PMU_PMCR_E); + for (i = 0; i < 100; i++) + write_sysreg(0x3, pmswinc_el0); + + report(!read_sysreg(pmovsclr_el0) && (read_regn(pmevcntr, 1) == 1), + "overflow reg after 100 SW_INCR/CHAIN"); + report_info("overflow=0x%lx, #0=%ld #1=%ld", read_sysreg(pmovsclr_el0), + read_regn(pmevcntr, 0), read_regn(pmevcntr, 1)); + + /* 64b SW_INCR and overflow on CHAIN counter*/ + pmu_reset(); + + write_regn(pmevtyper, 1, events[1] | PMEVTYPER_EXCLUDE_EL0); + write_regn(pmevcntr, 0, 0xFFFFFFF0); + write_regn(pmevcntr, 1, 0xFFFFFFFF); + write_sysreg_s(0x3, PMCNTENSET_EL0); + set_pmcr(pmu.pmcr_ro | PMU_PMCR_E); + for (i = 0; i < 100; i++) + write_sysreg(0x3, pmswinc_el0); + + report((read_sysreg(pmovsclr_el0) == 0x2) && + (read_regn(pmevcntr, 1) == 0) && + (read_regn(pmevcntr, 0) == 84), + "overflow reg after 100 SW_INCR/CHAIN"); + report_info("overflow=0x%lx, #0=%ld #1=%ld", read_sysreg(pmovsclr_el0), + read_regn(pmevcntr, 0), read_regn(pmevcntr, 1)); +} + #endif /* @@ -658,6 +780,12 @@ int main(int argc, char *argv[]) } else if (strcmp(argv[1], "mem-access") == 0) { report_prefix_push(argv[1]); test_mem_access(); + } else if (strcmp(argv[1], "chained-counters") == 0) { + report_prefix_push(argv[1]); + test_chained_counters(); + } else if (strcmp(argv[1], "chained-sw-incr") == 0) { + report_prefix_push(argv[1]); + test_chained_sw_incr(); } else { report_abort("Unknown sub-test '%s'", argv[1]); } diff --git a/arm/unittests.cfg b/arm/unittests.cfg index 7a59403..1bd4319 100644 --- a/arm/unittests.cfg +++ b/arm/unittests.cfg @@ -90,6 +90,18 @@ groups = pmu arch = arm64 extra_params = -append 'mem-access' +[pmu-chained-counters] +file = pmu.flat +groups = pmu +arch = arm64 +extra_params = -append 'chained-counters' + +[pmu-chained-sw-incr] +file = pmu.flat +groups = pmu +arch = arm64 +extra_params = -append 'chained-sw-incr' + # Test PMU support (TCG) with -icount IPC=1 #[pmu-tcg-icount-1] #file = pmu.flat From patchwork Mon Dec 16 20:47:54 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 11295231
From: Eric Auger
To: eric.auger.pro@gmail.com, eric.auger@redhat.com, maz@kernel.org,
 kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org, qemu-devel@nongnu.org,
 qemu-arm@nongnu.org
Cc: drjones@redhat.com, andrew.murray@arm.com, andre.przywara@arm.com,
 peter.maydell@linaro.org, alexandru.elisei@arm.com
Subject: [kvm-unit-tests PATCH 07/10] arm: pmu: test 32-bit <-> 64-bit transitions
Date: Mon, 16 Dec 2019 21:47:54 +0100
Message-Id: <20191216204757.4020-8-eric.auger@redhat.com>
In-Reply-To: <20191216204757.4020-1-eric.auger@redhat.com>
References: <20191216204757.4020-1-eric.auger@redhat.com>

Test configurations where we transition from 32-bit to 64-bit counters and
back. Also test configurations where chained counters are programmed but
only one of the two counters is enabled.
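For reference, a sketch of what promoting counters #0/#1 to a single
64-bit counter amounts to in this series; the helper name is illustrative,
and the tests below open-code these steps before flipping
PMCNTENSET/PMEVTYPER mid-run:

/* Program the odd counter with the CHAIN event (0x1E) and enable both,
 * so counter #1 increments when counter #0 wraps. */
static void chain_counters_0_1(uint32_t even_event)
{
	write_regn(pmevtyper, 0, even_event | PMEVTYPER_EXCLUDE_EL0);
	write_regn(pmevtyper, 1, 0x1E /* CHAIN */ | PMEVTYPER_EXCLUDE_EL0);
	write_sysreg_s(0x3, PMCNTENSET_EL0);
	isb();
}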
Signed-off-by: Eric Auger --- arm/pmu.c | 135 ++++++++++++++++++++++++++++++++++++++++++++++ arm/unittests.cfg | 6 +++ 2 files changed, 141 insertions(+) diff --git a/arm/pmu.c b/arm/pmu.c index ad98771..8506544 100644 --- a/arm/pmu.c +++ b/arm/pmu.c @@ -115,6 +115,7 @@ static void test_basic_event_count(void) {} static void test_mem_access(void) {} static void test_chained_counters(void) {} static void test_chained_sw_incr(void) {} +static void test_chain_promotion(void) {} #elif defined(__aarch64__) #define ID_AA64DFR0_PERFMON_SHIFT 8 @@ -581,6 +582,137 @@ static void test_chained_sw_incr(void) read_regn(pmevcntr, 0), read_regn(pmevcntr, 1)); } +static void test_chain_promotion(void) +{ + uint32_t events[] = { 0x13 /* MEM_ACCESS */, 0x1E /* CHAIN */}; + void *addr = malloc(PAGE_SIZE); + + if (!satisfy_prerequisites(events, ARRAY_SIZE(events))) + return; + + /* Only enable CHAIN counter */ + pmu_reset(); + write_regn(pmevtyper, 0, events[0] | PMEVTYPER_EXCLUDE_EL0); + write_regn(pmevtyper, 1, events[1] | PMEVTYPER_EXCLUDE_EL0); + write_sysreg_s(0x2, PMCNTENSET_EL0); + isb(); + + mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E); + report(!read_regn(pmevcntr, 0), + "chain counter not counting if even counter is disabled"); + + /* Only enable even counter */ + pmu_reset(); + write_regn(pmevcntr, 0, 0xFFFFFFF0); + write_sysreg_s(0x1, PMCNTENSET_EL0); + isb(); + + mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E); + report(!read_regn(pmevcntr, 1) && (read_sysreg(pmovsclr_el0) == 0x1), + "odd counter did not increment on overflow if disabled"); + report_info("MEM_ACCESS counter #0 has value %ld", + read_regn(pmevcntr, 0)); + report_info("CHAIN counter #1 has value %ld", + read_regn(pmevcntr, 1)); + report_info("overflow counter %ld", read_sysreg(pmovsclr_el0)); + + /* start at 0xFFFFFFDC, +20 with CHAIN enabled, +20 with CHAIN disabled */ + pmu_reset(); + write_sysreg_s(0x3, PMCNTENSET_EL0); + write_regn(pmevcntr, 0, 0xFFFFFFDC); + isb(); + + mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E); + report_info("MEM_ACCESS counter #0 has value 0x%lx", + read_regn(pmevcntr, 0)); + + /* disable the CHAIN event */ + write_sysreg_s(0x2, PMCNTENCLR_EL0); + mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E); + report_info("MEM_ACCESS counter #0 has value 0x%lx", + read_regn(pmevcntr, 0)); + report(read_sysreg(pmovsclr_el0) == 0x1, + "should have triggered an overflow on #0"); + report(!read_regn(pmevcntr, 1), + "CHAIN counter #1 shouldn't have incremented"); + + /* start at 0xFFFFFFDC, +20 with CHAIN disabled, +20 with CHAIN enabled */ + + pmu_reset(); + write_sysreg_s(0x1, PMCNTENSET_EL0); + write_regn(pmevcntr, 0, 0xFFFFFFDC); + isb(); + report_info("counter #0 = 0x%lx, counter #1 = 0x%lx overflow=0x%lx", + read_regn(pmevcntr, 0), read_regn(pmevcntr, 1), + read_sysreg(pmovsclr_el0)); + + mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E); + report_info("MEM_ACCESS counter #0 has value 0x%lx", + read_regn(pmevcntr, 0)); + + /* enable the CHAIN event */ + write_sysreg_s(0x3, PMCNTENSET_EL0); + isb(); + mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E); + report_info("MEM_ACCESS counter #0 has value 0x%lx", + read_regn(pmevcntr, 0)); + + report((read_regn(pmevcntr, 1) == 1) && !read_sysreg(pmovsclr_el0), + "CHAIN counter #1 should have incremented and no overflow expected"); + + report_info("CHAIN counter #1 = 0x%lx, overflow=0x%lx", + read_regn(pmevcntr, 1), read_sysreg(pmovsclr_el0)); + + /* start as MEM_ACCESS/CPU_CYCLES and move to CHAIN/MEM_ACCESS */ + pmu_reset(); + write_regn(pmevtyper, 
0, 0x13 /* MEM_ACCESS */ | PMEVTYPER_EXCLUDE_EL0); + write_regn(pmevtyper, 1, 0x11 /* CPU_CYCLES */ | PMEVTYPER_EXCLUDE_EL0); + write_sysreg_s(0x3, PMCNTENSET_EL0); + write_regn(pmevcntr, 0, 0xFFFFFFDC); + isb(); + + mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E); + report_info("MEM_ACCESS counter #0 has value 0x%lx", + read_regn(pmevcntr, 0)); + + /* 0 becomes CHAINED */ + write_sysreg_s(0x0, PMCNTENSET_EL0); + write_regn(pmevtyper, 1, 0x1E /* CHAIN */ | PMEVTYPER_EXCLUDE_EL0); + write_sysreg_s(0x3, PMCNTENSET_EL0); + + mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E); + report_info("MEM_ACCESS counter #0 has value 0x%lx", + read_regn(pmevcntr, 0)); + + report((read_regn(pmevcntr, 1) == 1) && !read_sysreg(pmovsclr_el0), + "CHAIN counter #1 should have incremented and no overflow expected"); + + report_info("CHAIN counter #1 = 0x%lx, overflow=0x%lx", + read_regn(pmevcntr, 1), read_sysreg(pmovsclr_el0)); + + /* start as CHAIN/MEM_ACCESS and move to MEM_ACCESS/CPU_CYCLES */ + pmu_reset(); + write_regn(pmevtyper, 0, 0x13 /* MEM_ACCESS */ | PMEVTYPER_EXCLUDE_EL0); + write_regn(pmevtyper, 1, 0x1E /* CHAIN */ | PMEVTYPER_EXCLUDE_EL0); + write_regn(pmevcntr, 0, 0xFFFFFFDC); + write_sysreg_s(0x3, PMCNTENSET_EL0); + + mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E); + report_info("counter #0=0x%lx, counter #1=0x%lx", + read_regn(pmevcntr, 0), read_regn(pmevcntr, 1)); + + write_sysreg_s(0x0, PMCNTENSET_EL0); + write_regn(pmevtyper, 1, 0x11 /* CPU_CYCLES */ | PMEVTYPER_EXCLUDE_EL0); + write_sysreg_s(0x3, PMCNTENSET_EL0); + + mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E); + report(read_sysreg(pmovsclr_el0) == 1, + "overflow is expected on counter 0"); + report_info("counter #0=0x%lx, counter #1=0x%lx overflow=0x%lx", + read_regn(pmevcntr, 0), read_regn(pmevcntr, 1), + read_sysreg(pmovsclr_el0)); +} + #endif /* @@ -786,6 +918,9 @@ int main(int argc, char *argv[]) } else if (strcmp(argv[1], "chained-sw-incr") == 0) { report_prefix_push(argv[1]); test_chained_sw_incr(); + } else if (strcmp(argv[1], "chain-promotion") == 0) { + report_prefix_push(argv[1]); + test_chain_promotion(); } else { report_abort("Unknown sub-test '%s'", argv[1]); } diff --git a/arm/unittests.cfg b/arm/unittests.cfg index 1bd4319..eb6e87e 100644 --- a/arm/unittests.cfg +++ b/arm/unittests.cfg @@ -102,6 +102,12 @@ groups = pmu arch = arm64 extra_params = -append 'chained-sw-incr' +[pmu-chain-promotion] +file = pmu.flat +groups = pmu +arch = arm64 +extra_params = -append 'chain-promotion' + # Test PMU support (TCG) with -icount IPC=1 #[pmu-tcg-icount-1] #file = pmu.flat From patchwork Mon Dec 16 20:47:55 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Auger X-Patchwork-Id: 11295239 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 0870C138D for ; Mon, 16 Dec 2019 20:49:00 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id DB84721D7D for ; Mon, 16 Dec 2019 20:48:59 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="coEMHF6x" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726885AbfLPUs7 (ORCPT ); Mon, 16 Dec 2019 15:48:59 -0500 Received: from us-smtp-1.mimecast.com ([205.139.110.61]:32576 "EHLO us-smtp-delivery-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) 
From: Eric Auger
To: eric.auger.pro@gmail.com, eric.auger@redhat.com, maz@kernel.org,
 kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org, qemu-devel@nongnu.org,
 qemu-arm@nongnu.org
Cc: drjones@redhat.com, andrew.murray@arm.com, andre.przywara@arm.com,
 peter.maydell@linaro.org, alexandru.elisei@arm.com
Subject: [kvm-unit-tests PATCH 08/10] arm: gic: Provide per-IRQ helper functions
Date: Mon, 16 Dec 2019 21:47:55 +0100
Message-Id: <20191216204757.4020-9-eric.auger@redhat.com>
In-Reply-To: <20191216204757.4020-1-eric.auger@redhat.com>
References: <20191216204757.4020-1-eric.auger@redhat.com>

From: Andre Przywara

A common theme when accessing per-IRQ parameters in the GIC distributor
is to set fields of a certain bit width in a range of MMIO registers.
Examples are the enabled status (one bit per IRQ), the level/edge
configuration (2 bits per IRQ) or the priority (8 bits per IRQ).

Add a generic helper function which is able to mask and set the
respective number of bits, given the IRQ number and the MMIO offset.
Provide wrappers using this function to easily allow configuring an IRQ.

For now assume that private IRQ numbers always refer to the current CPU.
In a GICv2, accessing the "other" private IRQs is not easily doable (the
registers are banked per CPU on the same MMIO address), so we impose the
same limitation on GICv3, even though those registers are not banked
there anymore.
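As a concrete illustration of the field arithmetic: with 8 bits per IRQ,
32/8 = 4 IRQs share each GICD_IPRIORITYR word, so IRQ 35 lives at offset
GICD_IPRIORITYR + (35/4)*4 with a shift of (35%4)*8 bits. A typical caller
of the new wrappers could then look like the following sketch (the
function name is illustrative):

/* Sketch: per-IRQ setup for a shared peripheral interrupt. */
static void configure_spi(int irq, u8 prio, int cpu)
{
	gic_set_irq_priority(irq, prio);	/* 8-bit field, read-modify-write */
	gic_set_irq_group(irq, 1);		/* 1-bit field in GICD_IGROUPR */
	gic_set_irq_target(irq, cpu);		/* GICD_ITARGETSR (v2) / GICD_IROUTER (v3) */
	gic_enable_irq(irq);			/* set-only write to GICD_ISENABLER */
}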
Signed-off-by: Andre Przywara --- initialize reg --- lib/arm/asm/gic-v3.h | 2 + lib/arm/asm/gic.h | 9 +++++ lib/arm/gic.c | 90 ++++++++++++++++++++++++++++++++++++++++++++ 3 files changed, 101 insertions(+) diff --git a/lib/arm/asm/gic-v3.h b/lib/arm/asm/gic-v3.h index 347be2f..4a445a5 100644 --- a/lib/arm/asm/gic-v3.h +++ b/lib/arm/asm/gic-v3.h @@ -23,6 +23,8 @@ #define GICD_CTLR_ENABLE_G1A (1U << 1) #define GICD_CTLR_ENABLE_G1 (1U << 0) +#define GICD_IROUTER 0x6000 + /* Re-Distributor registers, offsets from RD_base */ #define GICR_TYPER 0x0008 diff --git a/lib/arm/asm/gic.h b/lib/arm/asm/gic.h index 1fc10a0..21cdb58 100644 --- a/lib/arm/asm/gic.h +++ b/lib/arm/asm/gic.h @@ -15,6 +15,7 @@ #define GICD_IIDR 0x0008 #define GICD_IGROUPR 0x0080 #define GICD_ISENABLER 0x0100 +#define GICD_ICENABLER 0x0180 #define GICD_ISPENDR 0x0200 #define GICD_ICPENDR 0x0280 #define GICD_ISACTIVER 0x0300 @@ -73,5 +74,13 @@ extern void gic_write_eoir(u32 irqstat); extern void gic_ipi_send_single(int irq, int cpu); extern void gic_ipi_send_mask(int irq, const cpumask_t *dest); +void gic_set_irq_bit(int irq, int offset); +void gic_enable_irq(int irq); +void gic_disable_irq(int irq); +void gic_set_irq_priority(int irq, u8 prio); +void gic_set_irq_target(int irq, int cpu); +void gic_set_irq_group(int irq, int group); +int gic_get_irq_group(int irq); + #endif /* !__ASSEMBLY__ */ #endif /* _ASMARM_GIC_H_ */ diff --git a/lib/arm/gic.c b/lib/arm/gic.c index 9430116..aa9cb86 100644 --- a/lib/arm/gic.c +++ b/lib/arm/gic.c @@ -146,3 +146,93 @@ void gic_ipi_send_mask(int irq, const cpumask_t *dest) assert(gic_common_ops && gic_common_ops->ipi_send_mask); gic_common_ops->ipi_send_mask(irq, dest); } + +enum gic_bit_access { + ACCESS_READ, + ACCESS_SET, + ACCESS_RMW +}; + +static u8 gic_masked_irq_bits(int irq, int offset, int bits, u8 value, + enum gic_bit_access access) +{ + void *base; + int split = 32 / bits; + int shift = (irq % split) * bits; + u32 reg = 0, mask = ((1U << bits) - 1) << shift; + + switch (gic_version()) { + case 2: + base = gicv2_dist_base(); + break; + case 3: + if (irq < 32) + base = gicv3_sgi_base(); + else + base = gicv3_dist_base(); + break; + default: + return 0; + } + base += offset + (irq / split) * 4; + + switch (access) { + case ACCESS_READ: + return (readl(base) & mask) >> shift; + case ACCESS_SET: + reg = 0; + break; + case ACCESS_RMW: + reg = readl(base) & ~mask; + break; + } + + writel(reg | ((u32)value << shift), base); + + return 0; +} + +void gic_set_irq_bit(int irq, int offset) +{ + gic_masked_irq_bits(irq, offset, 1, 1, ACCESS_SET); +} + +void gic_enable_irq(int irq) +{ + gic_set_irq_bit(irq, GICD_ISENABLER); +} + +void gic_disable_irq(int irq) +{ + gic_set_irq_bit(irq, GICD_ICENABLER); +} + +void gic_set_irq_priority(int irq, u8 prio) +{ + gic_masked_irq_bits(irq, GICD_IPRIORITYR, 8, prio, ACCESS_RMW); +} + +void gic_set_irq_target(int irq, int cpu) +{ + if (irq < 32) + return; + + if (gic_version() == 2) { + gic_masked_irq_bits(irq, GICD_ITARGETSR, 8, 1U << cpu, + ACCESS_RMW); + + return; + } + + writeq(cpus[cpu], gicv3_dist_base() + GICD_IROUTER + irq * 8); +} + +void gic_set_irq_group(int irq, int group) +{ + gic_masked_irq_bits(irq, GICD_IGROUPR, 1, group, ACCESS_RMW); +} + +int gic_get_irq_group(int irq) +{ + return gic_masked_irq_bits(irq, GICD_IGROUPR, 1, 0, ACCESS_READ); +} From patchwork Mon Dec 16 20:47:56 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Auger X-Patchwork-Id: 11295235 Return-Path: 
Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 363751580 for ; Mon, 16 Dec 2019 20:48:45 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 1400F24655 for ; Mon, 16 Dec 2019 20:48:45 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="KWureBRn" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727613AbfLPUso (ORCPT ); Mon, 16 Dec 2019 15:48:44 -0500 Received: from us-smtp-delivery-1.mimecast.com ([207.211.31.120]:47614 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1727539AbfLPUsn (ORCPT ); Mon, 16 Dec 2019 15:48:43 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1576529323; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=wIBn80+V74amPBupfe+F9anDq8XfKyQu39ZmejTiR10=; b=KWureBRnuqBP2CVxwO0mjGV0YingWtGbOdRyqivR4k3xpe+Hz/tNDaIK3kHV+of8M5mQJT HEXDd2fsnVHn4XwZgX2S2uRyXmUi3VBM24T7cY1/fhb8av5EGtgx50lAdkJJwRcC7v111R x+eszrq80Oco4ZStfZoTOVBMy45UfIM= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-305-gv3zSrO2M92zwqDgWdhoog-1; Mon, 16 Dec 2019 15:48:42 -0500 X-MC-Unique: gv3zSrO2M92zwqDgWdhoog-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com [10.5.11.14]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 12AAB8E10DB; Mon, 16 Dec 2019 20:48:40 +0000 (UTC) Received: from laptop.redhat.com (ovpn-116-117.ams2.redhat.com [10.36.116.117]) by smtp.corp.redhat.com (Postfix) with ESMTP id 5A0485D9C9; Mon, 16 Dec 2019 20:48:37 +0000 (UTC) From: Eric Auger To: eric.auger.pro@gmail.com, eric.auger@redhat.com, maz@kernel.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org, qemu-devel@nongnu.org, qemu-arm@nongnu.org Cc: drjones@redhat.com, andrew.murray@arm.com, andre.przywara@arm.com, peter.maydell@linaro.org, alexandru.elisei@arm.com Subject: [kvm-unit-tests PATCH 09/10] arm/arm64: gic: Introduce setup_irq() helper Date: Mon, 16 Dec 2019 21:47:56 +0100 Message-Id: <20191216204757.4020-10-eric.auger@redhat.com> In-Reply-To: <20191216204757.4020-1-eric.auger@redhat.com> References: <20191216204757.4020-1-eric.auger@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org The ipi_enable() code would be reusable for interrupts other than IPIs. Let's rename it setup_irq() and have it take an interrupt handler pointer. We also export it so that it can be used in other tests, such as the PMU one.
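For illustration only, a minimal sketch (not from the series) of what a new test then needs; the names example_handler and example_test and the handler body are assumptions, gic_read_iar()/gic_write_eoir() are existing library calls, and interrupt 23 is the PMU interrupt that the last patch of this series enables with gic_enable_irq().

static void example_handler(struct pt_regs *regs __unused)
{
        u32 irqstat = gic_read_iar();   /* acknowledge the interrupt */

        /* record what fired so the test can check it later */
        gic_write_eoir(irqstat);        /* signal end of interrupt */
}

static void example_test(void)
{
        setup_irq(example_handler);     /* gic_enable_defaults() + install handler + unmask IRQs */
        gic_enable_irq(23);             /* e.g. the PMU interrupt used later in this series */

        /* trigger the event under test, then assert on what the handler recorded */
}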
Signed-off-by: Eric Auger --- arm/gic.c | 24 +++--------------------- lib/arm/asm/gic.h | 3 +++ lib/arm/gic.c | 11 +++++++++++ 3 files changed, 17 insertions(+), 21 deletions(-) diff --git a/arm/gic.c b/arm/gic.c index fcf4c1f..ba43ae5 100644 --- a/arm/gic.c +++ b/arm/gic.c @@ -215,20 +215,9 @@ static void ipi_test_smp(void) report_prefix_pop(); } -static void ipi_enable(void) -{ - gic_enable_defaults(); -#ifdef __arm__ - install_exception_handler(EXCPTN_IRQ, ipi_handler); -#else - install_irq_handler(EL1H_IRQ, ipi_handler); -#endif - local_irq_enable(); -} - static void ipi_send(void) { - ipi_enable(); + setup_irq(ipi_handler); wait_on_ready(); ipi_test_self(); ipi_test_smp(); @@ -238,7 +227,7 @@ static void ipi_send(void) static void ipi_recv(void) { - ipi_enable(); + setup_irq(ipi_handler); cpumask_set_cpu(smp_processor_id(), &ready); while (1) wfi(); @@ -295,14 +284,7 @@ static void ipi_clear_active_handler(struct pt_regs *regs __unused) static void run_active_clear_test(void) { report_prefix_push("active"); - gic_enable_defaults(); -#ifdef __arm__ - install_exception_handler(EXCPTN_IRQ, ipi_clear_active_handler); -#else - install_irq_handler(EL1H_IRQ, ipi_clear_active_handler); -#endif - local_irq_enable(); - + setup_irq(ipi_clear_active_handler); ipi_test_self(); report_prefix_pop(); } diff --git a/lib/arm/asm/gic.h b/lib/arm/asm/gic.h index 21cdb58..55dd84b 100644 --- a/lib/arm/asm/gic.h +++ b/lib/arm/asm/gic.h @@ -82,5 +82,8 @@ void gic_set_irq_target(int irq, int cpu); void gic_set_irq_group(int irq, int group); int gic_get_irq_group(int irq); +typedef void (*handler_t)(struct pt_regs *regs __unused); +extern void setup_irq(handler_t handler); + #endif /* !__ASSEMBLY__ */ #endif /* _ASMARM_GIC_H_ */ diff --git a/lib/arm/gic.c b/lib/arm/gic.c index aa9cb86..8416dde 100644 --- a/lib/arm/gic.c +++ b/lib/arm/gic.c @@ -236,3 +236,14 @@ int gic_get_irq_group(int irq) { return gic_masked_irq_bits(irq, GICD_IGROUPR, 1, 0, ACCESS_READ); } + +void setup_irq(handler_t handler) +{ + gic_enable_defaults(); +#ifdef __arm__ + install_exception_handler(EXCPTN_IRQ, handler); +#else + install_irq_handler(EL1H_IRQ, handler); +#endif + local_irq_enable(); +} From patchwork Mon Dec 16 20:47:57 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Auger X-Patchwork-Id: 11295237 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id BA61D1580 for ; Mon, 16 Dec 2019 20:48:50 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 8EFE521835 for ; Mon, 16 Dec 2019 20:48:50 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="UvJJlG0C" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727632AbfLPUst (ORCPT ); Mon, 16 Dec 2019 15:48:49 -0500 Received: from us-smtp-delivery-1.mimecast.com ([205.139.110.120]:55041 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1727629AbfLPUst (ORCPT ); Mon, 16 Dec 2019 15:48:49 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1576529328; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; 
bh=JVqCQEMGSyAN0RfyQhlZYKuYj1yMNsDA4gJwuQse6GI=; b=UvJJlG0COTddLdz4g05iniGoSgsEGi6rD9DO/Xsn+o5B9JEs81DZniSn6eYl63hCXQE7PQ KKnnOCY+cBs8+9T4K9r0KDuLI9Hv/j+pzGzsZ+y96iGCbjSsmk3yvmOeGbXeiXnNmGWaTa 8GM7YKKFZfsoCNeKiR6MNwzu/l8wPhs= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-324--XxOX5fLNpmjjCkVjiP1nw-1; Mon, 16 Dec 2019 15:48:47 -0500 X-MC-Unique: -XxOX5fLNpmjjCkVjiP1nw-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com [10.5.11.14]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 5E67510B959E; Mon, 16 Dec 2019 20:48:45 +0000 (UTC) Received: from laptop.redhat.com (ovpn-116-117.ams2.redhat.com [10.36.116.117]) by smtp.corp.redhat.com (Postfix) with ESMTP id 6A8075D9C9; Mon, 16 Dec 2019 20:48:40 +0000 (UTC) From: Eric Auger To: eric.auger.pro@gmail.com, eric.auger@redhat.com, maz@kernel.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org, qemu-devel@nongnu.org, qemu-arm@nongnu.org Cc: drjones@redhat.com, andrew.murray@arm.com, andre.przywara@arm.com, peter.maydell@linaro.org, alexandru.elisei@arm.com Subject: [kvm-unit-tests PATCH 10/10] arm: pmu: Test overflow interrupts Date: Mon, 16 Dec 2019 21:47:57 +0100 Message-Id: <20191216204757.4020-11-eric.auger@redhat.com> In-Reply-To: <20191216204757.4020-1-eric.auger@redhat.com> References: <20191216204757.4020-1-eric.auger@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Test overflows for MEM_ACCESS and SW_INCR events. Also tests overflows with 64-bit events. Signed-off-by: Eric Auger --- arm/pmu.c | 134 ++++++++++++++++++++++++++++++++++++++++++++++ arm/unittests.cfg | 6 +++ 2 files changed, 140 insertions(+) diff --git a/arm/pmu.c b/arm/pmu.c index 8506544..9af9e42 100644 --- a/arm/pmu.c +++ b/arm/pmu.c @@ -45,6 +45,11 @@ struct pmu { uint32_t pmcr_ro; }; +struct pmu_stats { + unsigned long bitmap; + uint32_t interrupts[32]; +}; + static struct pmu pmu; #if defined(__arm__) @@ -116,6 +121,7 @@ static void test_mem_access(void) {} static void test_chained_counters(void) {} static void test_chained_sw_incr(void) {} static void test_chain_promotion(void) {} +static void test_overflow_interrupt(void) {} #elif defined(__aarch64__) #define ID_AA64DFR0_PERFMON_SHIFT 8 @@ -263,6 +269,43 @@ asm volatile( : ); } +static struct pmu_stats pmu_stats; + +static void irq_handler(struct pt_regs *regs) +{ + uint32_t irqstat, irqnr; + + irqstat = gic_read_iar(); + irqnr = gic_iar_irqnr(irqstat); + gic_write_eoir(irqstat); + + if (irqnr == 23) { + unsigned long overflows = read_sysreg(pmovsclr_el0); + int i; + + report_info("--> PMU overflow interrupt %d (counter bitmask 0x%lx)", + irqnr, overflows); + for (i = 0; i < 32; i++) { + if (test_and_clear_bit(i, &overflows)) { + pmu_stats.interrupts[i]++; + pmu_stats.bitmap |= 1 << i; + } + } + write_sysreg(0xFFFFFFFF, pmovsclr_el0); + } else { + report_info("Unexpected interrupt: %d\n", irqnr); + } +} + +static void pmu_reset_stats(void) +{ + int i; + + for (i = 0; i < 32; i++) + pmu_stats.interrupts[i] = 0; + + pmu_stats.bitmap = 0; +} static void pmu_reset(void) { @@ -274,6 +317,7 @@ static void pmu_reset(void) write_sysreg(0xFFFFFFFF, pmovsclr_el0); /* disable overflow interrupts on all counters */ write_sysreg(0xFFFFFFFF, pmintenclr_el1); + pmu_reset_stats(); isb(); } @@ 
-713,6 +757,93 @@ static void test_chain_promotion(void) read_sysreg(pmovsclr_el0)); } +static bool expect_interrupts(uint32_t bitmap) +{ + int i; + + if (pmu_stats.bitmap ^ bitmap) + return false; + + for (i = 0; i < 32; i++) { + if (test_and_clear_bit(i, &pmu_stats.bitmap)) + if (pmu_stats.interrupts[i] != 1) + return false; + } + return true; +} + +static void test_overflow_interrupt(void) +{ + uint32_t events[] = { 0x13 /* MEM_ACCESS */, 0x00 /* SW_INCR */}; + void *addr = malloc(PAGE_SIZE); + int i; + + if (!satisfy_prerequisites(events, ARRAY_SIZE(events))) + return; + + setup_irq(irq_handler); + gic_enable_irq(23); + + pmu_reset(); + + write_regn(pmevtyper, 0, events[0] | PMEVTYPER_EXCLUDE_EL0); + write_regn(pmevtyper, 1, events[1] | PMEVTYPER_EXCLUDE_EL0); + write_sysreg_s(0x3, PMCNTENSET_EL0); + write_regn(pmevcntr, 0, 0xFFFFFFF0); + write_regn(pmevcntr, 1, 0xFFFFFFF0); + isb(); + + /* interrupts are disabled */ + + mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E); + report(expect_interrupts(0), "no overflow interrupt received"); + + set_pmcr(pmu.pmcr_ro | PMU_PMCR_E); + for (i = 0; i < 100; i++) + write_sysreg(0x2, pmswinc_el0); + + set_pmcr(pmu.pmcr_ro); + report(expect_interrupts(0), "no overflow interrupt received"); + + /* enable interrupts */ + + pmu_reset_stats(); + + write_regn(pmevcntr, 0, 0xFFFFFFF0); + write_regn(pmevcntr, 1, 0xFFFFFFF0); + write_sysreg(0xFFFFFFFF, pmintenset_el1); + isb(); + + mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E); + for (i = 0; i < 100; i++) + write_sysreg(0x3, pmswinc_el0); + + mem_access_loop(addr, 200, pmu.pmcr_ro); + report_info("overflow=0x%lx", read_sysreg(pmovsclr_el0)); + report(expect_interrupts(0x3), + "overflow interrupts expected on #0 and #1"); + + /* promote to 64-b */ + + pmu_reset_stats(); + + events[1] = 0x1E /* CHAIN */; + write_regn(pmevtyper, 1, events[1] | PMEVTYPER_EXCLUDE_EL0); + write_regn(pmevcntr, 0, 0xFFFFFFF0); + isb(); + mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E); + report(expect_interrupts(0), + "no overflow interrupt expected on 32b boundary"); + + /* overflow on odd counter */ + pmu_reset_stats(); + write_regn(pmevcntr, 0, 0xFFFFFFF0); + write_regn(pmevcntr, 1, 0xFFFFFFFF); + isb(); + mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E); + report(expect_interrupts(0x2), + "expect overflow interrupt on odd counter"); +} #endif /* @@ -921,6 +1052,9 @@ int main(int argc, char *argv[]) } else if (strcmp(argv[1], "chain-promotion") == 0) { report_prefix_push(argv[1]); test_chain_promotion(); + } else if (strcmp(argv[1], "overflow-interrupt") == 0) { + report_prefix_push(argv[1]); + test_overflow_interrupt(); } else { report_abort("Unknown sub-test '%s'", argv[1]); } diff --git a/arm/unittests.cfg b/arm/unittests.cfg index eb6e87e..1d1bc27 100644 --- a/arm/unittests.cfg +++ b/arm/unittests.cfg @@ -108,6 +108,12 @@ groups = pmu arch = arm64 extra_params = -append 'chain-promotion' +[overflow-interrupt] +file = pmu.flat +groups = pmu +arch = arm64 +extra_params = -append 'overflow-interrupt' + # Test PMU support (TCG) with -icount IPC=1 #[pmu-tcg-icount-1] #file = pmu.flat