From patchwork Wed Jul 31 16:51:22 2024
X-Patchwork-Submitter: "Rob Herring (Arm)"
X-Patchwork-Id: 13748981
From: "Rob Herring (Arm)"
Date: Wed, 31 Jul 2024 10:51:22 -0600
Subject: [PATCH v3 5/7] arm64: perf/kvm: Use a common PMU cycle counter define
Message-Id: <20240731-arm-pmu-3-9-icntr-v3-5-280a8d7ff465@kernel.org>
References: <20240731-arm-pmu-3-9-icntr-v3-0-280a8d7ff465@kernel.org>
In-Reply-To: <20240731-arm-pmu-3-9-icntr-v3-0-280a8d7ff465@kernel.org>
To: Will Deacon, Marc Zyngier, Mark Rutland, James Clark, James Morse,
 Suzuki K Poulose, Catalin Marinas, Peter Zijlstra, Ingo Molnar,
 Arnaldo Carvalho de Melo, Namhyung Kim, Alexander Shishkin, Jiri Olsa,
 Ian Rogers, Adrian Hunter, Oliver Upton, Zenghui Yu
Cc: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 linux-perf-users@vger.kernel.org, kvmarm@lists.linux.dev

The PMUv3 and KVM code each have a define for the PMU cycle counter index.
Move KVM's define to a shared location and use it for the PMUv3 driver.

Reviewed-by: Marc Zyngier
Acked-by: Mark Rutland
Signed-off-by: Rob Herring (Arm)
---
v2:
 - Move ARMV8_PMU_CYCLE_IDX to linux/perf/arm_pmuv3.h
---
 arch/arm64/kvm/sys_regs.c      |  1 +
 drivers/perf/arm_pmuv3.c       | 19 +++++++------------
 include/kvm/arm_pmu.h          |  1 -
 include/linux/perf/arm_pmuv3.h |  3 +++
 4 files changed, 11 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 33497db257fb..7db24de37ed6 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -18,6 +18,7 @@
 #include
 #include
 
+#include <linux/perf/arm_pmuv3.h>
 #include
 #include
 #include
diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
index bd45fbcb9a5a..18046cf4b3a3 100644
--- a/drivers/perf/arm_pmuv3.c
+++ b/drivers/perf/arm_pmuv3.c
@@ -451,11 +451,6 @@ static const struct attribute_group armv8_pmuv3_caps_attr_group = {
 	.attrs = armv8_pmuv3_caps_attrs,
 };
 
-/*
- * Perf Events' indices
- */
-#define ARMV8_IDX_CYCLE_COUNTER 31
-
 /*
  * We unconditionally enable ARMv8.5-PMU long event counter support
  * (64-bit events) where supported. Indicate if this arm_pmu has long
@@ -574,7 +569,7 @@ static u64 armv8pmu_read_counter(struct perf_event *event)
 	int idx = hwc->idx;
 	u64 value;
 
-	if (idx == ARMV8_IDX_CYCLE_COUNTER)
+	if (idx == ARMV8_PMU_CYCLE_IDX)
 		value = read_pmccntr();
 	else
 		value = armv8pmu_read_hw_counter(event);
@@ -607,7 +602,7 @@ static void armv8pmu_write_counter(struct perf_event *event, u64 value)
 
 	value = armv8pmu_bias_long_counter(event, value);
 
-	if (idx == ARMV8_IDX_CYCLE_COUNTER)
+	if (idx == ARMV8_PMU_CYCLE_IDX)
 		write_pmccntr(value);
 	else
 		armv8pmu_write_hw_counter(event, value);
@@ -644,7 +639,7 @@ static void armv8pmu_write_event_type(struct perf_event *event)
 		armv8pmu_write_evtype(idx - 1, hwc->config_base);
 		armv8pmu_write_evtype(idx, chain_evt);
 	} else {
-		if (idx == ARMV8_IDX_CYCLE_COUNTER)
+		if (idx == ARMV8_PMU_CYCLE_IDX)
 			write_pmccfiltr(hwc->config_base);
 		else
 			armv8pmu_write_evtype(idx, hwc->config_base);
@@ -772,7 +767,7 @@ static void armv8pmu_enable_user_access(struct arm_pmu *cpu_pmu)
 	/* Clear any unused counters to avoid leaking their contents */
 	for_each_andnot_bit(i, cpu_pmu->cntr_mask, cpuc->used_mask,
 			    ARMPMU_MAX_HWEVENTS) {
-		if (i == ARMV8_IDX_CYCLE_COUNTER)
+		if (i == ARMV8_PMU_CYCLE_IDX)
 			write_pmccntr(0);
 		else
 			armv8pmu_write_evcntr(i, 0);
@@ -933,8 +928,8 @@ static int armv8pmu_get_event_idx(struct pmu_hw_events *cpuc,
 	/* Always prefer to place a cycle counter into the cycle counter. */
 	if ((evtype == ARMV8_PMUV3_PERFCTR_CPU_CYCLES) &&
 	    !armv8pmu_event_get_threshold(&event->attr)) {
-		if (!test_and_set_bit(ARMV8_IDX_CYCLE_COUNTER, cpuc->used_mask))
-			return ARMV8_IDX_CYCLE_COUNTER;
+		if (!test_and_set_bit(ARMV8_PMU_CYCLE_IDX, cpuc->used_mask))
+			return ARMV8_PMU_CYCLE_IDX;
 		else if (armv8pmu_event_is_64bit(event) &&
 			 armv8pmu_event_want_user_access(event) &&
 			 !armv8pmu_has_long_event(cpu_pmu))
@@ -1196,7 +1191,7 @@ static void __armv8pmu_probe_pmu(void *info)
 			     0, FIELD_GET(ARMV8_PMU_PMCR_N, armv8pmu_pmcr_read()));
 
 	/* Add the CPU cycles counter */
-	set_bit(ARMV8_IDX_CYCLE_COUNTER, cpu_pmu->cntr_mask);
+	set_bit(ARMV8_PMU_CYCLE_IDX, cpu_pmu->cntr_mask);
 
 	pmceid[0] = pmceid_raw[0] = read_pmceid0();
 	pmceid[1] = pmceid_raw[1] = read_pmceid1();
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 334d7c5503cf..871067fb2616 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -10,7 +10,6 @@
 #include
 #include
 
-#define ARMV8_PMU_CYCLE_IDX (ARMV8_PMU_MAX_COUNTERS - 1)
 
 #if IS_ENABLED(CONFIG_HW_PERF_EVENTS) && IS_ENABLED(CONFIG_KVM)
 struct kvm_pmc {
diff --git a/include/linux/perf/arm_pmuv3.h b/include/linux/perf/arm_pmuv3.h
index 792b8e10b72a..f4ec76f725a3 100644
--- a/include/linux/perf/arm_pmuv3.h
+++ b/include/linux/perf/arm_pmuv3.h
@@ -9,6 +9,9 @@
 #define ARMV8_PMU_MAX_GENERAL_COUNTERS 31
 #define ARMV8_PMU_MAX_COUNTERS 32
 
+#define ARMV8_PMU_CYCLE_IDX 31
+
+
 /*
  * Common architectural and microarchitectural event numbers.
  */
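
Background note for readers outside the kernel tree: in PMUv3 the dedicated cycle
counter (PMCCNTR_EL0) is controlled by bit 31 of the PMU control registers
(PMCNTENSET_EL0 and friends), while the general-purpose event counters use bits
0-30, which is why both KVM and the perf driver can share a single index define.
The sketch below is a standalone illustration of that special-casing pattern, not
kernel code; the *_stub helpers are hypothetical stand-ins for the real system
register accessors.

#include <inttypes.h>
#include <stdio.h>

#define ARMV8_PMU_MAX_GENERAL_COUNTERS 31
#define ARMV8_PMU_MAX_COUNTERS         32
#define ARMV8_PMU_CYCLE_IDX            31 /* == ARMV8_PMU_MAX_COUNTERS - 1 */

/* Hypothetical stand-ins for reading PMCCNTR_EL0 / PMEVCNTR<n>_EL0. */
static uint64_t read_pmccntr_stub(void)   { return 123456; }
static uint64_t read_evcntr_stub(int idx) { return 100 + (uint64_t)idx; }

/* Mirrors the shape of armv8pmu_read_counter(): the shared define selects the
 * dedicated cycle counter path, every other index is a general counter. */
static uint64_t read_counter(int idx)
{
	if (idx == ARMV8_PMU_CYCLE_IDX)
		return read_pmccntr_stub();
	return read_evcntr_stub(idx);
}

int main(void)
{
	printf("cycle counter: %" PRIu64 "\n", read_counter(ARMV8_PMU_CYCLE_IDX));
	printf("counter 0:     %" PRIu64 "\n", read_counter(0));
	return 0;
}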