From patchwork Tue Jan 7 17:57:20 2025
X-Patchwork-Submitter: Rob Herring
X-Patchwork-Id: 13929469
From: "Rob Herring (Arm)"
Date: Tue, 07 Jan 2025 11:57:20 -0600
Subject: [PATCH 2/7] perf: arm_pmu: Don't disable counter in armpmu_add()
Message-Id: <20250107-arm-pmu-cleanups-v1-v1-2-313951346a25@kernel.org>
References: <20250107-arm-pmu-cleanups-v1-v1-0-313951346a25@kernel.org>
In-Reply-To: <20250107-arm-pmu-cleanups-v1-v1-0-313951346a25@kernel.org>
To: Will Deacon, Mark Rutland
Cc: Marc Zyngier, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
From: Mark Rutland

Currently armpmu_add() tries to handle a newly-allocated counter having
a stale associated event, but this should not be possible, and if this
were to happen the current mitigation is insufficient and potentially
expensive. It would be better to warn if we encounter the impossible
case.

Calls to pmu::add() and pmu::del() are serialized by the core perf
code, and armpmu_del() clears the relevant slot in
pmu_hw_events::events[] before clearing the bit in
pmu_hw_events::used_mask such that the counter can be reallocated.
Thus when armpmu_add() allocates a counter index from
pmu_hw_events::used_mask, it should not be possible to observe a stale
event in pmu_hw_events::events[] unless either pmu_hw_events::used_mask
or pmu_hw_events::events[] has been corrupted.

If this were to happen, we'd end up with two events with the same
event->hw.idx, which would clash with each other during reprogramming,
deletion, etc., and produce bogus results.

Add a WARN_ON_ONCE() for this case so that we can detect if this ever
occurs in practice.

That possibility aside, there's no need to call arm_pmu::disable(event)
for the new event. The PMU reset code initialises the counter in a
disabled state, and armpmu_del() will disable the counter before it can
be reused. Remove the redundant disable.

Signed-off-by: Mark Rutland
Signed-off-by: Rob Herring (Arm)
---
 drivers/perf/arm_pmu.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index 398cce3d76fc44dcc1e1f84f9e3c62b3574a8d12..2f33e69a8caf203928559e54adb2d354888c4b09 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -342,12 +342,10 @@ armpmu_add(struct perf_event *event, int flags)
 	if (idx < 0)
 		return idx;
 
-	/*
-	 * If there is an event in the counter we are going to use then make
-	 * sure it is disabled.
-	 */
+	/* The newly-allocated counter should be empty */
+	WARN_ON_ONCE(hw_events->events[idx]);
+
 	event->hw.idx = idx;
-	armpmu->disable(event);
 	hw_events->events[idx] = event;
 	hwc->state = PERF_HES_STOPPED | PERF_HES_UPTODATE;
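
For illustration, the invariant the new WARN_ON_ONCE() depends on can be
sketched in plain userspace C. This is a simplified model, not the kernel
code: sketch_add()/sketch_del(), the fixed-size bitmask, and the linear
index search are hypothetical stand-ins for armpmu_add()/armpmu_del() and
pmu_hw_events::used_mask.

#include <assert.h>
#include <stddef.h>

#define MAX_COUNTERS 32

struct event {
	int idx;
};

struct hw_events {
	struct event *events[MAX_COUNTERS];
	unsigned long used_mask;
};

/* del() empties the slot *before* freeing the counter index... */
static void sketch_del(struct hw_events *hw, struct event *ev)
{
	hw->events[ev->idx] = NULL;		/* slot cleared first */
	hw->used_mask &= ~(1UL << ev->idx);	/* only now is the index reusable */
}

/* ...so add() can insist that a freshly allocated slot is empty. */
static int sketch_add(struct hw_events *hw, struct event *ev)
{
	int idx;

	/* Allocate the first free counter index from the mask. */
	for (idx = 0; idx < MAX_COUNTERS; idx++)
		if (!(hw->used_mask & (1UL << idx)))
			break;
	if (idx == MAX_COUNTERS)
		return -1;			/* no free counter */

	/*
	 * Analogous to the WARN_ON_ONCE() added by this patch: an index
	 * that is free in used_mask must never alias a live event, or two
	 * events would end up sharing event->hw.idx.
	 */
	assert(hw->events[idx] == NULL);

	hw->used_mask |= 1UL << idx;
	ev->idx = idx;
	hw->events[idx] = ev;
	return idx;
}

Since pmu::add() and pmu::del() are serialized by the core perf code, no
barriers or atomics are needed for this ordering to hold; the assert (like
the WARN_ON_ONCE()) can only fire if used_mask or events[] is corrupted.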