From patchwork Wed May 29 18:53:32 2024
From: Rajnesh Kanwal
To: linux-kernel@vger.kernel.org
Cc: linux-perf-users@vger.kernel.org, linux-riscv@lists.infradead.org, adrian.hunter@intel.com, alexander.shishkin@linux.intel.com, ajones@ventanamicro.com, anup@brainfault.org, acme@kernel.org, atishp@rivosinc.com, beeman@rivosinc.com, brauner@kernel.org, conor@kernel.org, heiko@sntech.de, irogers@google.com, mingo@redhat.com, james.clark@arm.com, renyu.zj@linux.alibaba.com, jolsa@kernel.org, jisheng.teoh@starfivetech.com, palmer@dabbelt.com, tech-control-transfer-records@lists.riscv.org, will@kernel.org, kaiwenxue1@gmail.com, Rajnesh Kanwal
Subject: [PATCH RFC 1/6] perf: Increase the maximum number of samples to 256.
Date: Wed, 29 May 2024 19:53:32 +0100
Message-Id: <20240529185337.182722-2-rkanwal@rivosinc.com>
In-Reply-To: <20240529185337.182722-1-rkanwal@rivosinc.com>

The RISC-V CTR extension supports a maximum depth of 256 last-branch
records. The current 127-entry limit corrupts CTR entries on RISC-V
when the hardware is configured for 256 entries. Other architectures
are unaffected, as this only raises the maximum number of possible
entries.
Signed-off-by: Rajnesh Kanwal
---
 tools/perf/util/machine.c | 21 ++++++++++++++-------
 1 file changed, 14 insertions(+), 7 deletions(-)

diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
index 527517db3182..ec12f0199d46 100644
--- a/tools/perf/util/machine.c
+++ b/tools/perf/util/machine.c
@@ -2254,25 +2254,32 @@ static void save_iterations(struct iterations *iter,
 		iter->cycles += be[i].flags.cycles;
 }
 
-#define CHASHSZ 127
-#define CHASHBITS 7
-#define NO_ENTRY 0xff
+#define CHASHBITS 8
+#define NO_ENTRY 0xffU
 
-#define PERF_MAX_BRANCH_DEPTH 127
+#define PERF_MAX_BRANCH_DEPTH 256
 
 /* Remove loops. */
+/* Note: Last entry (i==ff) will never be checked against NO_ENTRY
+ * so it's safe to have an unsigned char array to process 256 entries
+ * without causing clash between last entry and NO_ENTRY value.
+ */
 static int remove_loops(struct branch_entry *l, int nr,
 			struct iterations *iter)
 {
 	int i, j, off;
-	unsigned char chash[CHASHSZ];
+	unsigned char chash[PERF_MAX_BRANCH_DEPTH];
 
 	memset(chash, NO_ENTRY, sizeof(chash));
 
-	BUG_ON(PERF_MAX_BRANCH_DEPTH > 255);
+	BUG_ON(PERF_MAX_BRANCH_DEPTH > 256);
 
 	for (i = 0; i < nr; i++) {
-		int h = hash_64(l[i].from, CHASHBITS) % CHASHSZ;
+		/* Remainder division by PERF_MAX_BRANCH_DEPTH is not
+		 * needed as hash_64 will anyway limit the hash
+		 * to CHASHBITS
+		 */
+		int h = hash_64(l[i].from, CHASHBITS);
 
 		/* no collision handling for now */
 		if (chash[h] == NO_ENTRY) {

From patchwork Wed May 29 18:53:33 2024
From: Rajnesh Kanwal
To: linux-kernel@vger.kernel.org
Cc: linux-perf-users@vger.kernel.org, linux-riscv@lists.infradead.org, adrian.hunter@intel.com, alexander.shishkin@linux.intel.com, ajones@ventanamicro.com, anup@brainfault.org, acme@kernel.org, atishp@rivosinc.com, beeman@rivosinc.com, brauner@kernel.org, conor@kernel.org, heiko@sntech.de,
irogers@google.com, mingo@redhat.com, james.clark@arm.com, renyu.zj@linux.alibaba.com, jolsa@kernel.org, jisheng.teoh@starfivetech.com, palmer@dabbelt.com, tech-control-transfer-records@lists.riscv.org, will@kernel.org, kaiwenxue1@gmail.com, Rajnesh Kanwal
Subject: [PATCH RFC 2/6] riscv: perf: Add Control Transfer Records CSR definitions
Date: Wed, 29 May 2024 19:53:33 +0100
Message-Id: <20240529185337.182722-3-rkanwal@rivosinc.com>
In-Reply-To: <20240529185337.182722-1-rkanwal@rivosinc.com>

Add CSR defines for the RISC-V Control Transfer Records extension [0],
along with bit-field macros for each CSR.

[0]: https://github.com/riscv/riscv-control-transfer-records

Signed-off-by: Rajnesh Kanwal
---
 arch/riscv/include/asm/csr.h | 83 ++++++++++++++++++++++++++++++++++++
 1 file changed, 83 insertions(+)

diff --git a/arch/riscv/include/asm/csr.h b/arch/riscv/include/asm/csr.h
index 701963b64fc4..a80a2ee9d44e 100644
--- a/arch/riscv/include/asm/csr.h
+++ b/arch/riscv/include/asm/csr.h
@@ -309,6 +309,85 @@
 
 #define CSR_SSCOUNTOVF		0xda0
 
+/* M-mode Control Transfer Records CSRs */
+#define CSR_MCTRCTL		0x34e
+
+/* S-mode Control Transfer Records CSRs */
+#define CSR_SCTRCTL		0x14e
+#define CSR_SCTRSTATUS		0x14f
+#define CSR_SCTRDEPTH		0x15f
+
+/* VS-mode Control Transfer Records CSRs */
+#define CSR_VSCTRCTL		0x24e
+
+/* xctrctl CSR bits. */
+#define CTRCTL_U_ENABLE		_AC(0x1, UL)
+#define CTRCTL_S_ENABLE		_AC(0x2, UL)
+#define CTRCTL_M_ENABLE		_AC(0x4, UL)
+#define CTRCTL_RASEMU		_AC(0x80, UL)
+#define CTRCTL_STE		_AC(0x100, UL)
+#define CTRCTL_MTE		_AC(0x200, UL)
+#define CTRCTL_BPFRZ		_AC(0x800, UL)
+#define CTRCTL_LCOFIFRZ		_AC(0x1000, UL)
+#define CTRCTL_EXCINH		_AC(0x200000000, UL)
+#define CTRCTL_INTRINH		_AC(0x400000000, UL)
+#define CTRCTL_TRETINH		_AC(0x800000000, UL)
+#define CTRCTL_NTBREN		_AC(0x1000000000, UL)
+#define CTRCTL_TKBRINH		_AC(0x2000000000, UL)
+#define CTRCTL_INDCALL_INH	_AC(0x10000000000, UL)
+#define CTRCTL_DIRCALL_INH	_AC(0x20000000000, UL)
+#define CTRCTL_INDJUMP_INH	_AC(0x40000000000, UL)
+#define CTRCTL_DIRJUMP_INH	_AC(0x80000000000, UL)
+#define CTRCTL_CORSWAP_INH	_AC(0x100000000000, UL)
+#define CTRCTL_RET_INH		_AC(0x200000000000, UL)
+#define CTRCTL_INDOJUMP_INH	_AC(0x400000000000, UL)
+#define CTRCTL_DIROJUMP_INH	_AC(0x800000000000, UL)
+
+/* sctrstatus CSR bits. */
+#define SCTRSTATUS_WRPTR_MASK	0xFF
+#define SCTRSTATUS_FROZEN	_AC(0x80000000, UL)
+
+#ifdef CONFIG_RISCV_M_MODE
+#define CTRCTL_KERNEL_ENABLE	CTRCTL_M_ENABLE
+#else
+#define CTRCTL_KERNEL_ENABLE	CTRCTL_S_ENABLE
+#endif
+
+/* sctrdepth CSR bits. */
+#define SCTRDEPTH_MASK		0x7
+
+#define SCTRDEPTH_MIN		0x0	/* 16 Entries. */
+#define SCTRDEPTH_MAX		0x4	/* 256 Entries. */
+
+/* ctrsource, ctrtarget and ctrdata CSR bits. */
+#define CTRSOURCE_VALID		0x1ULL
+#define CTRTARGET_MISP		0x1ULL
+
+#define CTRDATA_TYPE_MASK	0xF
+#define CTRDATA_CCV		0x8000
+#define CTRDATA_CCM_MASK	0xFFF0000
+#define CTRDATA_CCE_MASK	0xF0000000
+
+#define CTRDATA_TYPE_NONE		0
+#define CTRDATA_TYPE_EXCEPTION		1
+#define CTRDATA_TYPE_INTERRUPT		2
+#define CTRDATA_TYPE_TRAP_RET		3
+#define CTRDATA_TYPE_NONTAKEN_BRANCH	4
+#define CTRDATA_TYPE_TAKEN_BRANCH	5
+#define CTRDATA_TYPE_RESERVED_6		6
+#define CTRDATA_TYPE_RESERVED_7		7
+#define CTRDATA_TYPE_INDIRECT_CALL	8
+#define CTRDATA_TYPE_DIRECT_CALL	9
+#define CTRDATA_TYPE_INDIRECT_JUMP	10
+#define CTRDATA_TYPE_DIRECT_JUMP	11
+#define CTRDATA_TYPE_CO_ROUTINE_SWAP	12
+#define CTRDATA_TYPE_RETURN		13
+#define CTRDATA_TYPE_OTHER_INDIRECT_JUMP 14
+#define CTRDATA_TYPE_OTHER_DIRECT_JUMP	15
+
+#define CTR_ENTRIES_FIRST	0x200
+#define CTR_ENTRIES_LAST	0x2ff
+
 #define CSR_SSTATUS		0x100
 #define CSR_SIE			0x104
 #define CSR_STVEC		0x105
@@ -490,6 +569,8 @@
 # define CSR_TOPEI	CSR_MTOPEI
 # define CSR_TOPI	CSR_MTOPI
 
+# define CSR_CTRCTL	CSR_MCTRCTL
+
 # define SR_IE		SR_MIE
 # define SR_PIE		SR_MPIE
 # define SR_PP		SR_MPP
@@ -520,6 +601,8 @@
 # define CSR_TOPEI	CSR_STOPEI
 # define CSR_TOPI	CSR_STOPI
 
+# define CSR_CTRCTL	CSR_SCTRCTL
+
 # define SR_IE		SR_SIE
 # define SR_PIE		SR_SPIE
 # define SR_PP		SR_SPP

From patchwork Wed May 29 18:53:34 2024
From: Rajnesh Kanwal
To: linux-kernel@vger.kernel.org
Cc: linux-perf-users@vger.kernel.org, linux-riscv@lists.infradead.org, adrian.hunter@intel.com, alexander.shishkin@linux.intel.com, ajones@ventanamicro.com, anup@brainfault.org, acme@kernel.org, atishp@rivosinc.com, beeman@rivosinc.com, brauner@kernel.org, conor@kernel.org, heiko@sntech.de, irogers@google.com, mingo@redhat.com, james.clark@arm.com,
renyu.zj@linux.alibaba.com, jolsa@kernel.org, jisheng.teoh@starfivetech.com, palmer@dabbelt.com, tech-control-transfer-records@lists.riscv.org, will@kernel.org, kaiwenxue1@gmail.com, Rajnesh Kanwal
Subject: [PATCH RFC 3/6] riscv: perf: Add Control Transfer Records extension parsing
Date: Wed, 29 May 2024 19:53:34 +0100
Message-Id: <20240529185337.182722-4-rkanwal@rivosinc.com>
In-Reply-To: <20240529185337.182722-1-rkanwal@rivosinc.com>

Add the CTR extensions (Smctr/Ssctr) to the ISA extension map so their
availability can be looked up.
Signed-off-by: Rajnesh Kanwal
---
 arch/riscv/include/asm/hwcap.h | 4 ++++
 arch/riscv/kernel/cpufeature.c | 2 ++
 2 files changed, 6 insertions(+)

diff --git a/arch/riscv/include/asm/hwcap.h b/arch/riscv/include/asm/hwcap.h
index b8cc459ee8a4..aff5ef398671 100644
--- a/arch/riscv/include/asm/hwcap.h
+++ b/arch/riscv/include/asm/hwcap.h
@@ -86,6 +86,8 @@
 #define RISCV_ISA_EXT_SSCCFG		77
 #define RISCV_ISA_EXT_SMCDELEG		78
 #define RISCV_ISA_EXT_SMCNTRPMF		79
+#define RISCV_ISA_EXT_SMCTR		80
+#define RISCV_ISA_EXT_SSCTR		81
 
 #define RISCV_ISA_EXT_XLINUXENVCFG	127
 
@@ -95,9 +97,11 @@
 #ifdef CONFIG_RISCV_M_MODE
 #define RISCV_ISA_EXT_SxAIA		RISCV_ISA_EXT_SMAIA
 #define RISCV_ISA_EXT_SxCSRIND		RISCV_ISA_EXT_SMCSRIND
+#define RISCV_ISA_EXT_SxCTR		RISCV_ISA_EXT_SMCTR
 #else
 #define RISCV_ISA_EXT_SxAIA		RISCV_ISA_EXT_SSAIA
 #define RISCV_ISA_EXT_SxCSRIND		RISCV_ISA_EXT_SSCSRIND
+#define RISCV_ISA_EXT_SxCTR		RISCV_ISA_EXT_SSCTR
 #endif
 
 #endif /* _ASM_RISCV_HWCAP_H */

diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c
index d1fb6a8c5492..4334d822b2f2 100644
--- a/arch/riscv/kernel/cpufeature.c
+++ b/arch/riscv/kernel/cpufeature.c
@@ -298,6 +298,7 @@ const struct riscv_isa_ext_data riscv_isa_ext[] = {
 	__RISCV_ISA_EXT_DATA(zvkt, RISCV_ISA_EXT_ZVKT),
 	__RISCV_ISA_EXT_DATA(smaia, RISCV_ISA_EXT_SMAIA),
 	__RISCV_ISA_EXT_DATA(smcdeleg, RISCV_ISA_EXT_SMCDELEG),
+	__RISCV_ISA_EXT_DATA(smctr, RISCV_ISA_EXT_SMCTR),
 	__RISCV_ISA_EXT_DATA(smstateen, RISCV_ISA_EXT_SMSTATEEN),
 	__RISCV_ISA_EXT_DATA(smcntrpmf, RISCV_ISA_EXT_SMCNTRPMF),
 	__RISCV_ISA_EXT_DATA(smcsrind, RISCV_ISA_EXT_SMCSRIND),
@@ -305,6 +306,7 @@ const struct riscv_isa_ext_data riscv_isa_ext[] = {
 	__RISCV_ISA_EXT_DATA(sscsrind, RISCV_ISA_EXT_SSCSRIND),
 	__RISCV_ISA_EXT_DATA(ssccfg, RISCV_ISA_EXT_SSCCFG),
 	__RISCV_ISA_EXT_DATA(sscofpmf, RISCV_ISA_EXT_SSCOFPMF),
+	__RISCV_ISA_EXT_DATA(ssctr, RISCV_ISA_EXT_SSCTR),
 	__RISCV_ISA_EXT_DATA(sstc, RISCV_ISA_EXT_SSTC),
 	__RISCV_ISA_EXT_DATA(svinval, RISCV_ISA_EXT_SVINVAL),
 	__RISCV_ISA_EXT_DATA(svnapot, RISCV_ISA_EXT_SVNAPOT),

From patchwork Wed May 29 18:53:35 2024
From: Rajnesh Kanwal
To: linux-kernel@vger.kernel.org
Cc: linux-perf-users@vger.kernel.org, linux-riscv@lists.infradead.org, adrian.hunter@intel.com, alexander.shishkin@linux.intel.com, ajones@ventanamicro.com, anup@brainfault.org, acme@kernel.org, atishp@rivosinc.com, beeman@rivosinc.com, brauner@kernel.org, conor@kernel.org, heiko@sntech.de, irogers@google.com, mingo@redhat.com, james.clark@arm.com, renyu.zj@linux.alibaba.com, jolsa@kernel.org, jisheng.teoh@starfivetech.com, palmer@dabbelt.com, tech-control-transfer-records@lists.riscv.org, will@kernel.org, kaiwenxue1@gmail.com, Rajnesh Kanwal
Subject: [PATCH RFC 4/6] riscv: perf: Add infrastructure for Control Transfer Record
Date: Wed, 29 May 2024 19:53:35 +0100
Message-Id: <20240529185337.182722-5-rkanwal@rivosinc.com>
In-Reply-To: <20240529185337.182722-1-rkanwal@rivosinc.com>

To support the Control Transfer Records (CTR) extension, we need to
extend the riscv_pmu framework with some basic infrastructure for
branch stack sampling. Subsequent patches will use this to add support
for CTR in the riscv_pmu_dev driver.
With CTR, the branches are stored into a hardware FIFO, which will be sampled by software when perf events overflow. A task may be context- switched between overflows, and to avoid leaking samples we need to clear the last task's records when a task is context-switched In. To do this we will be using the pmu::sched_task() callback added in this patch. Signed-off-by: Rajnesh Kanwal --- drivers/perf/riscv_pmu_common.c | 15 +++++++++++++++ drivers/perf/riscv_pmu_dev.c | 9 +++++++++ include/linux/perf/riscv_pmu.h | 16 ++++++++++++++++ 3 files changed, 40 insertions(+) diff --git a/drivers/perf/riscv_pmu_common.c b/drivers/perf/riscv_pmu_common.c index b4efdddb2ad9..e794675e4944 100644 --- a/drivers/perf/riscv_pmu_common.c +++ b/drivers/perf/riscv_pmu_common.c @@ -159,6 +159,19 @@ u64 riscv_pmu_ctr_get_width_mask(struct perf_event *event) return GENMASK_ULL(cwidth, 0); } +static void riscv_pmu_sched_task(struct perf_event_pmu_context *pmu_ctx, + bool sched_in) +{ + struct riscv_pmu *pmu; + + if (!pmu_ctx) + return; + + pmu = to_riscv_pmu(pmu_ctx->pmu); + if (pmu->sched_task) + pmu->sched_task(pmu_ctx, sched_in); +} + u64 riscv_pmu_event_update(struct perf_event *event) { struct riscv_pmu *rvpmu = to_riscv_pmu(event->pmu); @@ -406,6 +419,7 @@ struct riscv_pmu *riscv_pmu_alloc(void) for_each_possible_cpu(cpuid) { cpuc = per_cpu_ptr(pmu->hw_events, cpuid); cpuc->n_events = 0; + cpuc->ctr_users = 0; for (i = 0; i < RISCV_MAX_COUNTERS; i++) cpuc->events[i] = NULL; } @@ -419,6 +433,7 @@ struct riscv_pmu *riscv_pmu_alloc(void) .start = riscv_pmu_start, .stop = riscv_pmu_stop, .read = riscv_pmu_read, + .sched_task = riscv_pmu_sched_task, }; return pmu; diff --git a/drivers/perf/riscv_pmu_dev.c b/drivers/perf/riscv_pmu_dev.c index 5ca8a909f3ab..40ae5fc897a3 100644 --- a/drivers/perf/riscv_pmu_dev.c +++ b/drivers/perf/riscv_pmu_dev.c @@ -670,6 +670,14 @@ static void rvpmu_sbi_ctr_stop(struct perf_event *event, unsigned long flag) hwc->idx, sbi_err_map_linux_errno(ret.error)); } 
+static void pmu_sched_task(struct perf_event_pmu_context *pmu_ctx, + bool sched_in) +{ + struct riscv_pmu *pmu = to_riscv_pmu(pmu_ctx->pmu); + + /* Call CTR specific Sched hook. */ +} + static int rvpmu_sbi_find_num_ctrs(void) { struct sbiret ret; @@ -1494,6 +1502,7 @@ static int rvpmu_device_probe(struct platform_device *pdev) pmu->event_mapped = rvpmu_event_mapped; pmu->event_unmapped = rvpmu_event_unmapped; pmu->csr_index = rvpmu_csr_index; + pmu->sched_task = pmu_sched_task; ret = cpuhp_state_add_instance(CPUHP_AP_PERF_RISCV_STARTING, &pmu->node); if (ret) diff --git a/include/linux/perf/riscv_pmu.h b/include/linux/perf/riscv_pmu.h index 425edd6685a9..5a6b840018bd 100644 --- a/include/linux/perf/riscv_pmu.h +++ b/include/linux/perf/riscv_pmu.h @@ -33,6 +33,13 @@ #define RISCV_PMU_CYCLE_FIXED_CTR_MASK 0x01 #define RISCV_PMU_INSTRUCTION_FIXED_CTR_MASK 0x04 +#define MAX_BRANCH_RECORDS 256 + +struct branch_records { + struct perf_branch_stack branch_stack; + struct perf_branch_entry branch_entries[MAX_BRANCH_RECORDS]; +}; + struct cpu_hw_events { /* currently enabled events */ int n_events; @@ -44,6 +51,12 @@ struct cpu_hw_events { DECLARE_BITMAP(used_hw_ctrs, RISCV_MAX_COUNTERS); /* currently enabled firmware counters */ DECLARE_BITMAP(used_fw_ctrs, RISCV_MAX_COUNTERS); + + /* Saved branch records. 
*/
+	struct branch_records *branches;
+
+	/* Active events requesting branch records */
+	int ctr_users;
 };
 
 struct riscv_pmu {
@@ -64,10 +77,13 @@ struct riscv_pmu {
 	void		(*event_mapped)(struct perf_event *event, struct mm_struct *mm);
 	void		(*event_unmapped)(struct perf_event *event, struct mm_struct *mm);
 	uint8_t		(*csr_index)(struct perf_event *event);
+	void		(*sched_task)(struct perf_event_pmu_context *ctx, bool sched_in);
 
 	struct cpu_hw_events	__percpu *hw_events;
 	struct hlist_node	node;
 	struct notifier_block	riscv_pm_nb;
+
+	unsigned int	ctr_depth;
 };
 
 #define to_riscv_pmu(p) (container_of(p, struct riscv_pmu, pmu))

From patchwork Wed May 29 18:53:36 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Rajnesh Kanwal
X-Patchwork-Id: 13679426
From: Rajnesh Kanwal
To: linux-kernel@vger.kernel.org
Cc: linux-perf-users@vger.kernel.org, linux-riscv@lists.infradead.org, adrian.hunter@intel.com, alexander.shishkin@linux.intel.com, ajones@ventanamicro.com, anup@brainfault.org, acme@kernel.org, atishp@rivosinc.com, beeman@rivosinc.com, brauner@kernel.org, conor@kernel.org, heiko@sntech.de, irogers@google.com, mingo@redhat.com, james.clark@arm.com, renyu.zj@linux.alibaba.com, jolsa@kernel.org, jisheng.teoh@starfivetech.com, palmer@dabbelt.com, tech-control-transfer-records@lists.riscv.org, will@kernel.org, kaiwenxue1@gmail.com, Rajnesh Kanwal
Subject: [PATCH RFC 5/6] riscv: perf: Add driver for Control Transfer Records Ext.
Date: Wed, 29 May 2024 19:53:36 +0100
Message-Id: <20240529185337.182722-6-rkanwal@rivosinc.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240529185337.182722-1-rkanwal@rivosinc.com>
References: <20240529185337.182722-1-rkanwal@rivosinc.com>
MIME-Version: 1.0

This adds support for the CTR extension defined in [0]. The extension
allows recording up to the last 256 branch records. The CTR extension
depends on the s[m|s]csrind and Sscofpmf extensions.

Signed-off-by: Rajnesh Kanwal
---
 MAINTAINERS                    |   1 +
 drivers/perf/Kconfig           |  11 +
 drivers/perf/Makefile          |   1 +
 drivers/perf/riscv_ctr.c       | 469 +++++++++++++++++++++++++++++++++
 include/linux/perf/riscv_pmu.h |  33 +++
 5 files changed, 515 insertions(+)
 create mode 100644 drivers/perf/riscv_ctr.c

diff --git a/MAINTAINERS b/MAINTAINERS
index d6b42d5f62da..868e4b0808ab 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -19056,6 +19056,7 @@ M: Atish Patra
 R:	Anup Patel
 L:	linux-riscv@lists.infradead.org
 S:	Supported
+F:	drivers/perf/riscv_ctr.c
 F:	drivers/perf/riscv_pmu_common.c
 F:	drivers/perf/riscv_pmu_dev.c
 F:	drivers/perf/riscv_pmu_legacy.c
diff --git a/drivers/perf/Kconfig b/drivers/perf/Kconfig
index 3c37577b25f7..cca6598be739 100644
--- a/drivers/perf/Kconfig
+++ b/drivers/perf/Kconfig
@@ -110,6 +110,17 @@ config ANDES_CUSTOM_PMU
 	  If you don't know what to do here, say "Y".
+config RISCV_CTR
+	bool "Enable support for Control Transfer Records (CTR)"
+	depends on PERF_EVENTS && RISCV_PMU
+	default y
+	help
+	  Enable support for Control Transfer Records (CTR), which allows
+	  recording branches, jumps, calls, returns, etc. taken in an
+	  execution path. It also supports privilege-based filtering and
+	  captures additional relevant information such as the cycle count
+	  and branch misprediction.
+
 config ARM_PMU_ACPI
 	depends on ARM_PMU && ACPI
 	def_bool y
diff --git a/drivers/perf/Makefile b/drivers/perf/Makefile
index ba809cc069d5..364b1f66f410 100644
--- a/drivers/perf/Makefile
+++ b/drivers/perf/Makefile
@@ -16,6 +16,7 @@ obj-$(CONFIG_RISCV_PMU_COMMON) += riscv_pmu_common.o
 obj-$(CONFIG_RISCV_PMU_LEGACY) += riscv_pmu_legacy.o
 obj-$(CONFIG_RISCV_PMU) += riscv_pmu_dev.o
 obj-$(CONFIG_STARFIVE_STARLINK_PMU) += starfive_starlink_pmu.o
+obj-$(CONFIG_RISCV_CTR) += riscv_ctr.o
 obj-$(CONFIG_THUNDERX2_PMU) += thunderx2_pmu.o
 obj-$(CONFIG_XGENE_PMU) += xgene_pmu.o
 obj-$(CONFIG_ARM_SPE_PMU) += arm_spe_pmu.o
diff --git a/drivers/perf/riscv_ctr.c b/drivers/perf/riscv_ctr.c
new file mode 100644
index 000000000000..95fda1edda4f
--- /dev/null
+++ b/drivers/perf/riscv_ctr.c
@@ -0,0 +1,469 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Control transfer records extension helpers.
+ *
+ * Copyright (C) 2024 Rivos Inc.
+ *
+ * Author: Rajnesh Kanwal
+ */
+
+#define pr_fmt(fmt) "CTR: " fmt
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#define CTR_BRANCH_FILTERS_INH	(CTRCTL_EXCINH | \
+				 CTRCTL_INTRINH | \
+				 CTRCTL_TRETINH | \
+				 CTRCTL_TKBRINH | \
+				 CTRCTL_INDCALL_INH | \
+				 CTRCTL_DIRCALL_INH | \
+				 CTRCTL_INDJUMP_INH | \
+				 CTRCTL_DIRJUMP_INH | \
+				 CTRCTL_CORSWAP_INH | \
+				 CTRCTL_RET_INH | \
+				 CTRCTL_INDOJUMP_INH | \
+				 CTRCTL_DIROJUMP_INH)
+
+#define CTR_BRANCH_ENABLE_BITS	(CTRCTL_KERNEL_ENABLE | CTRCTL_U_ENABLE)
+
+/* Branch filters not supported by the CTR extension.
*/ +#define CTR_EXCLUDE_BRANCH_FILTERS (PERF_SAMPLE_BRANCH_ABORT_TX | \ + PERF_SAMPLE_BRANCH_IN_TX | \ + PERF_SAMPLE_BRANCH_PRIV_SAVE | \ + PERF_SAMPLE_BRANCH_NO_TX | \ + PERF_SAMPLE_BRANCH_COUNTERS) + +/* Branch filters supported by CTR extension. */ +#define CTR_ALLOWED_BRANCH_FILTERS (PERF_SAMPLE_BRANCH_USER | \ + PERF_SAMPLE_BRANCH_KERNEL | \ + PERF_SAMPLE_BRANCH_HV | \ + PERF_SAMPLE_BRANCH_ANY | \ + PERF_SAMPLE_BRANCH_ANY_CALL | \ + PERF_SAMPLE_BRANCH_ANY_RETURN | \ + PERF_SAMPLE_BRANCH_IND_CALL | \ + PERF_SAMPLE_BRANCH_COND | \ + PERF_SAMPLE_BRANCH_IND_JUMP | \ + PERF_SAMPLE_BRANCH_HW_INDEX | \ + PERF_SAMPLE_BRANCH_NO_FLAGS | \ + PERF_SAMPLE_BRANCH_NO_CYCLES | \ + PERF_SAMPLE_BRANCH_CALL_STACK | \ + PERF_SAMPLE_BRANCH_CALL | \ + PERF_SAMPLE_BRANCH_TYPE_SAVE) + +#define CTR_PERF_BRANCH_FILTERS (CTR_ALLOWED_BRANCH_FILTERS | \ + CTR_EXCLUDE_BRANCH_FILTERS) + +static u64 allowed_filters __read_mostly; + +struct ctr_regset { + unsigned long src; + unsigned long target; + unsigned long ctr_data; +}; + +static inline u64 get_ctr_src_reg(unsigned int ctr_idx) +{ + return csr_ind_read(CSR_IREG, CTR_ENTRIES_FIRST, ctr_idx); +} + +static inline u64 get_ctr_tgt_reg(unsigned int ctr_idx) +{ + return csr_ind_read(CSR_IREG2, CTR_ENTRIES_FIRST, ctr_idx); +} + +static inline u64 get_ctr_data_reg(unsigned int ctr_idx) +{ + return csr_ind_read(CSR_IREG3, CTR_ENTRIES_FIRST, ctr_idx); +} + +static inline bool ctr_record_valid(u64 ctr_src) +{ + return !!FIELD_GET(CTRSOURCE_VALID, ctr_src); +} + +static inline int ctr_get_mispredict(u64 ctr_target) +{ + return FIELD_GET(CTRTARGET_MISP, ctr_target); +} + +static inline unsigned int ctr_get_cycles(u64 ctr_data) +{ + const unsigned int cce = FIELD_GET(CTRDATA_CCE_MASK, ctr_data); + const unsigned int ccm = FIELD_GET(CTRDATA_CCM_MASK, ctr_data); + + if (ctr_data & CTRDATA_CCV) + return 0; + + /* Formula to calculate cycles from spec: (2^12 + CCM) << CCE-1 */ + if (cce > 0) + return (4096 + ccm) << (cce - 1); + + return 
FIELD_GET(CTRDATA_CCM_MASK, ctr_data); +} + +static inline unsigned int ctr_get_type(u64 ctr_data) +{ + return FIELD_GET(CTRDATA_TYPE_MASK, ctr_data); +} + +static inline unsigned int ctr_get_depth(u64 ctr_depth) +{ + /* Depth table from CTR Spec: 2.4 sctrdepth. + * + * sctrdepth.depth Depth + * 000 - 16 + * 001 - 32 + * 010 - 64 + * 011 - 128 + * 100 - 256 + * + * Depth = 16 * 2 ^ (ctrdepth.depth) + * or + * Depth = 16 << ctrdepth.depth. + */ + return 16 << FIELD_GET(SCTRDEPTH_MASK, ctr_depth); +} + +/* Reads CTR entry at idx and stores it in entry struct. */ +static bool capture_ctr_regset(struct ctr_regset *entry, unsigned int idx) +{ + entry->src = get_ctr_src_reg(idx); + + if (!ctr_record_valid(entry->src)) + return false; + + entry->src = entry->src & (~CTRSOURCE_VALID); + entry->target = get_ctr_tgt_reg(idx); + entry->ctr_data = get_ctr_data_reg(idx); + + return true; +} + +static u64 branch_type_to_ctr(int branch_type) +{ + u64 config = CTR_BRANCH_FILTERS_INH | CTRCTL_LCOFIFRZ; + + if (branch_type & PERF_SAMPLE_BRANCH_USER) + config |= CTRCTL_U_ENABLE; + + if (branch_type & PERF_SAMPLE_BRANCH_KERNEL) + config |= CTRCTL_KERNEL_ENABLE; + + if (branch_type & PERF_SAMPLE_BRANCH_HV) { + if (riscv_isa_extension_available(NULL, h)) + config |= CTRCTL_KERNEL_ENABLE; + } + + if (branch_type & PERF_SAMPLE_BRANCH_ANY) { + config &= ~CTR_BRANCH_FILTERS_INH; + return config; + } + + if (branch_type & PERF_SAMPLE_BRANCH_ANY_CALL) { + config &= ~CTRCTL_INDCALL_INH; + config &= ~CTRCTL_DIRCALL_INH; + config &= ~CTRCTL_EXCINH; + config &= ~CTRCTL_INTRINH; + } + + if (branch_type & PERF_SAMPLE_BRANCH_ANY_RETURN) + config &= ~(CTRCTL_RET_INH | CTRCTL_TRETINH); + + if (branch_type & PERF_SAMPLE_BRANCH_IND_CALL) + config &= ~CTRCTL_INDCALL_INH; + + if (branch_type & PERF_SAMPLE_BRANCH_COND) + config &= ~CTRCTL_TKBRINH; + + if (branch_type & PERF_SAMPLE_BRANCH_CALL_STACK) { + config &= ~(CTRCTL_INDCALL_INH | CTRCTL_DIRCALL_INH | + CTRCTL_RET_INH); + config |= CTRCTL_RASEMU; + } 
+ + if (branch_type & PERF_SAMPLE_BRANCH_IND_JUMP) { + config &= ~CTRCTL_INDJUMP_INH; + config &= ~CTRCTL_INDOJUMP_INH; + } + + if (branch_type & PERF_SAMPLE_BRANCH_CALL) + config &= ~CTRCTL_DIRCALL_INH; + + return config; +} + +static const int ctr_perf_map[] = { + [CTRDATA_TYPE_NONE] = PERF_BR_UNKNOWN, + [CTRDATA_TYPE_EXCEPTION] = PERF_BR_SYSCALL, + [CTRDATA_TYPE_INTERRUPT] = PERF_BR_IRQ, + [CTRDATA_TYPE_TRAP_RET] = PERF_BR_ERET, + [CTRDATA_TYPE_NONTAKEN_BRANCH] = PERF_BR_COND, + [CTRDATA_TYPE_TAKEN_BRANCH] = PERF_BR_COND, + [CTRDATA_TYPE_RESERVED_6] = PERF_BR_UNKNOWN, + [CTRDATA_TYPE_RESERVED_7] = PERF_BR_UNKNOWN, + [CTRDATA_TYPE_INDIRECT_CALL] = PERF_BR_IND_CALL, + [CTRDATA_TYPE_DIRECT_CALL] = PERF_BR_CALL, + [CTRDATA_TYPE_INDIRECT_JUMP] = PERF_BR_UNCOND, + [CTRDATA_TYPE_DIRECT_JUMP] = PERF_BR_UNKNOWN, + [CTRDATA_TYPE_CO_ROUTINE_SWAP] = PERF_BR_UNKNOWN, + [CTRDATA_TYPE_RETURN] = PERF_BR_RET, + [CTRDATA_TYPE_OTHER_INDIRECT_JUMP] = PERF_BR_IND, + [CTRDATA_TYPE_OTHER_DIRECT_JUMP] = PERF_BR_UNKNOWN, +}; + +static void ctr_set_perf_entry_type(struct perf_branch_entry *entry, + u64 ctr_data) +{ + int ctr_type = ctr_get_type(ctr_data); + + entry->type = ctr_perf_map[ctr_type]; + if (entry->type == PERF_BR_UNKNOWN) + pr_warn("%d - unknown branch type captured\n", ctr_type); +} + +static void capture_ctr_flags(struct perf_branch_entry *entry, + struct perf_event *event, u64 ctr_data, + u64 ctr_target) +{ + if (branch_sample_type(event)) + ctr_set_perf_entry_type(entry, ctr_data); + + if (!branch_sample_no_cycles(event)) + entry->cycles = ctr_get_cycles(ctr_data); + + if (!branch_sample_no_flags(event)) { + entry->abort = 0; + entry->mispred = ctr_get_mispredict(ctr_target); + entry->predicted = !entry->mispred; + } + + if (branch_sample_priv(event)) + entry->priv = PERF_BR_PRIV_UNKNOWN; +} + + +static void ctr_regset_to_branch_entry(struct cpu_hw_events *cpuc, + struct perf_event *event, + struct ctr_regset *regset, + unsigned int idx) +{ + struct perf_branch_entry 
*entry = &cpuc->branches->branch_entries[idx]; + + perf_clear_branch_entry_bitfields(entry); + entry->from = regset->src; + entry->to = regset->target & (~CTRTARGET_MISP); + capture_ctr_flags(entry, event, regset->ctr_data, regset->target); +} + +static void ctr_read_entries(struct cpu_hw_events *cpuc, + struct perf_event *event, + unsigned int depth) +{ + struct ctr_regset entry = {}; + u64 ctr_ctl; + int i; + + ctr_ctl = csr_read_clear(CSR_CTRCTL, CTR_BRANCH_ENABLE_BITS); + + for (i = 0; i < depth; i++) { + if (!capture_ctr_regset(&entry, i)) + break; + + ctr_regset_to_branch_entry(cpuc, event, &entry, i); + } + + csr_set(CSR_CTRCTL, ctr_ctl & CTR_BRANCH_ENABLE_BITS); + + cpuc->branches->branch_stack.nr = i; + cpuc->branches->branch_stack.hw_idx = 0; +} + +bool riscv_pmu_ctr_valid(struct perf_event *event) +{ + u64 branch_type = event->attr.branch_sample_type; + + if (branch_type & ~allowed_filters) { + pr_debug_once("Requested branch filters not supported 0x%llx\n", + branch_type & ~allowed_filters); + return false; + } + + return true; +} + +void riscv_pmu_ctr_consume(struct cpu_hw_events *cpuc, struct perf_event *event) +{ + unsigned int depth = to_riscv_pmu(event->pmu)->ctr_depth; + + ctr_read_entries(cpuc, event, depth); + + /* Clear frozen bit. */ + csr_clear(CSR_SCTRSTATUS, SCTRSTATUS_FROZEN); +} + +static void riscv_pmu_ctr_clear(void) +{ + /* FIXME: Replace with sctrclr instruction once support is merged + * into toolchain. + */ + asm volatile(".4byte 0x10400073\n" ::: "memory"); + csr_write(CSR_SCTRSTATUS, 0); +} + +/* + * On context switch in, we need to make sure no samples from previous user + * are left in the CTR. 
+ * + * On ctxswin, sched_in = true, called after the PMU has started + * On ctxswout, sched_in = false, called before the PMU is stopped + */ +void riscv_pmu_ctr_sched_task(struct perf_event_pmu_context *pmu_ctx, + bool sched_in) +{ + struct riscv_pmu *rvpmu = to_riscv_pmu(pmu_ctx->pmu); + struct cpu_hw_events *cpuc = this_cpu_ptr(rvpmu->hw_events); + + if (cpuc->ctr_users && sched_in) + riscv_pmu_ctr_clear(); +} + +void riscv_pmu_ctr_enable(struct perf_event *event) +{ + struct riscv_pmu *rvpmu = to_riscv_pmu(event->pmu); + struct cpu_hw_events *cpuc = this_cpu_ptr(rvpmu->hw_events); + u64 branch_type = event->attr.branch_sample_type; + u64 ctr; + + if (!cpuc->ctr_users++ && !event->total_time_running) + riscv_pmu_ctr_clear(); + + ctr = branch_type_to_ctr(branch_type); + csr_write(CSR_CTRCTL, ctr); + + perf_sched_cb_inc(event->pmu); +} + +void riscv_pmu_ctr_disable(struct perf_event *event) +{ + struct riscv_pmu *rvpmu = to_riscv_pmu(event->pmu); + struct cpu_hw_events *cpuc = this_cpu_ptr(rvpmu->hw_events); + + /* Clear CTRCTL to disable the recording. */ + csr_write(CSR_CTRCTL, 0); + + cpuc->ctr_users--; + WARN_ON_ONCE(cpuc->ctr_users < 0); + + perf_sched_cb_dec(event->pmu); +} + +/* + * Check for hardware supported perf filters here. To avoid missing + * any new added filter in perf, we do a BUILD_BUG_ON check, so make sure + * to update CTR_ALLOWED_BRANCH_FILTERS or CTR_EXCLUDE_BRANCH_FILTERS + * defines when adding support for it in below function. + */ +static void __init check_available_filters(void) +{ + u64 ctr_ctl; + + /* + * Ensure both perf branch filter allowed and exclude + * masks are always in sync with the generic perf ABI. 
+ */ + BUILD_BUG_ON(CTR_PERF_BRANCH_FILTERS != (PERF_SAMPLE_BRANCH_MAX - 1)); + + allowed_filters = PERF_SAMPLE_BRANCH_USER | + PERF_SAMPLE_BRANCH_KERNEL | + PERF_SAMPLE_BRANCH_ANY | + PERF_SAMPLE_BRANCH_HW_INDEX | + PERF_SAMPLE_BRANCH_NO_FLAGS | + PERF_SAMPLE_BRANCH_NO_CYCLES | + PERF_SAMPLE_BRANCH_TYPE_SAVE; + + csr_write(CSR_CTRCTL, ~0); + ctr_ctl = csr_read(CSR_CTRCTL); + + if (riscv_isa_extension_available(NULL, h)) + allowed_filters |= PERF_SAMPLE_BRANCH_HV; + + if (ctr_ctl & (CTRCTL_INDCALL_INH | CTRCTL_DIRCALL_INH)) + allowed_filters |= PERF_SAMPLE_BRANCH_ANY_CALL; + + if (ctr_ctl & (CTRCTL_RET_INH | CTRCTL_TRETINH)) + allowed_filters |= PERF_SAMPLE_BRANCH_ANY_RETURN; + + if (ctr_ctl & CTRCTL_INDCALL_INH) + allowed_filters |= PERF_SAMPLE_BRANCH_IND_CALL; + + if (ctr_ctl & CTRCTL_TKBRINH) + allowed_filters |= PERF_SAMPLE_BRANCH_COND; + + if (ctr_ctl & CTRCTL_RASEMU) + allowed_filters |= PERF_SAMPLE_BRANCH_CALL_STACK; + + if (ctr_ctl & (CTRCTL_INDOJUMP_INH | CTRCTL_INDJUMP_INH)) + allowed_filters |= PERF_SAMPLE_BRANCH_IND_JUMP; + + if (ctr_ctl & CTRCTL_DIRCALL_INH) + allowed_filters |= PERF_SAMPLE_BRANCH_CALL; +} + +void riscv_pmu_ctr_starting_cpu(void) +{ + if (!riscv_isa_extension_available(NULL, SxCTR) || + !riscv_isa_extension_available(NULL, SSCOFPMF) || + !riscv_isa_extension_available(NULL, SxCSRIND)) + return; + + /* Set depth to maximum. */ + csr_write(CSR_SCTRDEPTH, SCTRDEPTH_MASK); +} + +void riscv_pmu_ctr_dying_cpu(void) +{ + if (!riscv_isa_extension_available(NULL, SxCTR) || + !riscv_isa_extension_available(NULL, SSCOFPMF) || + !riscv_isa_extension_available(NULL, SxCSRIND)) + return; + + /* Clear and reset CTR CSRs. 
*/ + csr_write(CSR_SCTRDEPTH, 0); + csr_write(CSR_CTRCTL, 0); + riscv_pmu_ctr_clear(); +} + +void __init riscv_pmu_ctr_init(struct riscv_pmu *riscv_pmu) +{ + if (!riscv_isa_extension_available(NULL, SxCTR) || + !riscv_isa_extension_available(NULL, SSCOFPMF) || + !riscv_isa_extension_available(NULL, SxCSRIND)) + return; + + check_available_filters(); + + /* Set depth to maximum. */ + csr_write(CSR_SCTRDEPTH, SCTRDEPTH_MASK); + riscv_pmu->ctr_depth = ctr_get_depth(csr_read(CSR_SCTRDEPTH)); + + pr_info("Perf CTR available, with %d depth\n", riscv_pmu->ctr_depth); +} + +void __init riscv_pmu_ctr_finish(struct riscv_pmu *riscv_pmu) +{ + if (!riscv_pmu_ctr_supported(riscv_pmu)) + return; + + csr_write(CSR_SCTRDEPTH, 0); + csr_write(CSR_CTRCTL, 0); + riscv_pmu_ctr_clear(); + riscv_pmu->ctr_depth = 0; +} diff --git a/include/linux/perf/riscv_pmu.h b/include/linux/perf/riscv_pmu.h index 5a6b840018bd..455d2386936f 100644 --- a/include/linux/perf/riscv_pmu.h +++ b/include/linux/perf/riscv_pmu.h @@ -104,6 +104,39 @@ struct riscv_pmu *riscv_pmu_alloc(void); int riscv_pmu_get_hpm_info(u32 *hw_ctr_width, u32 *num_hw_ctr); #endif +static inline bool riscv_pmu_ctr_supported(struct riscv_pmu *pmu) +{ + return !!pmu->ctr_depth; +} + #endif /* CONFIG_RISCV_PMU_COMMON */ +#ifdef CONFIG_RISCV_CTR + +bool riscv_pmu_ctr_valid(struct perf_event *event); +void riscv_pmu_ctr_consume(struct cpu_hw_events *cpuc, struct perf_event *event); +void riscv_pmu_ctr_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched_in); +void riscv_pmu_ctr_enable(struct perf_event *event); +void riscv_pmu_ctr_disable(struct perf_event *event); +void riscv_pmu_ctr_dying_cpu(void); +void riscv_pmu_ctr_starting_cpu(void); +void riscv_pmu_ctr_init(struct riscv_pmu *riscv_pmu); +void riscv_pmu_ctr_finish(struct riscv_pmu *riscv_pmu); + +#else + +static inline bool riscv_pmu_ctr_valid(struct perf_event *event) { return false; } +static inline void riscv_pmu_ctr_consume(struct cpu_hw_events *cpuc, + struct 
perf_event *event) { }
+static inline void riscv_pmu_ctr_sched_task(struct perf_event_pmu_context *,
+					    bool sched_in) { }
+static inline void riscv_pmu_ctr_enable(struct perf_event *event) { }
+static inline void riscv_pmu_ctr_disable(struct perf_event *event) { }
+static inline void riscv_pmu_ctr_dying_cpu(void) { }
+static inline void riscv_pmu_ctr_starting_cpu(void) { }
+static inline void riscv_pmu_ctr_init(struct riscv_pmu *riscv_pmu) { }
+static inline void riscv_pmu_ctr_finish(struct riscv_pmu *riscv_pmu) { }
+
+#endif /* CONFIG_RISCV_CTR */
+
 #endif /* _RISCV_PMU_H */

From patchwork Wed May 29 18:53:37 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Rajnesh Kanwal
X-Patchwork-Id: 13679425
From: Rajnesh Kanwal
To: linux-kernel@vger.kernel.org
Cc: linux-perf-users@vger.kernel.org, linux-riscv@lists.infradead.org, adrian.hunter@intel.com, alexander.shishkin@linux.intel.com, ajones@ventanamicro.com, anup@brainfault.org, acme@kernel.org, atishp@rivosinc.com, beeman@rivosinc.com, brauner@kernel.org, conor@kernel.org, heiko@sntech.de, irogers@google.com, mingo@redhat.com, james.clark@arm.com, renyu.zj@linux.alibaba.com, jolsa@kernel.org, jisheng.teoh@starfivetech.com, palmer@dabbelt.com, tech-control-transfer-records@lists.riscv.org, will@kernel.org, kaiwenxue1@gmail.com, Rajnesh Kanwal
Subject: [PATCH RFC 6/6] riscv: perf: Integrate CTR Ext support in riscv_pmu_dev driver
Date: Wed, 29 May 2024 19:53:37 +0100
Message-Id: <20240529185337.182722-7-rkanwal@rivosinc.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240529185337.182722-1-rkanwal@rivosinc.com>
References: <20240529185337.182722-1-rkanwal@rivosinc.com>
MIME-Version: 1.0
Precedence: list
This integrates the recently added CTR extension support into the
riscv_pmu_dev driver to enable branch stack sampling using PMU events.

This mainly adds CTR enable/disable callbacks in the rvpmu_ctr_stop()
and rvpmu_ctr_start() functions to start/stop branch recording along
with the event. The PMU overflow handler rvpmu_ovf_handler() is also
updated to sample CTR entries when an overflow occurs for an event
programmed to record branches. The recorded entries are fed to core
perf for further processing.

Signed-off-by: Rajnesh Kanwal
---
 drivers/perf/riscv_pmu_common.c |  3 +-
 drivers/perf/riscv_pmu_dev.c    | 77 +++++++++++++++++++++++++++------
 2 files changed, 65 insertions(+), 15 deletions(-)

diff --git a/drivers/perf/riscv_pmu_common.c b/drivers/perf/riscv_pmu_common.c
index e794675e4944..e1f3a33b479f 100644
--- a/drivers/perf/riscv_pmu_common.c
+++ b/drivers/perf/riscv_pmu_common.c
@@ -326,8 +326,7 @@ static int riscv_pmu_event_init(struct perf_event *event)
 	u64 event_config = 0;
 	uint64_t cmask;
 
-	/* driver does not support branch stack sampling */
-	if (has_branch_stack(event))
+	if (has_branch_stack(event) && !riscv_pmu_ctr_supported(rvpmu))
 		return -EOPNOTSUPP;
 
 	hwc->flags = 0;
diff --git a/drivers/perf/riscv_pmu_dev.c b/drivers/perf/riscv_pmu_dev.c
index 40ae5fc897a3..1b2c04c35bed 100644
--- a/drivers/perf/riscv_pmu_dev.c
+++ b/drivers/perf/riscv_pmu_dev.c
@@ -675,7 +675,7 @@ static void pmu_sched_task(struct perf_event_pmu_context *pmu_ctx,
 {
 	struct riscv_pmu *pmu = to_riscv_pmu(pmu_ctx->pmu);
 
-	/* Call CTR specific Sched hook.
*/ + riscv_pmu_ctr_sched_task(pmu_ctx, sched_in); } static int rvpmu_sbi_find_num_ctrs(void) @@ -935,17 +935,25 @@ static irqreturn_t rvpmu_ovf_handler(int irq, void *dev) hw_evt = &event->hw; riscv_pmu_event_update(event); perf_sample_data_init(&data, 0, hw_evt->last_period); - if (riscv_pmu_event_set_period(event)) { - /* - * Unlike other ISAs, RISC-V don't have to disable interrupts - * to avoid throttling here. As per the specification, the - * interrupt remains disabled until the OF bit is set. - * Interrupts are enabled again only during the start. - * TODO: We will need to stop the guest counters once - * virtualization support is added. - */ - perf_event_overflow(event, &data, regs); + if (!riscv_pmu_event_set_period(event)) + continue; + + if (needs_branch_stack(event)) { + riscv_pmu_ctr_consume(cpu_hw_evt, event); + perf_sample_save_brstack( + &data, event, + &cpu_hw_evt->branches->branch_stack, NULL); } + + /* + * Unlike other ISAs, RISC-V don't have to disable interrupts + * to avoid throttling here. As per the specification, the + * interrupt remains disabled until the OF bit is set. + * Interrupts are enabled again only during the start. + * TODO: We will need to stop the guest counters once + * virtualization support is added. 
+ */ + perf_event_overflow(event, &data, regs); } rvpmu_start_overflow_mask(pmu, overflowed_ctrs); @@ -1103,10 +1111,12 @@ static void rvpmu_ctr_start(struct perf_event *event, u64 ival) else rvpmu_sbi_ctr_start(event, ival); - if ((hwc->flags & PERF_EVENT_FLAG_USER_ACCESS) && (hwc->flags & PERF_EVENT_FLAG_USER_READ_CNT)) rvpmu_set_scounteren((void *)event); + + if (needs_branch_stack(event)) + riscv_pmu_ctr_enable(event); } static void rvpmu_ctr_stop(struct perf_event *event, unsigned long flag) @@ -1128,6 +1138,9 @@ static void rvpmu_ctr_stop(struct perf_event *event, unsigned long flag) } else { rvpmu_sbi_ctr_stop(event, flag); } + + if (needs_branch_stack(event)) + riscv_pmu_ctr_disable(event); } static int rvpmu_find_ctrs(void) @@ -1161,6 +1174,9 @@ static int rvpmu_find_ctrs(void) static int rvpmu_event_map(struct perf_event *event, u64 *econfig) { + if (needs_branch_stack(event) && !riscv_pmu_ctr_valid(event)) + return -EOPNOTSUPP; + if (static_branch_likely(&riscv_pmu_cdeleg_available) && !pmu_sbi_is_fw_event(event)) return rvpmu_deleg_event_map(event, econfig); else @@ -1207,6 +1223,8 @@ static int rvpmu_starting_cpu(unsigned int cpu, struct hlist_node *node) enable_percpu_irq(riscv_pmu_irq, IRQ_TYPE_NONE); } + riscv_pmu_ctr_starting_cpu(); + return 0; } @@ -1218,6 +1236,7 @@ static int rvpmu_dying_cpu(unsigned int cpu, struct hlist_node *node) /* Disable all counters access for user mode now */ csr_write(CSR_SCOUNTEREN, 0x0); + riscv_pmu_ctr_dying_cpu(); return 0; } @@ -1331,6 +1350,29 @@ static void riscv_pmu_destroy(struct riscv_pmu *pmu) cpuhp_state_remove_instance(CPUHP_AP_PERF_RISCV_STARTING, &pmu->node); } +static int branch_records_alloc(struct riscv_pmu *pmu) +{ + struct branch_records __percpu *tmp_alloc_ptr; + struct branch_records *records; + struct cpu_hw_events *events; + int cpu; + + if (!riscv_pmu_ctr_supported(pmu)) + return 0; + + tmp_alloc_ptr = alloc_percpu_gfp(struct branch_records, GFP_KERNEL); + if (!tmp_alloc_ptr) + return -ENOMEM; 
+ + for_each_possible_cpu(cpu) { + events = per_cpu_ptr(pmu->hw_events, cpu); + records = per_cpu_ptr(tmp_alloc_ptr, cpu); + events->branches = records; + } + + return 0; +} + static void rvpmu_event_init(struct perf_event *event) { /* @@ -1490,6 +1532,12 @@ static int rvpmu_device_probe(struct platform_device *pdev) pmu->pmu.attr_groups = riscv_cdeleg_pmu_attr_groups; else pmu->pmu.attr_groups = riscv_sbi_pmu_attr_groups; + + riscv_pmu_ctr_init(pmu); + ret = branch_records_alloc(pmu); + if (ret) + goto out_ctr_finish; + pmu->cmask = cmask; pmu->ctr_start = rvpmu_ctr_start; pmu->ctr_stop = rvpmu_ctr_stop; @@ -1506,7 +1554,7 @@ static int rvpmu_device_probe(struct platform_device *pdev) ret = cpuhp_state_add_instance(CPUHP_AP_PERF_RISCV_STARTING, &pmu->node); if (ret) - return ret; + goto out_ctr_finish; ret = riscv_pm_pmu_register(pmu); if (ret) @@ -1523,6 +1571,9 @@ static int rvpmu_device_probe(struct platform_device *pdev) out_unregister: riscv_pmu_destroy(pmu); +out_ctr_finish: + riscv_pmu_ctr_finish(pmu); + out_free: kfree(pmu); return ret;