From patchwork Tue Jan 14 22:57:37 2025
X-Patchwork-Submitter: Atish Patra
X-Patchwork-Id: 13939669
From: Atish Patra
Date: Tue, 14 Jan 2025 14:57:37 -0800
Subject: [PATCH v2 12/21] RISC-V: perf: Modify the counter discovery mechanism
Message-Id: <20250114-counter_delegation-v2-12-8ba74cdb851b@rivosinc.com>
References: <20250114-counter_delegation-v2-0-8ba74cdb851b@rivosinc.com>
In-Reply-To: <20250114-counter_delegation-v2-0-8ba74cdb851b@rivosinc.com>
To: Paul Walmsley, Palmer Dabbelt, Rob Herring, Krzysztof Kozlowski,
 Conor Dooley, Anup Patel, Atish Patra, Will Deacon, Mark Rutland,
 Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim,
 Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter, weilin.wang@intel.com
Cc: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
 Palmer Dabbelt, Conor Dooley, devicetree@vger.kernel.org, kvm@vger.kernel.org,
 kvm-riscv@lists.infradead.org, linux-arm-kernel@lists.infradead.org,
 linux-perf-users@vger.kernel.org, Atish Patra

If both counter delegation and the SBI PMU extension are present, counter
delegation will be used for the hardware PMU counters while the SBI PMU
extension will be used for the firmware counters. Thus, the driver has to
probe the counter information via the SBI PMU extension in order to
distinguish the firmware counters. The hybrid scheme also requires improved
informational log messages so that the user knows which underlying interface
is used for each use case.
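To make the resulting discovery flow easier to follow, here is a minimal
illustrative sketch. It is not additional patch content: it simply mirrors
rvpmu_find_ctrs() and the helpers introduced in the diff below, reuses the
cdeleg_available/sbi_available flags and RISCV_MAX_COUNTERS from the driver,
and the wrapper name sketch_find_ctrs() is hypothetical.

/* Illustrative sketch only -- mirrors rvpmu_find_ctrs() from the diff below */
static int sketch_find_ctrs(void)
{
	int num_deleg = 0, num_sbi = 0;

	/* hpmcounters are discovered via the counter delegation extensions */
	if (cdeleg_available)
		num_deleg = rvpmu_deleg_find_ctrs();

	/* firmware counters are still enumerated through the SBI PMU extension */
	if (sbi_available)
		num_sbi = rvpmu_sbi_find_num_ctrs();

	if (num_sbi >= RISCV_MAX_COUNTERS || num_deleg >= RISCV_MAX_COUNTERS)
		return -ENOSPC;

	/* parse the SBI counter info so firmware counters land in firmware_cmask */
	return rvpmu_sbi_get_ctrinfo(num_sbi, num_deleg);
}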
Signed-off-by: Atish Patra
---
 drivers/perf/riscv_pmu_dev.c | 118 ++++++++++++++++++++++++++++++++-----------
 1 file changed, 88 insertions(+), 30 deletions(-)

diff --git a/drivers/perf/riscv_pmu_dev.c b/drivers/perf/riscv_pmu_dev.c
index b69654554288..c7adda948b5d 100644
--- a/drivers/perf/riscv_pmu_dev.c
+++ b/drivers/perf/riscv_pmu_dev.c
@@ -66,6 +66,10 @@ static bool sbi_v2_available;
 static DEFINE_STATIC_KEY_FALSE(sbi_pmu_snapshot_available);
 #define sbi_pmu_snapshot_available() \
 	static_branch_unlikely(&sbi_pmu_snapshot_available)
+static DEFINE_STATIC_KEY_FALSE(riscv_pmu_sbi_available);
+static DEFINE_STATIC_KEY_FALSE(riscv_pmu_cdeleg_available);
+static bool cdeleg_available;
+static bool sbi_available;
 
 static struct attribute *riscv_arch_formats_attr[] = {
 	&format_attr_event.attr,
@@ -88,7 +92,8 @@ static int sysctl_perf_user_access __read_mostly = SYSCTL_USER_ACCESS;
 
 /*
  * This structure is SBI specific but counter delegation also require counter
- * width, csr mapping. Reuse it for now.
+ * width, csr mapping. Reuse it for now, as we can have firmware counters for
+ * platforms with counter delegation support.
  * RISC-V doesn't have heterogeneous harts yet. This need to be part of
  * per_cpu in case of harts with different pmu counters
  */
@@ -100,6 +105,8 @@ static unsigned int riscv_pmu_irq;
 
 /* Cache the available counters in a bitmask */
 static unsigned long cmask;
+/* Cache the available firmware counters in another bitmask */
+static unsigned long firmware_cmask;
 
 struct sbi_pmu_event_data {
 	union {
@@ -778,35 +785,49 @@ static int rvpmu_sbi_find_num_ctrs(void)
 	return sbi_err_map_linux_errno(ret.error);
 }
 
-static int rvpmu_sbi_get_ctrinfo(int nctr, unsigned long *mask)
+static int rvpmu_deleg_find_ctrs(void)
+{
+	/* TODO */
+	return -1;
+}
+
+static int rvpmu_sbi_get_ctrinfo(int nsbi_ctr, int ndeleg_ctr)
 {
 	struct sbiret ret;
-	int i, num_hw_ctr = 0, num_fw_ctr = 0;
+	int i, num_hw_ctr = 0, num_fw_ctr = 0, num_ctr = 0;
 	union sbi_pmu_ctr_info cinfo;
 
-	pmu_ctr_list = kcalloc(nctr, sizeof(*pmu_ctr_list), GFP_KERNEL);
-	if (!pmu_ctr_list)
-		return -ENOMEM;
-
-	for (i = 0; i < nctr; i++) {
+	for (i = 0; i < nsbi_ctr; i++) {
 		ret = sbi_ecall(SBI_EXT_PMU, SBI_EXT_PMU_COUNTER_GET_INFO, i, 0, 0, 0, 0, 0);
 		if (ret.error)
 			/* The logical counter ids are not expected to be contiguous */
 			continue;
 
-		*mask |= BIT(i);
-
 		cinfo.value = ret.value;
 		if (cinfo.type == SBI_PMU_CTR_TYPE_FW)
 			num_fw_ctr++;
-		else
+
+		if (!cdeleg_available) {
 			num_hw_ctr++;
-		pmu_ctr_list[i].value = cinfo.value;
+			cmask |= BIT(i);
+			pmu_ctr_list[i].value = cinfo.value;
+		} else if (cinfo.type == SBI_PMU_CTR_TYPE_FW) {
+			/* Track firmware counters in a different mask */
+			firmware_cmask |= BIT(i);
+			pmu_ctr_list[i].value = cinfo.value;
+		}
 	}
 
-	pr_info("%d firmware and %d hardware counters\n", num_fw_ctr, num_hw_ctr);
+	if (cdeleg_available) {
+		pr_info("%d firmware and %d hardware counters\n", num_fw_ctr, ndeleg_ctr);
+		num_ctr = num_fw_ctr + ndeleg_ctr;
+	} else {
+		pr_info("%d firmware and %d hardware counters\n", num_fw_ctr, num_hw_ctr);
+		num_ctr = nsbi_ctr;
+	}
 
-	return 0;
+	return num_ctr;
 }
 
 static inline void rvpmu_sbi_stop_all(struct riscv_pmu *pmu)
@@ -1067,16 +1088,33 @@ static void rvpmu_ctr_stop(struct perf_event *event, unsigned long flag)
 	/* TODO: Counter delegation implementation */
 }
 
-static int rvpmu_find_num_ctrs(void)
+static int rvpmu_find_ctrs(void)
 {
-	return rvpmu_sbi_find_num_ctrs();
-	/* TODO: Counter delegation implementation */
-}
+	int num_sbi_counters = 0, num_deleg_counters = 0, num_counters = 0;
 
-static int rvpmu_get_ctrinfo(int nctr, unsigned long *mask)
-{
-	return rvpmu_sbi_get_ctrinfo(nctr, mask);
-	/* TODO: Counter delegation implementation */
+	/*
+	 * We don't know how many firmware counters are available. Just allocate
+	 * for the maximum number the driver can support. The default is 64 anyway.
+	 */
+	pmu_ctr_list = kcalloc(RISCV_MAX_COUNTERS, sizeof(*pmu_ctr_list),
+			       GFP_KERNEL);
+	if (!pmu_ctr_list)
+		return -ENOMEM;
+
+	if (cdeleg_available)
+		num_deleg_counters = rvpmu_deleg_find_ctrs();
+
+	/* This is required for firmware counters even if the above is true */
+	if (sbi_available)
+		num_sbi_counters = rvpmu_sbi_find_num_ctrs();
+
+	if (num_sbi_counters >= RISCV_MAX_COUNTERS || num_deleg_counters >= RISCV_MAX_COUNTERS)
+		return -ENOSPC;
+
+	/* cache all the information about counters now */
+	num_counters = rvpmu_sbi_get_ctrinfo(num_sbi_counters, num_deleg_counters);
+
+	return num_counters;
 }
 
 static int rvpmu_event_map(struct perf_event *event, u64 *econfig)
@@ -1377,12 +1415,21 @@ static int rvpmu_device_probe(struct platform_device *pdev)
 	int ret = -ENODEV;
 	int num_counters;
 
-	pr_info("SBI PMU extension is available\n");
+	if (cdeleg_available) {
+		pr_info("hpmcounters will use the counter delegation ISA extension\n");
+		if (sbi_available)
+			pr_info("Firmware counters will use the SBI PMU extension\n");
+		else
+			pr_info("Firmware counters will not be available as the SBI PMU extension is not present\n");
+	} else if (sbi_available) {
+		pr_info("Both hpmcounters and firmware counters will use the SBI PMU extension\n");
+	}
+
 	pmu = riscv_pmu_alloc();
 	if (!pmu)
 		return -ENOMEM;
 
-	num_counters = rvpmu_find_num_ctrs();
+	num_counters = rvpmu_find_ctrs();
 	if (num_counters < 0) {
 		pr_err("SBI PMU extension doesn't provide any counters\n");
 		goto out_free;
@@ -1394,9 +1441,6 @@ static int rvpmu_device_probe(struct platform_device *pdev)
 		pr_info("SBI returned more than maximum number of counters. Limiting the number of counters to %d\n", num_counters);
 	}
 
-	/* cache all the information about counters now */
-	if (rvpmu_get_ctrinfo(num_counters, &cmask))
-		goto out_free;
 
 	ret = rvpmu_setup_irqs(pmu, pdev);
 	if (ret < 0) {
@@ -1486,13 +1530,27 @@ static int __init rvpmu_devinit(void)
 	int ret;
 	struct platform_device *pdev;
 
-	if (sbi_spec_version < sbi_mk_version(0, 3) ||
-	    !sbi_probe_extension(SBI_EXT_PMU)) {
-		return 0;
+	if (sbi_spec_version >= sbi_mk_version(0, 3) &&
+	    sbi_probe_extension(SBI_EXT_PMU)) {
+		static_branch_enable(&riscv_pmu_sbi_available);
+		sbi_available = true;
 	}
 
 	if (sbi_spec_version >= sbi_mk_version(2, 0))
 		sbi_v2_available = true;
 
+	/*
+	 * We need all three extensions to be present to access the counters
+	 * in S-mode via Supervisor Counter delegation.
+	 */
+	if (riscv_isa_extension_available(NULL, SSCCFG) &&
+	    riscv_isa_extension_available(NULL, SMCDELEG) &&
+	    riscv_isa_extension_available(NULL, SSCSRIND)) {
+		static_branch_enable(&riscv_pmu_cdeleg_available);
+		cdeleg_available = true;
+	}
+
+	if (!(sbi_available || cdeleg_available))
+		return 0;
 	ret = cpuhp_setup_state_multi(CPUHP_AP_PERF_RISCV_STARTING,
 				      "perf/riscv/pmu:starting",