From patchwork Thu Apr 13 16:17:22 2023
From: Alexandre Ghiti <alexghiti@rivosinc.com>
Subject: [PATCH 1/4] perf: Fix wrong comment about default event_idx
Date: Thu, 13 Apr 2023 18:17:22 +0200
Message-Id: <20230413161725.195417-2-alexghiti@rivosinc.com>
In-Reply-To: <20230413161725.195417-1-alexghiti@rivosinc.com>

The event_idx default implementation returns 0, not idx + 1.

Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
---
 include/linux/perf_event.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index d5628a7b5eaa..56fe43b20966 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -442,7 +442,8 @@ struct pmu {

 	/*
 	 * Will return the value for perf_event_mmap_page::index for this event,
-	 * if no implementation is provided it will default to: event->hw.idx + 1.
+	 * if no implementation is provided it will default to 0 (see
+	 * perf_event_idx_default).
 	 */
 	int		(*event_idx)		(struct perf_event *event); /*optional */
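[Editor's note] For reference, the default implementation the corrected comment points at is the trivial helper in kernel/events/core.c:

static int perf_event_idx_default(struct perf_event *event)
{
	return 0;
}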
From patchwork Thu Apr 13 16:17:23 2023

From: Alexandre Ghiti <alexghiti@rivosinc.com>
Subject: [PATCH 2/4] include: riscv: Fix wrong include guard in riscv_pmu.h
Date: Thu, 13 Apr 2023 18:17:23 +0200
Message-Id: <20230413161725.195417-3-alexghiti@rivosinc.com>
In-Reply-To: <20230413161725.195417-1-alexghiti@rivosinc.com>

The current include guard prevents the inclusion of asm/perf_event.h,
which uses the same include guard: fix the one in riscv_pmu.h so that it
matches the file name.
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Reviewed-by: Conor Dooley
---
 include/linux/perf/riscv_pmu.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/linux/perf/riscv_pmu.h b/include/linux/perf/riscv_pmu.h
index 43fc892aa7d9..9f70d94942e0 100644
--- a/include/linux/perf/riscv_pmu.h
+++ b/include/linux/perf/riscv_pmu.h
@@ -6,8 +6,8 @@
  *
  */

-#ifndef _ASM_RISCV_PERF_EVENT_H
-#define _ASM_RISCV_PERF_EVENT_H
+#ifndef _RISCV_PMU_H
+#define _RISCV_PMU_H

 #include
 #include
@@ -81,4 +81,4 @@ int riscv_pmu_get_hpm_info(u32 *hw_ctr_width, u32 *num_hw_ctr);

 #endif /* CONFIG_RISCV_PMU */

-#endif /* _ASM_RISCV_PERF_EVENT_H */
+#endif /* _RISCV_PMU_H */
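[Editor's note] To see why the stale guard matters: both headers used _ASM_RISCV_PERF_EVENT_H, so whichever was included second preprocessed to nothing. A minimal illustration of the failure mode (file contents abbreviated, not the real headers):

/* include/linux/perf/riscv_pmu.h, before this fix */
#ifndef _ASM_RISCV_PERF_EVENT_H
#define _ASM_RISCV_PERF_EVENT_H
struct riscv_pmu { /* ... */ };
#endif

/* arch/riscv/include/asm/perf_event.h, same guard */
#ifndef _ASM_RISCV_PERF_EVENT_H	/* already defined: body is skipped */
#define _ASM_RISCV_PERF_EVENT_H
#define perf_arch_bpf_user_pt_regs(regs) (struct user_regs_struct *)regs
#endif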
From patchwork Thu Apr 13 16:17:24 2023

From: Alexandre Ghiti <alexghiti@rivosinc.com>
Subject: [PATCH 3/4] riscv: Make legacy counter enum match the HW numbering
Date: Thu, 13 Apr 2023 18:17:24 +0200
Message-Id: <20230413161725.195417-4-alexghiti@rivosinc.com>
In-Reply-To: <20230413161725.195417-1-alexghiti@rivosinc.com>

RISCV_PMU_LEGACY_INSTRET used to be set to 1, whereas the offset of this
hardware counter from CSR_CYCLE is actually 2: make this offset match the
real HW offset so that we can directly expose those values to userspace.
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
---
 drivers/perf/riscv_pmu_legacy.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/perf/riscv_pmu_legacy.c b/drivers/perf/riscv_pmu_legacy.c
index ca9e20bfc7ac..0d8c9d8849ee 100644
--- a/drivers/perf/riscv_pmu_legacy.c
+++ b/drivers/perf/riscv_pmu_legacy.c
@@ -12,8 +12,11 @@
 #include
 #include

-#define RISCV_PMU_LEGACY_CYCLE		0
-#define RISCV_PMU_LEGACY_INSTRET	1
+enum {
+	RISCV_PMU_LEGACY_CYCLE,
+	RISCV_PMU_LEGACY_TIME,
+	RISCV_PMU_LEGACY_INSTRET
+};

 static bool pmu_init_done;
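[Editor's note] The enum values now equal each counter's offset from CSR_CYCLE in the unprivileged CSR space, which is what lets patch 4 expose them directly. For reference, the addresses from the RISC-V ISA spec (CSR_CYCLE and CSR_TIME also appear in the tools/lib/perf hunk of patch 4):

#define CSR_CYCLE	0xc00	/* offset 0 -> RISCV_PMU_LEGACY_CYCLE   */
#define CSR_TIME	0xc01	/* offset 1 -> RISCV_PMU_LEGACY_TIME    */
#define CSR_INSTRET	0xc02	/* offset 2 -> RISCV_PMU_LEGACY_INSTRET */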
From patchwork Thu Apr 13 16:17:25 2023

From: Alexandre Ghiti <alexghiti@rivosinc.com>
Subject: [PATCH 4/4] riscv: Enable perf counters user access only through perf
Date: Thu, 13 Apr 2023 18:17:25 +0200
Message-Id: <20230413161725.195417-5-alexghiti@rivosinc.com>
In-Reply-To: <20230413161725.195417-1-alexghiti@rivosinc.com>

We used to unconditionally expose the cycle and instret CSRs to
userspace, which gives rise to security concerns. So only allow access
to hw counters from userspace through the perf framework, which will
handle context switches, per-task events, etc. But as we cannot break
userspace, we give the user the choice to go back to the previous
behaviour by setting the sysctl perf_user_access.

We also introduce a means to directly map the hardware counters to
userspace, thus avoiding the need for syscalls whenever an application
wants to access counter values.

Note that arch_perf_update_userpage is a copy of the arm64 code.
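[Editor's note] To make the new default restriction concrete: with kernel.perf_user_access=1 and no mmap'ed perf event, a raw counter read from user space traps, because the kernel leaves the corresponding SCOUNTEREN bit clear. A hypothetical demonstration, not from the series:

/*
 * On a kernel carrying this series with perf_user_access=1, calling this
 * without an mmap'ed perf event delivers SIGILL: rdcycle is a read of the
 * cycle CSR, which U-mode may only do when SCOUNTEREN.CY is set.
 */
static inline unsigned long rdcycle_raw(void)
{
	unsigned long cycles;

	asm volatile ("rdcycle %0" : "=r" (cycles));
	return cycles;
}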
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
---
 Documentation/admin-guide/sysctl/kernel.rst |  23 +++-
 arch/riscv/include/asm/perf_event.h         |   3 +
 arch/riscv/kernel/Makefile                  |   2 +-
 arch/riscv/kernel/perf_event.c              |  65 +++++++++++
 drivers/perf/riscv_pmu.c                    |  42 ++++++++
 drivers/perf/riscv_pmu_legacy.c             |  17 +++
 drivers/perf/riscv_pmu_sbi.c                | 113 ++++++++++++++++++--
 include/linux/perf/riscv_pmu.h              |   3 +
 tools/lib/perf/mmap.c                       |  65 +++++++++++
 9 files changed, 322 insertions(+), 11 deletions(-)
 create mode 100644 arch/riscv/kernel/perf_event.c

diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
index 4b7bfea28cd7..02b2a40a3647 100644
--- a/Documentation/admin-guide/sysctl/kernel.rst
+++ b/Documentation/admin-guide/sysctl/kernel.rst
@@ -941,16 +941,31 @@ enabled, otherwise writing to this file will return ``-EBUSY``.
 The default value is 8.


-perf_user_access (arm64 only)
-=================================
+perf_user_access (arm64 and riscv only)
+=======================================
+
+Controls user space access for reading perf event counters.

-Controls user space access for reading perf event counters. When set to 1,
-user space can read performance monitor counter registers directly.
+arm64
+=====

 The default value is 0 (access disabled).

+When set to 1, user space can read performance monitor counter registers
+directly.
+
 See Documentation/arm64/perf.rst for more information.

+riscv
+=====
+
+When set to 0, user access is disabled.
+
+When set to 1, user space can read performance monitor counter registers
+directly, but only through perf; any direct access without perf
+intervention will trigger an illegal instruction.
+
+The default value is 2, which enables the legacy mode: user space has
+direct access to the cycle, time and instret CSRs only.
+
 pid_max
 =======
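[Editor's note] A hedged sketch of driving the new knob from a test harness; the path follows from the register_sysctl("kernel", ...) call later in this patch, and the helper name is hypothetical:

#include <fcntl.h>
#include <unistd.h>

/* Select the perf_user_access mode (0, 1 or 2); requires privilege. */
static int set_perf_user_access(int mode)
{
	char c = '0' + mode;
	int fd = open("/proc/sys/kernel/perf_user_access", O_WRONLY);

	if (fd < 0)
		return -1;
	if (write(fd, &c, 1) != 1) {
		close(fd);
		return -1;
	}
	return close(fd);
}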
diff --git a/arch/riscv/include/asm/perf_event.h b/arch/riscv/include/asm/perf_event.h
index d42c901f9a97..9fdfdd9dc92d 100644
--- a/arch/riscv/include/asm/perf_event.h
+++ b/arch/riscv/include/asm/perf_event.h
@@ -9,5 +9,8 @@
 #define _ASM_RISCV_PERF_EVENT_H

 #include
+
+#define PERF_EVENT_FLAG_LEGACY	1
+
 #define perf_arch_bpf_user_pt_regs(regs) (struct user_regs_struct *)regs
 #endif /* _ASM_RISCV_PERF_EVENT_H */

diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
index aa22f87faeae..9ae951b07847 100644
--- a/arch/riscv/kernel/Makefile
+++ b/arch/riscv/kernel/Makefile
@@ -70,7 +70,7 @@ obj-$(CONFIG_DYNAMIC_FTRACE)	+= mcount-dyn.o

 obj-$(CONFIG_TRACE_IRQFLAGS)	+= trace_irq.o

-obj-$(CONFIG_PERF_EVENTS)	+= perf_callchain.o
+obj-$(CONFIG_PERF_EVENTS)	+= perf_callchain.o perf_event.o
 obj-$(CONFIG_HAVE_PERF_REGS)	+= perf_regs.o
 obj-$(CONFIG_RISCV_SBI)		+= sbi.o
 ifeq ($(CONFIG_RISCV_SBI), y)

diff --git a/arch/riscv/kernel/perf_event.c b/arch/riscv/kernel/perf_event.c
new file mode 100644
index 000000000000..4a75ab628bfb
--- /dev/null
+++ b/arch/riscv/kernel/perf_event.c
@@ -0,0 +1,65 @@
+// SPDX-License-Identifier: GPL-2.0-only
+#include <linux/perf/riscv_pmu.h>
+#include <linux/sched_clock.h>
+
+void arch_perf_update_userpage(struct perf_event *event,
+			       struct perf_event_mmap_page *userpg, u64 now)
+{
+	struct riscv_pmu *rvpmu = to_riscv_pmu(event->pmu);
+	struct clock_read_data *rd;
+	unsigned int seq;
+	u64 ns;
+
+	userpg->cap_user_time = 0;
+	userpg->cap_user_time_zero = 0;
+	userpg->cap_user_time_short = 0;
+	userpg->cap_user_rdpmc =
+		!!(event->hw.flags & PERF_EVENT_FLAG_USER_READ_CNT);
+
+	/*
+	 * The counters are 64-bit but the priv spec doesn't mandate all the
+	 * bits to be implemented: that's why, counter width can vary based on
+	 * the cpu vendor.
+	 */
+	userpg->pmc_width = rvpmu->ctr_get_width(event->hw.idx) + 1;
+
+	do {
+		rd = sched_clock_read_begin(&seq);
+
+		userpg->time_mult = rd->mult;
+		userpg->time_shift = rd->shift;
+		userpg->time_zero = rd->epoch_ns;
+		userpg->time_cycles = rd->epoch_cyc;
+		userpg->time_mask = rd->sched_clock_mask;
+
+		/*
+		 * Subtract the cycle base, such that software that
+		 * doesn't know about cap_user_time_short still 'works'
+		 * assuming no wraps.
+		 */
+		ns = mul_u64_u32_shr(rd->epoch_cyc, rd->mult, rd->shift);
+		userpg->time_zero -= ns;
+
+	} while (sched_clock_read_retry(seq));
+
+	userpg->time_offset = userpg->time_zero - now;
+
+	/*
+	 * time_shift is not expected to be greater than 31 due to
+	 * the original published conversion algorithm shifting a
+	 * 32-bit value (now specifies a 64-bit value) - refer
+	 * perf_event_mmap_page documentation in perf_event.h.
+	 */
+	if (userpg->time_shift == 32) {
+		userpg->time_shift = 31;
+		userpg->time_mult >>= 1;
+	}
+
+	/*
+	 * Internal timekeeping for enabled/running/stopped times
+	 * is always computed with the sched_clock.
+	 */
+	userpg->cap_user_time = 1;
+	userpg->cap_user_time_zero = 1;
+	userpg->cap_user_time_short = 1;
+}
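[Editor's note] The time_* fields filled in above let user space convert raw timebase values to sched_clock nanoseconds without a syscall. A sketch of the conversion as documented for perf_event_mmap_page in include/uapi/linux/perf_event.h (simplified: it ignores the cap_user_time_short wrap handling):

#include <stdint.h>

static uint64_t cyc_to_ns(uint64_t cyc, uint32_t time_mult,
			  uint16_t time_shift, uint64_t time_zero)
{
	uint64_t quot = cyc >> time_shift;
	uint64_t rem  = cyc & (((uint64_t)1 << time_shift) - 1);

	/* ns = time_zero + cyc * time_mult / 2^time_shift, without overflow */
	return time_zero + quot * time_mult +
	       ((rem * time_mult) >> time_shift);
}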
diff --git a/drivers/perf/riscv_pmu.c b/drivers/perf/riscv_pmu.c
index ebca5eab9c9b..12675ee1123c 100644
--- a/drivers/perf/riscv_pmu.c
+++ b/drivers/perf/riscv_pmu.c
@@ -171,6 +171,8 @@ int riscv_pmu_event_set_period(struct perf_event *event)

 	local64_set(&hwc->prev_count, (u64)-left);

+	perf_event_update_userpage(event);
+
 	return overflow;
 }

@@ -283,6 +285,43 @@ static int riscv_pmu_event_init(struct perf_event *event)
 	return 0;
 }

+static int riscv_pmu_event_idx(struct perf_event *event)
+{
+	struct riscv_pmu *rvpmu = to_riscv_pmu(event->pmu);
+
+	if (!(event->hw.flags & PERF_EVENT_FLAG_USER_READ_CNT))
+		return 0;
+
+	/*
+	 * cycle and instret can either be retrieved from their fixed counters
+	 * or from programmable counters, the latter being the preferred way
+	 * since cycle and instret counters do not support sampling.
+	 */
+
+	return rvpmu->csr_index(event) + 1;
+}
+
+static void riscv_pmu_event_mapped(struct perf_event *event, struct mm_struct *mm)
+{
+	/*
+	 * The user mmapped the event to directly access it: this is where
+	 * we determine based on sysctl_perf_user_access if we grant userspace
+	 * the direct access to this event. That means that within the same
+	 * task, some events may be directly accessible and some other may not,
+	 * if the user changes the value of sysctl_perf_user_access in the
+	 * meantime.
+	 */
+	struct riscv_pmu *rvpmu = to_riscv_pmu(event->pmu);
+
+	event->hw.flags |= rvpmu->event_flags(event);
+	perf_event_update_userpage(event);
+}
+
+static void riscv_pmu_event_unmapped(struct perf_event *event, struct mm_struct *mm)
+{
+	event->hw.flags &= ~PERF_EVENT_FLAG_USER_READ_CNT;
+}
+
 struct riscv_pmu *riscv_pmu_alloc(void)
 {
 	struct riscv_pmu *pmu;
@@ -307,6 +346,9 @@ struct riscv_pmu *riscv_pmu_alloc(void)
 	}
 	pmu->pmu = (struct pmu) {
 		.event_init	= riscv_pmu_event_init,
+		.event_mapped	= riscv_pmu_event_mapped,
+		.event_unmapped	= riscv_pmu_event_unmapped,
+		.event_idx	= riscv_pmu_event_idx,
 		.add		= riscv_pmu_add,
 		.del		= riscv_pmu_del,
 		.start		= riscv_pmu_start,

diff --git a/drivers/perf/riscv_pmu_legacy.c b/drivers/perf/riscv_pmu_legacy.c
index 0d8c9d8849ee..35c4c9097a0f 100644
--- a/drivers/perf/riscv_pmu_legacy.c
+++ b/drivers/perf/riscv_pmu_legacy.c
@@ -74,6 +74,21 @@ static void pmu_legacy_ctr_start(struct perf_event *event, u64 ival)
 	local64_set(&hwc->prev_count, initial_val);
 }

+static uint8_t pmu_legacy_csr_index(struct perf_event *event)
+{
+	return event->hw.idx;
+}
+
+static int pmu_legacy_event_flags(struct perf_event *event)
+{
+	/* In legacy mode, the first 3 CSRs are available. */
+	if (event->attr.config != PERF_COUNT_HW_CPU_CYCLES &&
+	    event->attr.config != PERF_COUNT_HW_INSTRUCTIONS)
+		return 0;
+
+	return PERF_EVENT_FLAG_USER_READ_CNT;
+}
+
 /*
  * This is just a simple implementation to allow legacy implementations
  * compatible with new RISC-V PMU driver framework.
@@ -94,6 +109,8 @@ static void pmu_legacy_init(struct riscv_pmu *pmu)
 	pmu->ctr_get_width = NULL;
 	pmu->ctr_clear_idx = NULL;
 	pmu->ctr_read = pmu_legacy_read_ctr;
+	pmu->event_flags = pmu_legacy_event_flags;
+	pmu->csr_index = pmu_legacy_csr_index;

 	perf_pmu_register(&pmu->pmu, "cpu", PERF_TYPE_RAW);
 }

diff --git a/drivers/perf/riscv_pmu_sbi.c b/drivers/perf/riscv_pmu_sbi.c
index 70cb50fd41c2..af7f3128b6b8 100644
--- a/drivers/perf/riscv_pmu_sbi.c
+++ b/drivers/perf/riscv_pmu_sbi.c
@@ -24,6 +24,10 @@
 #include
 #include

+#define SYSCTL_NO_USER_ACCESS	0
+#define SYSCTL_USER_ACCESS	1
+#define SYSCTL_LEGACY		2
+
 PMU_FORMAT_ATTR(event, "config:0-47");
 PMU_FORMAT_ATTR(firmware, "config:63");

@@ -43,6 +47,9 @@ static const struct attribute_group *riscv_pmu_attr_groups[] = {
 	NULL,
 };

+/* Allow legacy access by default */
+static int sysctl_perf_user_access __read_mostly = SYSCTL_LEGACY;
+
 /*
  * RISC-V doesn't have heterogeneous harts yet. This need to be part of
  * per_cpu in case of harts with different pmu counters
@@ -301,6 +308,11 @@ int riscv_pmu_get_hpm_info(u32 *hw_ctr_width, u32 *num_hw_ctr)
 }
 EXPORT_SYMBOL_GPL(riscv_pmu_get_hpm_info);

+static uint8_t pmu_sbi_csr_index(struct perf_event *event)
+{
+	return pmu_ctr_list[event->hw.idx].csr - CSR_CYCLE;
+}
+
 static unsigned long pmu_sbi_get_filter_flags(struct perf_event *event)
 {
 	unsigned long cflags = 0;
@@ -329,18 +341,30 @@ static int pmu_sbi_ctr_get_idx(struct perf_event *event)
 	struct cpu_hw_events *cpuc = this_cpu_ptr(rvpmu->hw_events);
 	struct sbiret ret;
 	int idx;
-	uint64_t cbase = 0;
+	uint64_t cbase = 0, cmask = rvpmu->cmask;
 	unsigned long cflags = 0;

 	cflags = pmu_sbi_get_filter_flags(event);
+
+	/* In legacy mode, we have to force the fixed counters for those events */
+	if (hwc->flags & PERF_EVENT_FLAG_LEGACY) {
+		if (event->attr.config == PERF_COUNT_HW_CPU_CYCLES) {
+			cflags |= SBI_PMU_CFG_FLAG_SKIP_MATCH;
+			cmask = 1;
+		} else if (event->attr.config == PERF_COUNT_HW_INSTRUCTIONS) {
+			cflags |= SBI_PMU_CFG_FLAG_SKIP_MATCH;
+			cmask = 1UL << (CSR_INSTRET - CSR_CYCLE);
+		}
+	}
+
 	/* retrieve the available counter index */
 #if defined(CONFIG_32BIT)
 	ret = sbi_ecall(SBI_EXT_PMU, SBI_EXT_PMU_COUNTER_CFG_MATCH, cbase,
-			rvpmu->cmask, cflags, hwc->event_base, hwc->config,
+			cmask, cflags, hwc->event_base, hwc->config,
 			hwc->config >> 32);
 #else
 	ret = sbi_ecall(SBI_EXT_PMU, SBI_EXT_PMU_COUNTER_CFG_MATCH, cbase,
-			rvpmu->cmask, cflags, hwc->event_base, hwc->config, 0);
+			cmask, cflags, hwc->event_base, hwc->config, 0);
 #endif
 	if (ret.error) {
 		pr_debug("Not able to find a counter for event %lx config %llx\n",
@@ -490,6 +514,11 @@ static void pmu_sbi_ctr_start(struct perf_event *event, u64 ival)
 	if (ret.error && (ret.error != SBI_ERR_ALREADY_STARTED))
 		pr_err("Starting counter idx %d failed with error %d\n",
 			hwc->idx, sbi_err_map_linux_errno(ret.error));
+
+	if (!(event->hw.flags & PERF_EVENT_FLAG_LEGACY) &&
+	    event->hw.flags & PERF_EVENT_FLAG_USER_READ_CNT)
+		csr_write(CSR_SCOUNTEREN,
+			  csr_read(CSR_SCOUNTEREN) | (1 << pmu_sbi_csr_index(event)));
 }

 static void pmu_sbi_ctr_stop(struct perf_event *event, unsigned long flag)
@@ -497,6 +526,11 @@ static void pmu_sbi_ctr_stop(struct perf_event *event, unsigned long flag)
 	struct sbiret ret;
 	struct hw_perf_event *hwc = &event->hw;

+	if (!(event->hw.flags & PERF_EVENT_FLAG_LEGACY) &&
+	    event->hw.flags & PERF_EVENT_FLAG_USER_READ_CNT)
+		csr_write(CSR_SCOUNTEREN,
+			  csr_read(CSR_SCOUNTEREN) & ~(1 << pmu_sbi_csr_index(event)));
+
 	ret = sbi_ecall(SBI_EXT_PMU, SBI_EXT_PMU_COUNTER_STOP, hwc->idx, 1, flag, 0, 0, 0);
 	if (ret.error && (ret.error != SBI_ERR_ALREADY_STOPPED) &&
 	    flag != SBI_PMU_STOP_FLAG_RESET)
@@ -704,10 +738,13 @@ static int pmu_sbi_starting_cpu(unsigned int cpu, struct hlist_node *node)
 	struct cpu_hw_events *cpu_hw_evt = this_cpu_ptr(pmu->hw_events);

 	/*
-	 * Enable the access for CYCLE, TIME, and INSTRET CSRs from userspace,
-	 * as is necessary to maintain uABI compatibility.
+	 * We keep enabling userspace access to CYCLE, TIME and INSTRET via the
+	 * legacy option but that will be removed in the future.
 	 */
-	csr_write(CSR_SCOUNTEREN, 0x7);
+	if (sysctl_perf_user_access == SYSCTL_LEGACY)
+		csr_write(CSR_SCOUNTEREN, 0x7);
+	else
+		csr_write(CSR_SCOUNTEREN, 0x2);

 	/* Stop all the counters so that they can be enabled from perf */
 	pmu_sbi_stop_all(pmu);
@@ -851,6 +888,66 @@ static void riscv_pmu_destroy(struct riscv_pmu *pmu)
 	cpuhp_state_remove_instance(CPUHP_AP_PERF_RISCV_STARTING, &pmu->node);
 }

+static int pmu_sbi_event_flags(struct perf_event *event)
+{
+	if (sysctl_perf_user_access == SYSCTL_NO_USER_ACCESS)
+		return 0;
+
+	/* In legacy mode, the first 3 CSRs are available. */
+	if (sysctl_perf_user_access == SYSCTL_LEGACY) {
+		int flags = PERF_EVENT_FLAG_LEGACY;
+
+		if (event->attr.config == PERF_COUNT_HW_CPU_CYCLES ||
+		    event->attr.config == PERF_COUNT_HW_INSTRUCTIONS)
+			flags |= PERF_EVENT_FLAG_USER_READ_CNT;
+
+		return flags;
+	}
+
+	return PERF_EVENT_FLAG_USER_READ_CNT;
+}
+
+static void riscv_pmu_update_counter_access(void *info)
+{
+	if (sysctl_perf_user_access == SYSCTL_LEGACY)
+		csr_write(CSR_SCOUNTEREN, 0x7);
+	else
+		csr_write(CSR_SCOUNTEREN, 0x2);
+}
+
+static int riscv_pmu_proc_user_access_handler(struct ctl_table *table,
+					      int write, void *buffer,
+					      size_t *lenp, loff_t *ppos)
+{
+	int prev = sysctl_perf_user_access;
+	int ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
+
+	/*
+	 * Test against the previous value since we clear SCOUNTEREN when
+	 * sysctl_perf_user_access is set to SYSCTL_USER_ACCESS, but we should
+	 * not do that if that was already the case.
+	 */
+	if (ret || !write || prev == sysctl_perf_user_access)
+		return ret;
+
+	on_each_cpu(riscv_pmu_update_counter_access, (void *)&prev, 1);
+
+	return 0;
+}
+
+static struct ctl_table sbi_pmu_sysctl_table[] = {
+	{
+		.procname	= "perf_user_access",
+		.data		= &sysctl_perf_user_access,
+		.maxlen		= sizeof(unsigned int),
+		.mode		= 0644,
+		.proc_handler	= riscv_pmu_proc_user_access_handler,
+		.extra1		= SYSCTL_ZERO,
+		.extra2		= SYSCTL_TWO,
+	},
+	{ }
+};
+
 static int pmu_sbi_device_probe(struct platform_device *pdev)
 {
 	struct riscv_pmu *pmu = NULL;
@@ -888,6 +985,8 @@ static int pmu_sbi_device_probe(struct platform_device *pdev)
 	pmu->ctr_get_width = pmu_sbi_ctr_get_width;
 	pmu->ctr_clear_idx = pmu_sbi_ctr_clear_idx;
 	pmu->ctr_read = pmu_sbi_ctr_read;
+	pmu->event_flags = pmu_sbi_event_flags;
+	pmu->csr_index = pmu_sbi_csr_index;

 	ret = cpuhp_state_add_instance(CPUHP_AP_PERF_RISCV_STARTING, &pmu->node);
 	if (ret)
@@ -901,6 +1000,8 @@ static int pmu_sbi_device_probe(struct platform_device *pdev)
 	if (ret)
 		goto out_unregister;

+	register_sysctl("kernel", sbi_pmu_sysctl_table);
+
 	return 0;

 out_unregister:

diff --git a/include/linux/perf/riscv_pmu.h b/include/linux/perf/riscv_pmu.h
index 9f70d94942e0..ba19634d815c 100644
--- a/include/linux/perf/riscv_pmu.h
+++ b/include/linux/perf/riscv_pmu.h
@@ -12,6 +12,7 @@
 #include
 #include
 #include
+#include

 #ifdef CONFIG_RISCV_PMU

@@ -55,6 +56,8 @@ struct riscv_pmu {
 	void		(*ctr_start)(struct perf_event *event, u64 init_val);
 	void		(*ctr_stop)(struct perf_event *event, unsigned long flag);
 	int		(*event_map)(struct perf_event *event, u64 *config);
+	int		(*event_flags)(struct perf_event *event);
+	uint8_t		(*csr_index)(struct perf_event *event);

 	struct cpu_hw_events __percpu *hw_events;
 	struct hlist_node	node;

diff --git a/tools/lib/perf/mmap.c b/tools/lib/perf/mmap.c
index 0d1634cedf44..18f2abb1584a 100644
--- a/tools/lib/perf/mmap.c
+++ b/tools/lib/perf/mmap.c
@@ -392,6 +392,71 @@ static u64 read_perf_counter(unsigned int counter)

 static u64 read_timestamp(void) { return read_sysreg(cntvct_el0); }

+#elif defined(__riscv) && __riscv_xlen == 64
+
+#define CSR_CYCLE	0xc00
+#define CSR_TIME	0xc01
+#define CSR_CYCLEH	0xc80
+
+#define csr_read(csr)						\
+({								\
+	register unsigned long __v;				\
+	__asm__ __volatile__ ("csrr %0, " #csr		\
+			      : "=r" (__v) :			\
+			      : "memory");			\
+	__v;							\
+})
+
+static unsigned long csr_read_num(int csr_num)
+{
+#define switchcase_csr_read(__csr_num, __val)	{\
+	case __csr_num:				\
+		__val = csr_read(__csr_num);	\
+		break; }
+#define switchcase_csr_read_2(__csr_num, __val)	{\
+	switchcase_csr_read(__csr_num + 0, __val)	\
+	switchcase_csr_read(__csr_num + 1, __val)}
+#define switchcase_csr_read_4(__csr_num, __val)	{\
+	switchcase_csr_read_2(__csr_num + 0, __val)	\
+	switchcase_csr_read_2(__csr_num + 2, __val)}
+#define switchcase_csr_read_8(__csr_num, __val)	{\
+	switchcase_csr_read_4(__csr_num + 0, __val)	\
+	switchcase_csr_read_4(__csr_num + 4, __val)}
+#define switchcase_csr_read_16(__csr_num, __val)	{\
+	switchcase_csr_read_8(__csr_num + 0, __val)	\
+	switchcase_csr_read_8(__csr_num + 8, __val)}
+#define switchcase_csr_read_32(__csr_num, __val)	{\
+	switchcase_csr_read_16(__csr_num + 0, __val)	\
+	switchcase_csr_read_16(__csr_num + 16, __val)}
+
+	unsigned long ret = 0;
+
+	switch (csr_num) {
+	switchcase_csr_read_32(CSR_CYCLE, ret)
+	switchcase_csr_read_32(CSR_CYCLEH, ret)
+	default:
+		break;
+	}
+
+	return ret;
+#undef switchcase_csr_read_32
+#undef switchcase_csr_read_16
+#undef switchcase_csr_read_8
+#undef switchcase_csr_read_4
+#undef switchcase_csr_read_2
+#undef switchcase_csr_read
+}
+
+static u64 read_perf_counter(unsigned int counter)
+{
+	return csr_read_num(CSR_CYCLE + counter);
+}
+
+static u64 read_timestamp(void)
+{
+	return csr_read_num(CSR_TIME);
+}
+
 #else
 static u64 read_perf_counter(unsigned int counter __maybe_unused) { return 0; }
 static u64 read_timestamp(void) { return 0; }
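[Editor's note] Putting the pieces together, a hedged end-to-end sketch (not part of the series) of direct counter access on a kernel with these patches: open an event, mmap its user page (which invokes riscv_pmu_event_mapped() above), then read the counter under the perf_event_mmap_page seqlock. read_perf_counter() stands in for the RISC-V helper from the tools/lib/perf/mmap.c hunk above; error handling is minimal.

#include <linux/perf_event.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

uint64_t read_perf_counter(unsigned int counter);	/* e.g. the helper above */

int main(void)
{
	struct perf_event_attr attr = {
		.type = PERF_TYPE_HARDWARE,
		.size = sizeof(attr),
		.config = PERF_COUNT_HW_CPU_CYCLES,
		.exclude_kernel = 1,
	};
	struct perf_event_mmap_page *pc;
	uint64_t count;
	uint32_t seq, idx;
	int fd;

	fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
	if (fd < 0)
		return 1;

	/* This mmap is what grants (or denies) direct access, per event_mapped */
	pc = mmap(NULL, sysconf(_SC_PAGE_SIZE), PROT_READ, MAP_SHARED, fd, 0);
	if (pc == MAP_FAILED)
		return 1;

	do {
		seq = pc->lock;
		__sync_synchronize();
		idx = pc->index;	/* csr offset + 1; 0 means no user access */
		count = pc->offset;
		if (idx)
			count += read_perf_counter(idx - 1);
		__sync_synchronize();
	} while (pc->lock != seq);

	printf("cycles: %lu\n", (unsigned long)count);
	close(fd);
	return 0;
}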