From patchwork Fri May 26 21:53:53 2023
X-Patchwork-Submitter: Ian Rogers
X-Patchwork-Id: 13257426
Date: Fri, 26 May 2023 14:53:53 -0700
In-Reply-To: <20230526215410.2435674-1-irogers@google.com>
Message-Id: <20230526215410.2435674-19-irogers@google.com>
References: <20230526215410.2435674-1-irogers@google.com>
Subject: [PATCH v4 18/35] perf x86: Iterate hybrid PMUs as core PMUs
From: Ian Rogers
To: Suzuki K Poulose, Mike Leach, Leo Yan, John Garry, Will Deacon,
 James Clark, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
 Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim, Ian Rogers,
 Adrian Hunter, Kajol Jain, Jing Zhang, Kan Liang, Zhengjun Xing,
 Ravi Bangoria, Madhavan Srinivasan, Athira Rajeev, Ming Wang,
 Huacai Chen, Sandipan Das, Dmitrii Dolgov <9erthalion6@gmail.com>,
 Sean Christopherson, Ali Saidi, Rob Herring, Thomas Richter,
 Kang Minchul, linux-kernel@vger.kernel.org, coresight@lists.linaro.org,
 linux-arm-kernel@lists.infradead.org, linux-perf-users@vger.kernel.org

Rather than iterating over a separate hybrid list, iterate over all PMUs,
with the hybrid ones having is_core set to true.
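The change replaces the pmu-hybrid.h helpers with the generic PMU scan
plus a filter on is_core. A minimal sketch of the iteration pattern, using
only identifiers that appear in the patch below (perf_pmu__scan() and
pmu->is_core); error handling is omitted:

	struct perf_pmu *pmu = NULL;

	/* Walk every registered PMU; hybrid core PMUs have is_core set. */
	while ((pmu = perf_pmu__scan(pmu)) != NULL) {
		if (!pmu->is_core)
			continue;
		/* per-core-PMU work, e.g. allocate an evsel for attrs[i] */
	}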
Signed-off-by: Ian Rogers
Reviewed-by: Kan Liang
---
 tools/perf/arch/x86/tests/hybrid.c   |  2 +-
 tools/perf/arch/x86/util/evlist.c    | 25 +++++++++++++++++--------
 tools/perf/arch/x86/util/perf_regs.c | 14 ++++++++++----
 3 files changed, 28 insertions(+), 13 deletions(-)

diff --git a/tools/perf/arch/x86/tests/hybrid.c b/tools/perf/arch/x86/tests/hybrid.c
index 941a9edfed4e..944bd1b4bab6 100644
--- a/tools/perf/arch/x86/tests/hybrid.c
+++ b/tools/perf/arch/x86/tests/hybrid.c
@@ -3,7 +3,7 @@
 #include "debug.h"
 #include "evlist.h"
 #include "evsel.h"
-#include "pmu-hybrid.h"
+#include "pmu.h"
 #include "tests/tests.h"
 
 static bool test_config(const struct evsel *evsel, __u64 expected_config)
diff --git a/tools/perf/arch/x86/util/evlist.c b/tools/perf/arch/x86/util/evlist.c
index 1b6065841fb0..03f7eb4cf0a4 100644
--- a/tools/perf/arch/x86/util/evlist.c
+++ b/tools/perf/arch/x86/util/evlist.c
@@ -4,7 +4,6 @@
 #include "util/evlist.h"
 #include "util/parse-events.h"
 #include "util/event.h"
-#include "util/pmu-hybrid.h"
 #include "topdown.h"
 #include "evsel.h"
 
@@ -12,9 +11,6 @@ static int ___evlist__add_default_attrs(struct evlist *evlist,
 					struct perf_event_attr *attrs,
 					size_t nr_attrs)
 {
-	struct perf_cpu_map *cpus;
-	struct evsel *evsel, *n;
-	struct perf_pmu *pmu;
 	LIST_HEAD(head);
 	size_t i = 0;
 
@@ -25,15 +21,24 @@ static int ___evlist__add_default_attrs(struct evlist *evlist,
 		return evlist__add_attrs(evlist, attrs, nr_attrs);
 
 	for (i = 0; i < nr_attrs; i++) {
+		struct perf_pmu *pmu = NULL;
+
 		if (attrs[i].type == PERF_TYPE_SOFTWARE) {
-			evsel = evsel__new(attrs + i);
+			struct evsel *evsel = evsel__new(attrs + i);
+
 			if (evsel == NULL)
 				goto out_delete_partial_list;
 			list_add_tail(&evsel->core.node, &head);
 			continue;
 		}
 
-		perf_pmu__for_each_hybrid_pmu(pmu) {
+		while ((pmu = perf_pmu__scan(pmu)) != NULL) {
+			struct perf_cpu_map *cpus;
+			struct evsel *evsel;
+
+			if (!pmu->is_core)
+				continue;
+
 			evsel = evsel__new(attrs + i);
 			if (evsel == NULL)
 				goto out_delete_partial_list;
@@ -51,8 +56,12 @@ static int ___evlist__add_default_attrs(struct evlist *evlist,
 	return 0;
 
 out_delete_partial_list:
-	__evlist__for_each_entry_safe(&head, n, evsel)
-		evsel__delete(evsel);
+	{
+		struct evsel *evsel, *n;
+
+		__evlist__for_each_entry_safe(&head, n, evsel)
+			evsel__delete(evsel);
+	}
 	return -1;
 }
 
diff --git a/tools/perf/arch/x86/util/perf_regs.c b/tools/perf/arch/x86/util/perf_regs.c
index 0ed177991ad0..26abc159fc0e 100644
--- a/tools/perf/arch/x86/util/perf_regs.c
+++ b/tools/perf/arch/x86/util/perf_regs.c
@@ -10,7 +10,6 @@
 #include "../../../util/debug.h"
 #include "../../../util/event.h"
 #include "../../../util/pmu.h"
-#include "../../../util/pmu-hybrid.h"
 
 const struct sample_reg sample_reg_masks[] = {
 	SMPL_REG(AX, PERF_REG_X86_AX),
@@ -286,7 +285,6 @@ uint64_t arch__intr_reg_mask(void)
 		.disabled		= 1,
 		.exclude_kernel		= 1,
 	};
-	struct perf_pmu *pmu;
 	int fd;
 	/*
 	 * In an unnamed union, init it here to build on older gcc versions
@@ -294,12 +292,20 @@ uint64_t arch__intr_reg_mask(void)
 	attr.sample_period = 1;
 
 	if (perf_pmu__has_hybrid()) {
+		struct perf_pmu *pmu = NULL;
+		__u64 type = PERF_TYPE_RAW;
+
 		/*
 		 * The same register set is supported among different hybrid PMUs.
 		 * Only check the first available one.
 		 */
-		pmu = list_first_entry(&perf_pmu__hybrid_pmus, typeof(*pmu), hybrid_list);
-		attr.config |= (__u64)pmu->type << PERF_PMU_TYPE_SHIFT;
+		while ((pmu = perf_pmu__scan(pmu)) != NULL) {
+			if (pmu->is_core) {
+				type = pmu->type;
+				break;
+			}
+		}
+		attr.config |= type << PERF_PMU_TYPE_SHIFT;
 	}
 	event_attr_init(&attr);
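
One behavioural note on the arch__intr_reg_mask() hunk above: the register
set is assumed to be identical across hybrid PMUs (per the existing comment),
so the scan stops at the first core PMU it finds and encodes that PMU's type
into the probe event, with type defaulting to PERF_TYPE_RAW if no core PMU
turns up:

	attr.config |= type << PERF_PMU_TYPE_SHIFT; /* first core PMU's type, else PERF_TYPE_RAW */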