From patchwork Tue Sep 22 03:13:46 2020
X-Patchwork-Id: 11791461
From: Wei Li
To: Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin, Jiri Olsa,
	Namhyung Kim, Andi Kleen, Alexey Budankov, Adrian Hunter
Subject: [PATCH 2/2] perf stat: Unbreak perf stat with armv8_pmu events
Date: Tue, 22 Sep 2020 11:13:46 +0800
Message-ID: <20200922031346.15051-3-liwei391@huawei.com>
In-Reply-To: <20200922031346.15051-1-liwei391@huawei.com>
References: <20200922031346.15051-1-liwei391@huawei.com>
Cc: Peter Zijlstra, huawei.libin@huawei.com, Ingo Molnar,
	linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org

After the segfault is fixed, perf stat with armv8_pmu events and a workload
is still broken:

  [root@localhost hulk]# tools/perf/perf stat -e armv8_pmuv3_0/ll_cache_rd/,armv8_pmuv3_0/ll_cache_miss_rd/ ls > /dev/null

   Performance counter stats for 'ls':

       <not counted>      armv8_pmuv3_0/ll_cache_rd/        (0.00%)
       <not counted>      armv8_pmuv3_0/ll_cache_miss_rd/   (0.00%)

         0.002052670 seconds time elapsed

         0.000000000 seconds user
         0.002086000 seconds sys

In fact, although the event is opened per thread, create_perf_stat_counter()
is still called once for every CPU in the evlist's cpu map, so all the file
descriptors except the last one are lost. If that last counter does not get
scheduled during the measurement period, the event is reported as
"<not counted>".

Skip opening these needless events when the events are opened per thread.
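To illustrate the failure mode outside of perf, here is a minimal,
self-contained sketch (toy_counter, toy_open() and NR_CPUS are invented for
illustration and are not perf APIs): when an event has only one per-thread
descriptor slot but the open is attempted once per CPU, every iteration
overwrites the slot and all but the last descriptor are lost, which is what
the open_per_thread early exit below avoids.

#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 4

/* One descriptor slot, standing in for a per-thread event. */
struct toy_counter {
	int fd;
};

/* Stand-in for an open routine: hands out a fresh "descriptor" each call. */
static int toy_open(int cpu)
{
	static int next_fd = 3;

	printf("  open on cpu %d -> fd %d\n", cpu, next_fd);
	return next_fd++;
}

static void open_counter(bool open_per_thread)
{
	struct toy_counter counter = { .fd = -1 };
	int cpu;

	printf("open_per_thread = %d\n", open_per_thread);
	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		/* Each iteration overwrites the single slot... */
		counter.fd = toy_open(cpu);
		/* ...so a per-thread event must stop after the first open. */
		if (open_per_thread)
			break;
	}
	printf("  fd kept: %d (earlier ones, if any, are lost)\n\n", counter.fd);
}

int main(void)
{
	open_counter(false);	/* per-cpu style loop on one slot: fds are lost */
	open_counter(true);	/* with the early break: a single fd is kept */
	return 0;
}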
Fixes: 4804e0111662 ("perf stat: Use affinity for opening events")
Signed-off-by: Wei Li
---
 tools/perf/builtin-stat.c | 36 +++++++++++++++++++++++-------------
 1 file changed, 23 insertions(+), 13 deletions(-)

diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
index 6e6ceacce634..9a43b3de26d1 100644
--- a/tools/perf/builtin-stat.c
+++ b/tools/perf/builtin-stat.c
@@ -712,6 +712,7 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
 	struct affinity affinity;
 	int i, cpu;
 	bool second_pass = false;
+	bool open_per_thread = false;
 
 	if (forks) {
 		if (perf_evlist__prepare_workload(evsel_list, &target, argv, is_pipe,
@@ -726,16 +727,17 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
 	perf_evlist__set_leader(evsel_list);
 
 	if (!(target__has_cpu(&target) && !target__has_per_thread(&target)))
-		evsel_list->core.open_per_thread = true;
+		evsel_list->core.open_per_thread = open_per_thread = true;
 
 	if (affinity__setup(&affinity) < 0)
 		return -1;
 
 	evlist__for_each_cpu (evsel_list, i, cpu) {
-		affinity__set(&affinity, cpu);
+		if (!open_per_thread)
+			affinity__set(&affinity, cpu);
 
 		evlist__for_each_entry(evsel_list, counter) {
-			if (evsel__cpu_iter_skip(counter, cpu))
+			if (!open_per_thread && evsel__cpu_iter_skip(counter, cpu))
 				continue;
 			if (counter->reset_group || counter->errored)
 				continue;
@@ -753,7 +755,8 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
 			if ((errno == EINVAL || errno == EBADF) &&
 			    counter->leader != counter &&
 			    counter->weak_group) {
-				perf_evlist__reset_weak_group(evsel_list, counter, false);
+				perf_evlist__reset_weak_group(evsel_list, counter,
+							      open_per_thread);
 				assert(counter->reset_group);
 				second_pass = true;
 				continue;
@@ -773,6 +776,9 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
 			}
 			counter->supported = true;
 		}
+
+		if (open_per_thread)
+			break;
 	}
 
 	if (second_pass) {
@@ -782,20 +788,22 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
 		 */
 
 		evlist__for_each_cpu(evsel_list, i, cpu) {
-			affinity__set(&affinity, cpu);
-			/* First close errored or weak retry */
-			evlist__for_each_entry(evsel_list, counter) {
-				if (!counter->reset_group && !counter->errored)
-					continue;
-				if (evsel__cpu_iter_skip_no_inc(counter, cpu))
-					continue;
-				perf_evsel__close_cpu(&counter->core, counter->cpu_iter);
+			if (!open_per_thread) {
+				affinity__set(&affinity, cpu);
+				/* First close errored or weak retry */
+				evlist__for_each_entry(evsel_list, counter) {
+					if (!counter->reset_group && !counter->errored)
+						continue;
+					if (evsel__cpu_iter_skip_no_inc(counter, cpu))
+						continue;
+					perf_evsel__close_cpu(&counter->core, counter->cpu_iter);
+				}
 			}
 			/* Now reopen weak */
 			evlist__for_each_entry(evsel_list, counter) {
 				if (!counter->reset_group && !counter->errored)
 					continue;
-				if (evsel__cpu_iter_skip(counter, cpu))
+				if (!open_per_thread && evsel__cpu_iter_skip(counter, cpu))
 					continue;
 				if (!counter->reset_group)
 					continue;
@@ -817,6 +825,8 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
 			}
 			counter->supported = true;
 		}
+		if (open_per_thread)
+			break;
 	}
 	}
 	affinity__cleanup(&affinity);