From patchwork Sat Jun 4 04:28:18 2022
X-Patchwork-Submitter: Leo Yan
X-Patchwork-Id: 12869572
From: Leo Yan
To: Arnaldo Carvalho de Melo, Peter Zijlstra, Ingo Molnar, Mark Rutland,
	Jiri Olsa, Namhyung Kim, Ian Rogers, John Garry, Will Deacon,
	James Clark, German Gomez, Ali Saidi, Joe Mario, Adam Li,
	linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Cc: Leo Yan
Subject: [PATCH v5 15/17] perf c2c: Sort on peer snooping for load operations
Date: Sat, 4 Jun 2022 12:28:18 +0800
Message-Id: <20220604042820.2270916-16-leo.yan@linaro.org>
In-Reply-To: <20220604042820.2270916-1-leo.yan@linaro.org>
References: <20220604042820.2270916-1-leo.yan@linaro.org>

This patch adds a new option 'peer' so that the output can be sorted on
the cache hits for peer snooping.  When displaying with the option
'peer', both the "Shared Data Cache Line Table" and the "Shared Cache
Line Distribution Pareto" are sorted with the metric "tot_peer".  As a
result, we can get the 'peer' display:

# perf c2c report -d peer --coalesce tid,pid,iaddr,dso -N --stdio

=================================================
           Shared Data Cache Line Table
=================================================
#
#        ----------- Cacheline ----------     Peer  ------- Load Peer -------    Total    Total    Total  --------- Stores --------  ----- Core Load Hit -----  - LLC Load Hit --  - RMT Load Hit --  --- Load Dram ----
# Index             Address  Node  PA cnt    Snoop    Total    Local   Remote  records    Loads   Stores    L1Hit   L1Miss      N/A       FB       L1       L2   LclHit  LclHitm   RmtHit  RmtHitm      Lcl      Rmt
# .....  ..................  ....  ......  .......  .......  .......  .......  .......  .......  .......  .......  .......  .......  .......  .......  .......  .......  .......  .......  .......  .......  .......
#
      0      0xaaaac17d6000   N/A       0  100.00%       99       99        0    18851    18851        0        0        0        0        0    18752        0       99        0        0        0        0        0

=================================================
      Shared Cache Line Distribution Pareto
=================================================
#
#        -- Peer Snoop --  ------- Store Refs ------  --------- Data address ---------                                                ---------- cycles ----------    Total       cpu                                            Shared
#   Num      Rmt      Lcl   L1 Hit  L1 Miss      N/A              Offset  Node  PA cnt      Pid                Tid        Code address  rmt peer  lcl peer      load  records       cnt                  Symbol            Object      Source:Line  Node{cpus %peers %stores}
# .....  .......  .......  .......  .......  .......  ..................  ....  ......  .......  .................  ..................  ........  ........  ........  .......  ........  ......................  ................  ...............  ....
#
  ----------------------------------------------------------------------
      0        0       99        0        0        0      0xaaaac17d6000
  ----------------------------------------------------------------------
           0.00%    3.03%    0.00%    0.00%    0.00%                0x20   N/A       0     3603     3603:memstress      0xaaaac17c25ac         0       376        41     9314         2  [.] 0x00000000000025ac         memstress  memstress[25ac]  0{ 2 100.0%    n/a}
           0.00%    3.03%    0.00%    0.00%    0.00%                0x20   N/A       0     3603     3606:memstress      0xaaaac17c25ac         0       375        44     9155         1  [.] 0x00000000000025ac         memstress  memstress[25ac]  0{ 1 100.0%    n/a}
           0.00%   48.48%    0.00%    0.00%    0.00%                0x29   N/A       0     3603     3606:memstress      0xaaaac17c3e88         0       180       170       65         1  [.] 0x0000000000003e88         memstress  memstress[3e88]  0{ 1 100.0%    n/a}
           0.00%   45.45%    0.00%    0.00%    0.00%                0x29   N/A       0     3603     3603:memstress      0xaaaac17c3e88         0       180       175       70         2  [.] 0x0000000000003e88         memstress  memstress[3e88]  0{ 2 100.0%    n/a}
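
For reference, output like the above can be reproduced with a command
sequence along the following lines.  This is only a sketch: the
"memstress" invocation is an assumption based on the binary name seen
in the output, and recording peer-snoop data needs a platform whose
memory samples carry snoop-peer information (e.g. Arm SPE, which this
series targets).

  # Record memory access samples for the workload (assumed invocation).
  perf c2c record -- ./memstress

  # Sort the shared cache lines on peer snooping, as added by this patch.
  perf c2c report -d peer --coalesce tid,pid,iaddr,dso -N --stdio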
Signed-off-by: Leo Yan
Acked-by: Ian Rogers
Tested-by: Ali Saidi
Reviewed-by: Ali Saidi
---
 tools/perf/builtin-c2c.c | 135 ++++++++++++++++++++++++++++-----------
 1 file changed, 99 insertions(+), 36 deletions(-)

diff --git a/tools/perf/builtin-c2c.c b/tools/perf/builtin-c2c.c
index 8b7c1fd35380..f7a961e55a92 100644
--- a/tools/perf/builtin-c2c.c
+++ b/tools/perf/builtin-c2c.c
@@ -118,6 +118,7 @@ enum {
 	DISPLAY_LCL_HITM,
 	DISPLAY_RMT_HITM,
 	DISPLAY_TOT_HITM,
+	DISPLAY_SNP_PEER,
 	DISPLAY_MAX,
 };
 
@@ -125,6 +126,7 @@ static const char *display_str[DISPLAY_MAX] = {
 	[DISPLAY_LCL_HITM] = "Local HITMs",
 	[DISPLAY_RMT_HITM] = "Remote HITMs",
 	[DISPLAY_TOT_HITM] = "Total HITMs",
+	[DISPLAY_SNP_PEER] = "Peer Snoop",
 };
 
 static const struct option c2c_options[] = {
@@ -822,6 +824,11 @@ static double percent_costly_snoop(struct c2c_hist_entry *c2c_he)
 	case DISPLAY_TOT_HITM:
 		st  = stats->tot_hitm;
 		tot = total->tot_hitm;
+		break;
+	case DISPLAY_SNP_PEER:
+		st  = stats->tot_peer;
+		tot = total->tot_peer;
+		break;
 	default:
 		break;
 	}
@@ -1229,6 +1236,10 @@ node_entry(struct perf_hpp_fmt *fmt __maybe_unused, struct perf_hpp *hpp,
 			ret = display_metrics(hpp, stats->tot_hitm,
 					      c2c_he->stats.tot_hitm);
 			break;
+		case DISPLAY_SNP_PEER:
+			ret = display_metrics(hpp, stats->tot_peer,
+					      c2c_he->stats.tot_peer);
+			break;
 		default:
 			break;
 		}
@@ -1609,6 +1620,7 @@ static struct c2c_header percent_costly_snoop_header[] = {
 	[DISPLAY_LCL_HITM] = HEADER_BOTH("Lcl", "Hitm"),
 	[DISPLAY_RMT_HITM] = HEADER_BOTH("Rmt", "Hitm"),
 	[DISPLAY_TOT_HITM] = HEADER_BOTH("Tot", "Hitm"),
+	[DISPLAY_SNP_PEER] = HEADER_BOTH("Peer", "Snoop"),
 };
 
 static struct c2c_dimension dim_percent_costly_snoop = {
@@ -2107,6 +2119,10 @@ static bool he__display(struct hist_entry *he, struct c2c_stats *stats)
 		he->filtered = filter_display(c2c_he->stats.tot_hitm,
 					      stats->tot_hitm);
 		break;
+	case DISPLAY_SNP_PEER:
+		he->filtered = filter_display(c2c_he->stats.tot_peer,
+					      stats->tot_peer);
+		break;
 	default:
 		break;
 	}
@@ -2135,6 +2151,8 @@ static inline bool is_valid_hist_entry(struct hist_entry *he)
 	case DISPLAY_TOT_HITM:
 		has_record = !!c2c_he->stats.tot_hitm;
 		break;
+	case DISPLAY_SNP_PEER:
+		has_record = !!c2c_he->stats.tot_peer;
 	default:
 		break;
 	}
@@ -2224,7 +2242,10 @@ static int resort_cl_cb(struct hist_entry *he, void *arg __maybe_unused)
 }
 
 static struct c2c_header header_node_0 = HEADER_LOW("Node");
-static struct c2c_header header_node_1 = HEADER_LOW("Node{cpus %hitms %stores}");
+static struct c2c_header header_node_1_hitms_stores =
+	HEADER_LOW("Node{cpus %hitms %stores}");
+static struct c2c_header header_node_1_peers_stores =
+	HEADER_LOW("Node{cpus %peers %stores}");
 static struct c2c_header header_node_2 = HEADER_LOW("Node{cpu list}");
 
 static void setup_nodes_header(void)
@@ -2234,7 +2255,10 @@ static void setup_nodes_header(void)
 		dim_node.header = header_node_0;
 		break;
 	case 1:
-		dim_node.header = header_node_1;
+		if (c2c.display == DISPLAY_SNP_PEER)
+			dim_node.header = header_node_1_peers_stores;
+		else
+			dim_node.header = header_node_1_hitms_stores;
 		break;
 	case 2:
 		dim_node.header = header_node_2;
@@ -2308,13 +2332,14 @@ static int setup_nodes(struct perf_session *session)
 }
 
 #define HAS_HITMS(__h) ((__h)->stats.lcl_hitm || (__h)->stats.rmt_hitm)
+#define HAS_PEER(__h) ((__h)->stats.lcl_peer || (__h)->stats.rmt_peer)
 
 static int resort_shared_cl_cb(struct hist_entry *he, void *arg __maybe_unused)
 {
 	struct c2c_hist_entry *c2c_he;
 
 	c2c_he = container_of(he, struct c2c_hist_entry, he);
-	if (HAS_HITMS(c2c_he)) {
+	if (HAS_HITMS(c2c_he) || HAS_PEER(c2c_he)) {
 		c2c.shared_clines++;
 		c2c_add_stats(&c2c.shared_clines_stats, &c2c_he->stats);
 	}
@@ -2447,13 +2472,22 @@ static void print_pareto(FILE *out)
 	int ret;
 	const char *cl_output;
 
-	cl_output = "cl_num,"
-		    "cl_rmt_hitm,"
-		    "cl_lcl_hitm,"
-		    "cl_stores_l1hit,"
-		    "cl_stores_l1miss,"
-		    "cl_stores_na,"
-		    "dcacheline";
+	if (c2c.display != DISPLAY_SNP_PEER)
+		cl_output = "cl_num,"
+			    "cl_rmt_hitm,"
+			    "cl_lcl_hitm,"
+			    "cl_stores_l1hit,"
+			    "cl_stores_l1miss,"
+			    "cl_stores_na,"
+			    "dcacheline";
+	else
+		cl_output = "cl_num,"
+			    "cl_rmt_peer,"
+			    "cl_lcl_peer,"
+			    "cl_stores_l1hit,"
+			    "cl_stores_l1miss,"
+			    "cl_stores_na,"
+			    "dcacheline";
 
 	perf_hpp_list__init(&hpp_list);
 	ret = hpp_list__parse(&hpp_list, cl_output, NULL);
@@ -2852,6 +2886,8 @@ static int setup_display(const char *str)
 		c2c.display = DISPLAY_RMT_HITM;
 	else if (!strcmp(display, "lcl"))
 		c2c.display = DISPLAY_LCL_HITM;
+	else if (!strcmp(display, "peer"))
+		c2c.display = DISPLAY_SNP_PEER;
 	else {
 		pr_err("failed: unknown display type: %s\n", str);
 		return -1;
@@ -2898,10 +2934,12 @@ static int build_cl_output(char *cl_sort, bool no_source)
 	}
 
 	if (asprintf(&c2c.cl_output,
-		"%s%s%s%s%s%s%s%s%s%s",
+		"%s%s%s%s%s%s%s%s%s%s%s%s",
 		c2c.use_stdio ? "cl_num_empty," : "",
-		"percent_rmt_hitm,"
-		"percent_lcl_hitm,"
+		c2c.display == DISPLAY_SNP_PEER ? "percent_rmt_peer,"
+						  "percent_lcl_peer," :
+						  "percent_rmt_hitm,"
+						  "percent_lcl_hitm,",
 		"percent_stores_l1hit,"
 		"percent_stores_l1miss,"
 		"percent_stores_na,"
@@ -2909,8 +2947,10 @@ static int build_cl_output(char *cl_sort, bool no_source)
 		add_pid ? "pid," : "",
 		add_tid ? "tid," : "",
 		add_iaddr ? "iaddr," : "",
-		"mean_rmt,"
-		"mean_lcl,"
+		c2c.display == DISPLAY_SNP_PEER ? "mean_rmt_peer,"
+						  "mean_lcl_peer," :
+						  "mean_rmt,"
+						  "mean_lcl,",
 		"mean_load,"
 		"tot_recs,"
 		"cpucnt,",
@@ -2931,6 +2971,7 @@ static int build_cl_output(char *cl_sort, bool no_source)
 static int setup_coalesce(const char *coalesce, bool no_source)
 {
 	const char *c = coalesce ?: coalesce_default;
+	const char *sort_str = NULL;
 
 	if (asprintf(&c2c.cl_sort, "offset,%s", c) < 0)
 		return -ENOMEM;
@@ -2938,12 +2979,16 @@ static int setup_coalesce(const char *coalesce, bool no_source)
 	if (build_cl_output(c2c.cl_sort, no_source))
 		return -1;
 
-	if (asprintf(&c2c.cl_resort, "offset,%s",
-		     c2c.display == DISPLAY_TOT_HITM ?
-		     "tot_hitm" :
-		     c2c.display == DISPLAY_RMT_HITM ?
-		     "rmt_hitm,lcl_hitm" :
-		     "lcl_hitm,rmt_hitm") < 0)
+	if (c2c.display == DISPLAY_TOT_HITM)
+		sort_str = "tot_hitm";
+	else if (c2c.display == DISPLAY_RMT_HITM)
+		sort_str = "rmt_hitm,lcl_hitm";
+	else if (c2c.display == DISPLAY_LCL_HITM)
+		sort_str = "lcl_hitm,rmt_hitm";
+	else if (c2c.display == DISPLAY_SNP_PEER)
+		sort_str = "tot_peer";
+
+	if (asprintf(&c2c.cl_resort, "offset,%s", sort_str) < 0)
 		return -ENOMEM;
 
 	pr_debug("coalesce sort fields: %s\n", c2c.cl_sort);
@@ -2989,7 +3034,7 @@ static int perf_c2c__report(int argc, const char **argv)
 		   "print_type,threshold[,print_limit],order,sort_key[,branch],value",
 		   callchain_help, &parse_callchain_opt,
 		   callchain_default_opt),
-	OPT_STRING('d', "display", &display, "Switch HITM output type", "lcl,rmt"),
+	OPT_STRING('d', "display", &display, "Switch HITM output type", "tot,lcl,rmt,peer"),
 	OPT_STRING('c', "coalesce", &coalesce, "coalesce fields",
 		   "coalesce fields: pid,tid,iaddr,dso"),
 	OPT_BOOLEAN('f', "force", &symbol_conf.force, "don't complain, do it"),
@@ -3084,20 +3129,36 @@ static int perf_c2c__report(int argc, const char **argv)
 		goto out_mem2node;
 	}
 
-	output_str = "cl_idx,"
-		     "dcacheline,"
-		     "dcacheline_node,"
-		     "dcacheline_count,"
-		     "percent_costly_snoop,"
-		     "tot_hitm,lcl_hitm,rmt_hitm,"
-		     "tot_recs,"
-		     "tot_loads,"
-		     "tot_stores,"
-		     "stores_l1hit,stores_l1miss,stores_na,"
-		     "ld_fbhit,ld_l1hit,ld_l2hit,"
-		     "ld_lclhit,lcl_hitm,"
-		     "ld_rmthit,rmt_hitm,"
-		     "dram_lcl,dram_rmt";
+	if (c2c.display != DISPLAY_SNP_PEER)
+		output_str = "cl_idx,"
+			     "dcacheline,"
+			     "dcacheline_node,"
+			     "dcacheline_count,"
+			     "percent_costly_snoop,"
+			     "tot_hitm,lcl_hitm,rmt_hitm,"
+			     "tot_recs,"
+			     "tot_loads,"
+			     "tot_stores,"
+			     "stores_l1hit,stores_l1miss,stores_na,"
+			     "ld_fbhit,ld_l1hit,ld_l2hit,"
+			     "ld_lclhit,lcl_hitm,"
+			     "ld_rmthit,rmt_hitm,"
+			     "dram_lcl,dram_rmt";
+	else
+		output_str = "cl_idx,"
+			     "dcacheline,"
+			     "dcacheline_node,"
+			     "dcacheline_count,"
+			     "percent_costly_snoop,"
+			     "tot_peer,lcl_peer,rmt_peer,"
+			     "tot_recs,"
+			     "tot_loads,"
+			     "tot_stores,"
+			     "stores_l1hit,stores_l1miss,stores_na,"
+			     "ld_fbhit,ld_l1hit,ld_l2hit,"
+			     "ld_lclhit,lcl_hitm,"
+			     "ld_rmthit,rmt_hitm,"
+			     "dram_lcl,dram_rmt";
 
 	if (c2c.display == DISPLAY_TOT_HITM)
 		sort_str = "tot_hitm";
@@ -3105,6 +3166,8 @@ static int perf_c2c__report(int argc, const char **argv)
 		sort_str = "rmt_hitm";
 	else if (c2c.display == DISPLAY_LCL_HITM)
 		sort_str = "lcl_hitm";
+	else if (c2c.display == DISPLAY_SNP_PEER)
+		sort_str = "tot_peer";
 
 	c2c_hists__reinit(&c2c.hists, output_str, sort_str);