From patchwork Thu Dec 29 20:40:59 2022
X-Patchwork-Submitter: Namhyung Kim
X-Patchwork-Id: 13083819
X-Patchwork-Delegate: bpf@iogearbox.net
From: Namhyung Kim
To: Peter Zijlstra, Ingo Molnar
Cc: LKML, Arnaldo Carvalho de Melo, Jiri Olsa, Kan Liang,
    Ravi Bangoria, bpf@vger.kernel.org
Subject: [PATCH 1/3] perf/core: Change the layout of perf_sample_data
Date: Thu, 29 Dec 2022 12:40:59 -0800
Message-Id: <20221229204101.1099430-1-namhyung@kernel.org>
X-Mailing-List: bpf@vger.kernel.org

The layout of perf_sample_data is designed to minimize cache-line
accesses.  perf_sample_data_init() used to initialize a couple of
fields unconditionally, so they were placed together at the head.  But
it has since been changed to set the fields according to the actual
sample_type flags; the main user (the perf tools) always sets IP, TID,
TIME and PERIOD.
Also group relevant fields like addr, phys_addr and data_page_size.

Suggested-by: Peter Zijlstra
Signed-off-by: Namhyung Kim
---
 include/linux/perf_event.h | 34 +++++++++++++++++++---------------
 1 file changed, 19 insertions(+), 15 deletions(-)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index c6a3bac76966..dd565306f479 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -1098,47 +1098,51 @@ extern u64 perf_event_read_value(struct perf_event *event,

 struct perf_sample_data {
 	/*
-	 * Fields set by perf_sample_data_init(), group so as to
-	 * minimize the cachelines touched.
+	 * Fields set by perf_sample_data_init() unconditionally,
+	 * group so as to minimize the cachelines touched.
 	 */
 	u64				sample_flags;
 	u64				period;

 	/*
-	 * The other fields, optionally {set,used} by
-	 * perf_{prepare,output}_sample().
+	 * Fields commonly set by __perf_event_header__init_id(),
+	 * group so as to minimize the cachelines touched.
 	 */
-	struct perf_branch_stack	*br_stack;
-	union perf_sample_weight	weight;
-	union perf_mem_data_src		data_src;
-	u64				txn;
-	u64				addr;
-	struct perf_raw_record		*raw;
-	u64				type;
-	u64				ip;
 	struct {
 		u32	pid;
 		u32	tid;
 	}				tid_entry;
 	u64				time;
 	u64				id;
-	u64				stream_id;
 	struct {
 		u32	cpu;
 		u32	reserved;
 	}				cpu_entry;
+
+	/*
+	 * The other fields, optionally {set,used} by
+	 * perf_{prepare,output}_sample().
+	 */
+	u64				ip;
 	struct perf_callchain_entry	*callchain;
-	u64				aux_size;
+	struct perf_raw_record		*raw;
+	struct perf_branch_stack	*br_stack;
+	union perf_sample_weight	weight;
+	union perf_mem_data_src		data_src;
+	u64				txn;
 	struct perf_regs		regs_user;
 	struct perf_regs		regs_intr;
 	u64				stack_user_size;
-	u64				phys_addr;
+	u64				stream_id;
 	u64				cgroup;
+	u64				addr;
+	u64				phys_addr;
 	u64				data_page_size;
 	u64				code_page_size;
+	u64				aux_size;
 } ____cacheline_aligned;

 /* default value for data source */

From patchwork Thu Dec 29 20:41:00 2022
X-Patchwork-Submitter: Namhyung Kim
X-Patchwork-Id: 13083820
X-Patchwork-Delegate: bpf@iogearbox.net
From: Namhyung Kim
To: Peter Zijlstra, Ingo Molnar
Cc: LKML, Arnaldo Carvalho de Melo, Jiri Olsa, Kan Liang,
    Ravi Bangoria, bpf@vger.kernel.org
Subject: [PATCH 2/3] perf/core: Set data->sample_flags in perf_prepare_sample()
Date: Thu, 29 Dec 2022 12:41:00 -0800
Message-Id: <20221229204101.1099430-2-namhyung@kernel.org>
In-Reply-To: <20221229204101.1099430-1-namhyung@kernel.org>
References: <20221229204101.1099430-1-namhyung@kernel.org>
X-Mailing-List: bpf@vger.kernel.org

perf_prepare_sample() fills in the perf_sample_data according to
attr->sample_type before copying it to the ring buffer.  But BPF also
wants to access the sample data, so the sample may need to be prepared
even before the regular output path runs.  That means
perf_prepare_sample() can be called more than once.

Set data->sample_flags consistently so that it indicates which fields
are already populated, and skip those fields when they are already set.
Mostly this is just a matter of checking filtered_sample_type, which is
the set of bits in attr->sample_type that are not yet set in
data->sample_flags.  But some sample data is implied by other bits even
when it is not in attr->sample_type (like PERF_SAMPLE_ADDR being needed
for PERF_SAMPLE_PHYS_ADDR), so those cases need to check
data->sample_flags separately.  Some fields, like the callchain, user
regs/stack and aux data, also require more calculation; protect them
with data->sample_flags to avoid duplicating that work.

Signed-off-by: Namhyung Kim
---
Maybe we don't need this change to prevent the duplication, in favor of
the next patch using data->saved_size.  But I think it's still useful
to set data->sample_flags consistently.  Anyway, it's up to you.
 kernel/events/core.c | 86 ++++++++++++++++++++++++++++++++------------
 1 file changed, 63 insertions(+), 23 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index eacc3702654d..70bff8a04583 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7582,14 +7582,21 @@ void perf_prepare_sample(struct perf_event_header *header,
 	filtered_sample_type = sample_type & ~data->sample_flags;
 	__perf_event_header__init_id(header, data, event, filtered_sample_type);

-	if (sample_type & (PERF_SAMPLE_IP | PERF_SAMPLE_CODE_PAGE_SIZE))
-		data->ip = perf_instruction_pointer(regs);
+	if (sample_type & (PERF_SAMPLE_IP | PERF_SAMPLE_CODE_PAGE_SIZE)) {
+		/* attr.sample_type may not have PERF_SAMPLE_IP */
+		if (!(data->sample_flags & PERF_SAMPLE_IP)) {
+			data->ip = perf_instruction_pointer(regs);
+			data->sample_flags |= PERF_SAMPLE_IP;
+		}
+	}

 	if (sample_type & PERF_SAMPLE_CALLCHAIN) {
 		int size = 1;

-		if (filtered_sample_type & PERF_SAMPLE_CALLCHAIN)
+		if (filtered_sample_type & PERF_SAMPLE_CALLCHAIN) {
 			data->callchain = perf_callchain(event, regs);
+			data->sample_flags |= PERF_SAMPLE_CALLCHAIN;
+		}

 		size += data->callchain->nr;

@@ -7634,8 +7641,13 @@ void perf_prepare_sample(struct perf_event_header *header,
 		header->size += size;
 	}

-	if (sample_type & (PERF_SAMPLE_REGS_USER | PERF_SAMPLE_STACK_USER))
-		perf_sample_regs_user(&data->regs_user, regs);
+	if (sample_type & (PERF_SAMPLE_REGS_USER | PERF_SAMPLE_STACK_USER)) {
+		/* attr.sample_type may not have PERF_SAMPLE_REGS_USER */
+		if (!(data->sample_flags & PERF_SAMPLE_REGS_USER)) {
+			perf_sample_regs_user(&data->regs_user, regs);
+			data->sample_flags |= PERF_SAMPLE_REGS_USER;
+		}
+	}

 	if (sample_type & PERF_SAMPLE_REGS_USER) {
 		/* regs dump ABI info */
@@ -7656,11 +7668,18 @@ void perf_prepare_sample(struct perf_event_header *header,
 		 * in case new sample type is added, because we could eat
 		 * up the rest of the sample size.
 		 */
-		u16 stack_size = event->attr.sample_stack_user;
 		u16 size = sizeof(u64);
+		u16 stack_size;
+
+		if (filtered_sample_type & PERF_SAMPLE_STACK_USER) {
+			stack_size = event->attr.sample_stack_user;
+			stack_size = perf_sample_ustack_size(stack_size, header->size,
+							     data->regs_user.regs);

-		stack_size = perf_sample_ustack_size(stack_size, header->size,
-						     data->regs_user.regs);
+			data->stack_user_size = stack_size;
+			data->sample_flags |= PERF_SAMPLE_STACK_USER;
+		}
+		stack_size = data->stack_user_size;

 		/*
 		 * If there is something to dump, add space for the dump
@@ -7670,29 +7689,40 @@ void perf_prepare_sample(struct perf_event_header *header,
 		if (stack_size)
 			size += sizeof(u64) + stack_size;

-		data->stack_user_size = stack_size;
 		header->size += size;
 	}

-	if (filtered_sample_type & PERF_SAMPLE_WEIGHT_TYPE)
+	if (filtered_sample_type & PERF_SAMPLE_WEIGHT_TYPE) {
 		data->weight.full = 0;
+		data->sample_flags |= PERF_SAMPLE_WEIGHT_TYPE;
+	}

-	if (filtered_sample_type & PERF_SAMPLE_DATA_SRC)
+	if (filtered_sample_type & PERF_SAMPLE_DATA_SRC) {
 		data->data_src.val = PERF_MEM_NA;
+		data->sample_flags |= PERF_SAMPLE_DATA_SRC;
+	}

-	if (filtered_sample_type & PERF_SAMPLE_TRANSACTION)
+	if (filtered_sample_type & PERF_SAMPLE_TRANSACTION) {
 		data->txn = 0;
+		data->sample_flags |= PERF_SAMPLE_TRANSACTION;
+	}

 	if (sample_type & (PERF_SAMPLE_ADDR | PERF_SAMPLE_PHYS_ADDR |
 			   PERF_SAMPLE_DATA_PAGE_SIZE)) {
-		if (filtered_sample_type & PERF_SAMPLE_ADDR)
+		/* attr.sample_type may not have PERF_SAMPLE_ADDR */
+		if (!(data->sample_flags & PERF_SAMPLE_ADDR)) {
 			data->addr = 0;
+			data->sample_flags |= PERF_SAMPLE_ADDR;
+		}
 	}

 	if (sample_type & PERF_SAMPLE_REGS_INTR) {
 		/* regs dump ABI info */
 		int size = sizeof(u64);

-		perf_sample_regs_intr(&data->regs_intr, regs);
+		if (filtered_sample_type & PERF_SAMPLE_REGS_INTR) {
+			perf_sample_regs_intr(&data->regs_intr, regs);
+			data->sample_flags |= PERF_SAMPLE_REGS_INTR;
+		}

 		if (data->regs_intr.regs) {
 			u64 mask = event->attr.sample_regs_intr;
@@ -7703,17 +7733,19 @@ void perf_prepare_sample(struct perf_event_header *header,
 		header->size += size;
 	}

-	if (sample_type & PERF_SAMPLE_PHYS_ADDR &&
-	    filtered_sample_type & PERF_SAMPLE_PHYS_ADDR)
+	if (filtered_sample_type & PERF_SAMPLE_PHYS_ADDR) {
 		data->phys_addr = perf_virt_to_phys(data->addr);
+		data->sample_flags |= PERF_SAMPLE_PHYS_ADDR;
+	}

 #ifdef CONFIG_CGROUP_PERF
-	if (sample_type & PERF_SAMPLE_CGROUP) {
+	if (filtered_sample_type & PERF_SAMPLE_CGROUP) {
 		struct cgroup *cgrp;

 		/* protected by RCU */
 		cgrp = task_css_check(current, perf_event_cgrp_id, 1)->cgroup;
 		data->cgroup = cgroup_id(cgrp);
+		data->sample_flags |= PERF_SAMPLE_CGROUP;
 	}
 #endif

@@ -7722,11 +7754,15 @@ void perf_prepare_sample(struct perf_event_header *header,
 	 * require PERF_SAMPLE_ADDR, kernel implicitly retrieve the data->addr,
 	 * but the value will not dump to the userspace.
 	 */
-	if (sample_type & PERF_SAMPLE_DATA_PAGE_SIZE)
+	if (filtered_sample_type & PERF_SAMPLE_DATA_PAGE_SIZE) {
 		data->data_page_size = perf_get_page_size(data->addr);
+		data->sample_flags |= PERF_SAMPLE_DATA_PAGE_SIZE;
+	}

-	if (sample_type & PERF_SAMPLE_CODE_PAGE_SIZE)
+	if (filtered_sample_type & PERF_SAMPLE_CODE_PAGE_SIZE) {
 		data->code_page_size = perf_get_page_size(data->ip);
+		data->sample_flags |= PERF_SAMPLE_CODE_PAGE_SIZE;
+	}

 	if (sample_type & PERF_SAMPLE_AUX) {
 		u64 size;

@@ -7739,10 +7775,14 @@ void perf_prepare_sample(struct perf_event_header *header,
 		 * Make sure this doesn't happen by using up to U16_MAX bytes
 		 * per sample in total (rounded down to 8 byte boundary).
 		 */
-		size = min_t(size_t, U16_MAX - header->size,
-			     event->attr.aux_sample_size);
-		size = rounddown(size, 8);
-		size = perf_prepare_sample_aux(event, data, size);
+		if (filtered_sample_type & PERF_SAMPLE_AUX) {
+			size = min_t(size_t, U16_MAX - header->size,
+				     event->attr.aux_sample_size);
+			size = rounddown(size, 8);
+			perf_prepare_sample_aux(event, data, size);
+			data->sample_flags |= PERF_SAMPLE_AUX;
+		}
+		size = data->aux_size;

 		WARN_ON_ONCE(size + header->size > U16_MAX);
 		header->size += size;

From patchwork Thu Dec 29 20:41:01 2022
X-Patchwork-Submitter: Namhyung Kim
X-Patchwork-Id: 13083821
X-Patchwork-Delegate: bpf@iogearbox.net
From: Namhyung Kim
To: Peter Zijlstra, Ingo Molnar
Cc: LKML, Arnaldo Carvalho de Melo, Jiri Olsa, Kan Liang,
    Ravi Bangoria, bpf@vger.kernel.org
Subject: [PATCH 3/3] perf/core: Save calculated sample data size
Date: Thu, 29 Dec 2022 12:41:01 -0800
Message-Id: <20221229204101.1099430-3-namhyung@kernel.org>
In-Reply-To: <20221229204101.1099430-1-namhyung@kernel.org>
References: <20221229204101.1099430-1-namhyung@kernel.org>
X-Mailing-List: bpf@vger.kernel.org

To avoid duplicating work in perf_prepare_sample(), save the final
header size in data->saved_size.  It is initialized to 0 and set to
the actual value at the end of perf_prepare_sample().  A non-zero
value therefore means this is the second call and the sample data is
already populated, so just update the header size from
data->saved_size and bail out.

Signed-off-by: Namhyung Kim
---
 include/linux/perf_event.h |  2 ++
 kernel/events/core.c       | 11 +++++++++++
 2 files changed, 13 insertions(+)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index dd565306f479..ccde631a0cb4 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -1103,6 +1103,7 @@ struct perf_sample_data {
 	 */
 	u64				sample_flags;
 	u64				period;
+	u64				saved_size;

 	/*
 	 * Fields commonly set by __perf_event_header__init_id(),
@@ -1158,6 +1159,7 @@ static inline void perf_sample_data_init(struct perf_sample_data *data,
 	/* remaining struct members initialized in perf_prepare_sample() */
 	data->sample_flags = PERF_SAMPLE_PERIOD;
 	data->period = period;
+	data->saved_size = 0;

 	if (addr) {
 		data->addr = addr;
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 70bff8a04583..dac4d76e2877 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7575,6 +7575,15 @@ void perf_prepare_sample(struct perf_event_header *header,
 	header->misc = 0;
 	header->misc |= perf_misc_flags(regs);

+	/*
+	 * If perf_prepare_sample() was already called, it has set all the
+	 * data fields and recorded the final size in data->saved_size.
+	 */
+	if (data->saved_size) {
+		header->size = data->saved_size;
+		return;
+	}
+
 	/*
 	 * Clear the sample flags that have already been done by the
 	 * PMU driver.
@@ -7796,6 +7805,8 @@ void perf_prepare_sample(struct perf_event_header *header,
 	 * do here next.
 	 */
 	WARN_ON_ONCE(header->size & 7);
+
+	data->saved_size = header->size;
 }

 static __always_inline int