From patchwork Thu Sep 8 21:41:02 2022
X-Patchwork-Id: 12970676
From: Namhyung Kim
To: Peter Zijlstra, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Song Liu
Cc: Ingo Molnar, Mark Rutland, Alexander Shishkin, Arnaldo Carvalho de Melo,
	Jiri Olsa, LKML, Martin KaFai Lau, Yonghong Song, John Fastabend,
	KP Singh, Hao Luo, Stanislav Fomichev, bpf@vger.kernel.org, Kan Liang,
	Ravi Bangoria
Subject: [PATCH 1/3] perf: Use sample_flags for callchain
Date: Thu, 8 Sep 2022 14:41:02 -0700
Message-Id: <20220908214104.3851807-1-namhyung@kernel.org>

Use the sample_flags field in struct perf_sample_data so that
perf_callchain() is called only when needed.  Historically this was
signalled with the internal __PERF_SAMPLE_CALLCHAIN_EARLY bit; setting
PERF_SAMPLE_CALLCHAIN in sample_flags when the driver has already filled
in data->callchain carries the same information.

Signed-off-by: Namhyung Kim
Reviewed-by: Kan Liang
---
 arch/x86/events/amd/ibs.c  | 4 +++-
 arch/x86/events/intel/ds.c | 8 ++++++--
 kernel/events/core.c       | 2 +-
 3 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/arch/x86/events/amd/ibs.c b/arch/x86/events/amd/ibs.c
index c251bc44c088..dab094166693 100644
--- a/arch/x86/events/amd/ibs.c
+++ b/arch/x86/events/amd/ibs.c
@@ -798,8 +798,10 @@ static int perf_ibs_handle_irq(struct perf_ibs *perf_ibs, struct pt_regs *iregs)
 	 * recorded as part of interrupt regs. Thus we need to use rip from
 	 * interrupt regs while unwinding call stack.
 	 */
-	if (event->attr.sample_type & PERF_SAMPLE_CALLCHAIN)
+	if (event->attr.sample_type & PERF_SAMPLE_CALLCHAIN) {
 		data.callchain = perf_callchain(event, iregs);
+		data.sample_flags |= PERF_SAMPLE_CALLCHAIN;
+	}
 
 	throttle = perf_event_overflow(event, &data, &regs);
 out:
diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index a5275c235c2a..4ba6ab6d0d92 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -1546,8 +1546,10 @@ static void setup_pebs_fixed_sample_data(struct perf_event *event,
 	 * previous PMI context or an (I)RET happened between the record and
 	 * PMI.
 	 */
-	if (sample_type & PERF_SAMPLE_CALLCHAIN)
+	if (sample_type & PERF_SAMPLE_CALLCHAIN) {
 		data->callchain = perf_callchain(event, iregs);
+		data->sample_flags |= PERF_SAMPLE_CALLCHAIN;
+	}
 
 	/*
 	 * We use the interrupt regs as a base because the PEBS record does not
@@ -1719,8 +1721,10 @@ static void setup_pebs_adaptive_sample_data(struct perf_event *event,
 	 * previous PMI context or an (I)RET happened between the record and
 	 * PMI.
 	 */
-	if (sample_type & PERF_SAMPLE_CALLCHAIN)
+	if (sample_type & PERF_SAMPLE_CALLCHAIN) {
 		data->callchain = perf_callchain(event, iregs);
+		data->sample_flags |= PERF_SAMPLE_CALLCHAIN;
+	}
 
 	*regs = *iregs; /* The ip in basic is EventingIP */
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 15d27b14c827..b8af9fdbf26f 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7323,7 +7323,7 @@ void perf_prepare_sample(struct perf_event_header *header,
 	if (sample_type & PERF_SAMPLE_CALLCHAIN) {
 		int size = 1;
 
-		if (!(sample_type & __PERF_SAMPLE_CALLCHAIN_EARLY))
+		if (filtered_sample_type & PERF_SAMPLE_CALLCHAIN)
 			data->callchain = perf_callchain(event, regs);
 
 		size += data->callchain->nr;
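For context, a minimal userspace sketch of how an event ends up with the
PERF_SAMPLE_CALLCHAIN bit that the hunks above mirror into
data->sample_flags; this is not part of the patch, and the helper name and
sample period are illustrative only:

#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>

/* Illustrative helper (made up for this example, not from the patch):
 * open a CPU-cycles sampling event that asks the kernel for callchains. */
static int open_cycles_with_callchain(pid_t pid, int cpu)
{
	struct perf_event_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_CPU_CYCLES;
	attr.sample_period = 100000;
	/* Record a callchain with every sample. */
	attr.sample_type = PERF_SAMPLE_IP | PERF_SAMPLE_CALLCHAIN;

	return syscall(__NR_perf_event_open, &attr, pid, cpu, -1, 0);
}

int main(void)
{
	int fd = open_cycles_with_callchain(0, -1);	/* this process, any CPU */

	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}
	close(fd);
	return 0;
}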
From patchwork Thu Sep 8 21:41:03 2022
X-Patchwork-Id: 12970677
From: Namhyung Kim
To: Peter Zijlstra, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Song Liu
Cc: Ingo Molnar, Mark Rutland, Alexander Shishkin, Arnaldo Carvalho de Melo,
	Jiri Olsa, LKML, Martin KaFai Lau, Yonghong Song, John Fastabend,
	KP Singh, Hao Luo, Stanislav Fomichev, bpf@vger.kernel.org, Kan Liang,
	Ravi Bangoria
Subject: [PATCH 2/3] perf/bpf: Always use perf callchains if exist
Date: Thu, 8 Sep 2022 14:41:03 -0700
Message-Id: <20220908214104.3851807-2-namhyung@kernel.org>
In-Reply-To: <20220908214104.3851807-1-namhyung@kernel.org>
References: <20220908214104.3851807-1-namhyung@kernel.org>
If the perf_event has PERF_SAMPLE_CALLCHAIN, BPF can use it for the stack
trace.  The problematic cases like PEBS and IBS are already handled in the
PMU drivers, which fill in the callchain in the sample data.  For the
others, we can call perf_callchain() before invoking the BPF handler.

Signed-off-by: Namhyung Kim
Reviewed-by: Stanislav Fomichev
---
 kernel/bpf/stackmap.c |  4 ++--
 kernel/events/core.c  | 12 ++++++++++--
 2 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
index 1adbe67cdb95..aecea7451b61 100644
--- a/kernel/bpf/stackmap.c
+++ b/kernel/bpf/stackmap.c
@@ -338,7 +338,7 @@ BPF_CALL_3(bpf_get_stackid_pe, struct bpf_perf_event_data_kern *, ctx,
 	int ret;
 
 	/* perf_sample_data doesn't have callchain, use bpf_get_stackid */
-	if (!(event->attr.sample_type & __PERF_SAMPLE_CALLCHAIN_EARLY))
+	if (!(event->attr.sample_type & PERF_SAMPLE_CALLCHAIN))
 		return bpf_get_stackid((unsigned long)(ctx->regs),
 				       (unsigned long) map, flags, 0, 0);
 
@@ -506,7 +506,7 @@ BPF_CALL_4(bpf_get_stack_pe, struct bpf_perf_event_data_kern *, ctx,
 	int err = -EINVAL;
 	__u64 nr_kernel;
 
-	if (!(event->attr.sample_type & __PERF_SAMPLE_CALLCHAIN_EARLY))
+	if (!(event->attr.sample_type & PERF_SAMPLE_CALLCHAIN))
 		return __bpf_get_stack(regs, NULL, NULL, buf, size, flags);
 
 	if (unlikely(flags & ~(BPF_F_SKIP_FIELD_MASK | BPF_F_USER_STACK |
diff --git a/kernel/events/core.c b/kernel/events/core.c
index b8af9fdbf26f..2ea93ce75ad4 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -10003,8 +10003,16 @@ static void bpf_overflow_handler(struct perf_event *event,
 		goto out;
 	rcu_read_lock();
 	prog = READ_ONCE(event->prog);
-	if (prog)
+	if (prog) {
+		if (prog->call_get_stack &&
+		    (event->attr.sample_type & PERF_SAMPLE_CALLCHAIN) &&
+		    !(data->sample_flags & PERF_SAMPLE_CALLCHAIN)) {
+			data->callchain = perf_callchain(event, regs);
+			data->sample_flags |= PERF_SAMPLE_CALLCHAIN;
+		}
+
 		ret = bpf_prog_run(prog, &ctx);
+	}
 	rcu_read_unlock();
 out:
 	__this_cpu_dec(bpf_prog_active);
@@ -10030,7 +10038,7 @@ static int perf_event_set_bpf_handler(struct perf_event *event,
 	if (event->attr.precise_ip &&
 	    prog->call_get_stack &&
-	    (!(event->attr.sample_type & __PERF_SAMPLE_CALLCHAIN_EARLY) ||
+	    (!(event->attr.sample_type & PERF_SAMPLE_CALLCHAIN) ||
 	     event->attr.exclude_callchain_kernel ||
 	     event->attr.exclude_callchain_user)) {
 		/*
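To illustrate the consumer side, here is a minimal sketch (assuming
libbpf-style BTF map definitions; the map, program, and section names are
made up for the example and are not part of the patch) of a perf_event BPF
program that records stack ids.  When the underlying event sets
PERF_SAMPLE_CALLCHAIN, the bpf_get_stackid_pe() path changed above can reuse
the callchain already captured in the sample data instead of unwinding again:

// SPDX-License-Identifier: GPL-2.0
#include <linux/bpf.h>
#include <linux/perf_event.h>
#include <linux/bpf_perf_event.h>
#include <bpf/bpf_helpers.h>

/* One slot per unique stack; sized arbitrarily for the example. */
struct {
	__uint(type, BPF_MAP_TYPE_STACK_TRACE);
	__uint(max_entries, 1024);
	__uint(key_size, sizeof(__u32));
	__uint(value_size, PERF_MAX_STACK_DEPTH * sizeof(__u64));
} stacks SEC(".maps");

SEC("perf_event")
int on_sample(struct bpf_perf_event_data *ctx)
{
	/* For perf_event programs this helper resolves to bpf_get_stackid_pe();
	 * with PERF_SAMPLE_CALLCHAIN in sample_type it can use the callchain
	 * already present in the sample data. */
	long id = bpf_get_stackid(ctx, &stacks, 0);

	if (id < 0)
		return 0;
	/* A real program would record the stack id somewhere, e.g. a counter map. */
	return 0;
}

char LICENSE[] SEC("license") = "GPL";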
From patchwork Thu Sep 8 21:41:04 2022
X-Patchwork-Id: 12970678
From: Namhyung Kim
To: Peter Zijlstra, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Song Liu
Cc: Ingo Molnar, Mark Rutland, Alexander Shishkin, Arnaldo Carvalho de Melo,
	Jiri Olsa, LKML, Martin KaFai Lau, Yonghong Song, John Fastabend,
	KP Singh, Hao Luo, Stanislav Fomichev, bpf@vger.kernel.org, Kan Liang,
	Ravi Bangoria
Subject: [PATCH 3/3] perf: Kill __PERF_SAMPLE_CALLCHAIN_EARLY
Date: Thu, 8 Sep 2022 14:41:04 -0700
Message-Id: <20220908214104.3851807-3-namhyung@kernel.org>
In-Reply-To: <20220908214104.3851807-1-namhyung@kernel.org>
References: <20220908214104.3851807-1-namhyung@kernel.org>
There's no in-tree user anymore. Let's get rid of it.

Signed-off-by: Namhyung Kim
---
 arch/x86/events/amd/ibs.c       | 10 ----------
 arch/x86/events/intel/core.c    |  3 ---
 include/uapi/linux/perf_event.h |  2 --
 3 files changed, 15 deletions(-)

diff --git a/arch/x86/events/amd/ibs.c b/arch/x86/events/amd/ibs.c
index dab094166693..ce5720bfb350 100644
--- a/arch/x86/events/amd/ibs.c
+++ b/arch/x86/events/amd/ibs.c
@@ -300,16 +300,6 @@ static int perf_ibs_init(struct perf_event *event)
 	hwc->config_base = perf_ibs->msr;
 	hwc->config = config;
 
-	/*
-	 * rip recorded by IbsOpRip will not be consistent with rsp and rbp
-	 * recorded as part of interrupt regs. Thus we need to use rip from
-	 * interrupt regs while unwinding call stack. Setting _EARLY flag
-	 * makes sure we unwind call-stack before perf sample rip is set to
-	 * IbsOpRip.
-	 */
-	if (event->attr.sample_type & PERF_SAMPLE_CALLCHAIN)
-		event->attr.sample_type |= __PERF_SAMPLE_CALLCHAIN_EARLY;
-
 	return 0;
 }
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index ba101c28dcc9..fcd43878a24d 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3846,9 +3846,6 @@ static int intel_pmu_hw_config(struct perf_event *event)
 		}
 		if (x86_pmu.pebs_aliases)
 			x86_pmu.pebs_aliases(event);
-
-		if (event->attr.sample_type & PERF_SAMPLE_CALLCHAIN)
-			event->attr.sample_type |= __PERF_SAMPLE_CALLCHAIN_EARLY;
 	}
 
 	if (needs_branch_stack(event)) {
diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
index dca16582885f..e639c74cf5fb 100644
--- a/include/uapi/linux/perf_event.h
+++ b/include/uapi/linux/perf_event.h
@@ -164,8 +164,6 @@ enum perf_event_sample_format {
 	PERF_SAMPLE_WEIGHT_STRUCT	= 1U << 24,
 
 	PERF_SAMPLE_MAX = 1U << 25,		/* non-ABI */
-
-	__PERF_SAMPLE_CALLCHAIN_EARLY	= 1ULL << 63, /* non-ABI; internal use */
 };
 
 #define PERF_SAMPLE_WEIGHT_TYPE	(PERF_SAMPLE_WEIGHT | PERF_SAMPLE_WEIGHT_STRUCT)