From patchwork Thu Jun 9 11:30:39 2022
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 12875313
Date: Thu, 9 Jun 2022 13:30:39 +0200
In-Reply-To: <20220609113046.780504-1-elver@google.com>
Message-Id: <20220609113046.780504-2-elver@google.com>
References: <20220609113046.780504-1-elver@google.com>
Subject: [PATCH 1/8] perf/hw_breakpoint: Optimize list of per-task breakpoints
From: Marco Elver
To: elver@google.com, Peter Zijlstra, Frederic Weisbecker, Ingo Molnar
Cc: Thomas Gleixner, Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim, Dmitry Vyukov, linux-perf-users@vger.kernel.org, x86@kernel.org, linux-sh@vger.kernel.org, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org
X-Mailing-List:
linux-sh@vger.kernel.org On a machine with 256 CPUs, running the recently added perf breakpoint benchmark results in: | $> perf bench -r 30 breakpoint thread -b 4 -p 64 -t 64 | # Running 'breakpoint/thread' benchmark: | # Created/joined 30 threads with 4 breakpoints and 64 parallelism | Total time: 236.418 [sec] | | 123134.794271 usecs/op | 7880626.833333 usecs/op/cpu The benchmark tests inherited breakpoint perf events across many threads. Looking at a perf profile, we can see that the majority of the time is spent in various hw_breakpoint.c functions, which execute within the 'nr_bp_mutex' critical sections which then results in contention on that mutex as well: 37.27% [kernel] [k] osq_lock 34.92% [kernel] [k] mutex_spin_on_owner 12.15% [kernel] [k] toggle_bp_slot 11.90% [kernel] [k] __reserve_bp_slot The culprit here is task_bp_pinned(), which has a runtime complexity of O(#tasks) due to storing all task breakpoints in the same list and iterating through that list looking for a matching task. Clearly, this does not scale to thousands of tasks. While one option would be to make task_struct a breakpoint list node, this would only further bloat task_struct for infrequently used data. Instead, make use of the "rhashtable" variant "rhltable" which stores multiple items with the same key in a list. This results in average runtime complexity of O(1) for task_bp_pinned(). With the optimization, the benchmark shows: | $> perf bench -r 30 breakpoint thread -b 4 -p 64 -t 64 | # Running 'breakpoint/thread' benchmark: | # Created/joined 30 threads with 4 breakpoints and 64 parallelism | Total time: 0.208 [sec] | | 108.422396 usecs/op | 6939.033333 usecs/op/cpu On this particular setup that's a speedup of ~1135x. Signed-off-by: Marco Elver Reviewed-by: Dmitry Vyukov --- include/linux/perf_event.h | 3 +- kernel/events/hw_breakpoint.c | 56 ++++++++++++++++++++++------------- 2 files changed, 37 insertions(+), 22 deletions(-) diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h index 01231f1d976c..e27360436dc6 100644 --- a/include/linux/perf_event.h +++ b/include/linux/perf_event.h @@ -36,6 +36,7 @@ struct perf_guest_info_callbacks { }; #ifdef CONFIG_HAVE_HW_BREAKPOINT +#include #include #endif @@ -178,7 +179,7 @@ struct hw_perf_event { * creation and event initalization. 
*/ struct arch_hw_breakpoint info; - struct list_head bp_list; + struct rhlist_head bp_list; }; #endif struct { /* amd_iommu */ diff --git a/kernel/events/hw_breakpoint.c b/kernel/events/hw_breakpoint.c index f32320ac02fd..25c94c6e918d 100644 --- a/kernel/events/hw_breakpoint.c +++ b/kernel/events/hw_breakpoint.c @@ -28,7 +28,7 @@ #include #include #include -#include +#include #include #include #include @@ -55,7 +55,13 @@ static struct bp_cpuinfo *get_bp_info(int cpu, enum bp_type_idx type) } /* Keep track of the breakpoints attached to tasks */ -static LIST_HEAD(bp_task_head); +static struct rhltable task_bps_ht; +static const struct rhashtable_params task_bps_ht_params = { + .head_offset = offsetof(struct hw_perf_event, bp_list), + .key_offset = offsetof(struct hw_perf_event, target), + .key_len = sizeof_field(struct hw_perf_event, target), + .automatic_shrinking = true, +}; static int constraints_initialized; @@ -104,17 +110,23 @@ static unsigned int max_task_bp_pinned(int cpu, enum bp_type_idx type) */ static int task_bp_pinned(int cpu, struct perf_event *bp, enum bp_type_idx type) { - struct task_struct *tsk = bp->hw.target; + struct rhlist_head *head, *pos; struct perf_event *iter; int count = 0; - list_for_each_entry(iter, &bp_task_head, hw.bp_list) { - if (iter->hw.target == tsk && - find_slot_idx(iter->attr.bp_type) == type && + rcu_read_lock(); + head = rhltable_lookup(&task_bps_ht, &bp->hw.target, task_bps_ht_params); + if (!head) + goto out; + + rhl_for_each_entry_rcu(iter, pos, head, hw.bp_list) { + if (find_slot_idx(iter->attr.bp_type) == type && (iter->cpu < 0 || cpu == iter->cpu)) count += hw_breakpoint_weight(iter); } +out: + rcu_read_unlock(); return count; } @@ -187,7 +199,7 @@ static void toggle_bp_task_slot(struct perf_event *bp, int cpu, /* * Add/remove the given breakpoint in our constraint table */ -static void +static int toggle_bp_slot(struct perf_event *bp, bool enable, enum bp_type_idx type, int weight) { @@ -200,7 +212,7 @@ toggle_bp_slot(struct perf_event *bp, bool enable, enum bp_type_idx type, /* Pinned counter cpu profiling */ if (!bp->hw.target) { get_bp_info(bp->cpu, type)->cpu_pinned += weight; - return; + return 0; } /* Pinned counter task profiling */ @@ -208,9 +220,9 @@ toggle_bp_slot(struct perf_event *bp, bool enable, enum bp_type_idx type, toggle_bp_task_slot(bp, cpu, type, weight); if (enable) - list_add_tail(&bp->hw.bp_list, &bp_task_head); + return rhltable_insert(&task_bps_ht, &bp->hw.bp_list, task_bps_ht_params); else - list_del(&bp->hw.bp_list); + return rhltable_remove(&task_bps_ht, &bp->hw.bp_list, task_bps_ht_params); } __weak int arch_reserve_bp_slot(struct perf_event *bp) @@ -308,9 +320,7 @@ static int __reserve_bp_slot(struct perf_event *bp, u64 bp_type) if (ret) return ret; - toggle_bp_slot(bp, true, type, weight); - - return 0; + return toggle_bp_slot(bp, true, type, weight); } int reserve_bp_slot(struct perf_event *bp) @@ -335,7 +345,7 @@ static void __release_bp_slot(struct perf_event *bp, u64 bp_type) type = find_slot_idx(bp_type); weight = hw_breakpoint_weight(bp); - toggle_bp_slot(bp, false, type, weight); + WARN_ON(toggle_bp_slot(bp, false, type, weight)); } void release_bp_slot(struct perf_event *bp) @@ -679,7 +689,7 @@ static struct pmu perf_breakpoint = { int __init init_hw_breakpoint(void) { int cpu, err_cpu; - int i; + int i, ret; for (i = 0; i < TYPE_MAX; i++) nr_slots[i] = hw_breakpoint_slots(i); @@ -690,18 +700,24 @@ int __init init_hw_breakpoint(void) info->tsk_pinned = kcalloc(nr_slots[i], sizeof(int), GFP_KERNEL); - 
if (!info->tsk_pinned) - goto err_alloc; + if (!info->tsk_pinned) { + ret = -ENOMEM; + goto err; + } } } + ret = rhltable_init(&task_bps_ht, &task_bps_ht_params); + if (ret) + goto err; + constraints_initialized = 1; perf_pmu_register(&perf_breakpoint, "breakpoint", PERF_TYPE_BREAKPOINT); return register_die_notifier(&hw_breakpoint_exceptions_nb); - err_alloc: +err: for_each_possible_cpu(err_cpu) { for (i = 0; i < TYPE_MAX; i++) kfree(get_bp_info(err_cpu, i)->tsk_pinned); @@ -709,7 +725,5 @@ int __init init_hw_breakpoint(void) break; } - return -ENOMEM; + return ret; } - - From patchwork Thu Jun 9 11:30:40 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marco Elver X-Patchwork-Id: 12875314 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D9F3CC43334 for ; Thu, 9 Jun 2022 11:31:09 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S243273AbiFILbH (ORCPT ); Thu, 9 Jun 2022 07:31:07 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53804 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S243691AbiFILbF (ORCPT ); Thu, 9 Jun 2022 07:31:05 -0400 Received: from mail-ej1-x649.google.com (mail-ej1-x649.google.com [IPv6:2a00:1450:4864:20::649]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 979D639E4AE for ; Thu, 9 Jun 2022 04:31:04 -0700 (PDT) Received: by mail-ej1-x649.google.com with SMTP id s4-20020a170906500400b006feaccb3a0eso10875587ejj.11 for ; Thu, 09 Jun 2022 04:31:04 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=wnAKNm59aHUnnDDyhy7DQsgawVHUm92RhZIM8y499GE=; b=WkMVlpXnnh1ry0CE/UDyLuTslVl/KojJzPPXdlolY5n5SJY+KHvM0s4QPsDw/ImjRh H7xWhFfwc7tkUsc7mIagXa6nY0nBENo5RRHA3oEV6WH+2w2X3aHNZ85++S+erMQfqPkx vd4C/Xgk6CSQCYAYghdKE47r5IhfLhU8b566MOuhEDdT63B/h20KCT5kzXAs8AkFhe54 tBIQJD1p2jmbzcuTVidce8EB/oRJaEyTbhMHSxR4ShMTEDvDheLB49ozQ9DEvxGI/ipf EYERxwz18K4afh4hwu2YtxEc0JbjpJC0jcF2XZbqOM1AiaYhRREJCDp2LBgG31/PJS3n AuLw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=wnAKNm59aHUnnDDyhy7DQsgawVHUm92RhZIM8y499GE=; b=4FjXEpdbi1EhjB0XffQnGxN4cwcduJr7PpCKuMylOOeq8Xy6Guc3d25KnCQEDEz+zm dhbPUDsDYRvP3JXrOInQdwSglDWs2J+t9tLYcGVziq/CC/qcyJhPIjWJDAVKj/oHFNQX V38kpQuOTA/SPqjQGROBwnbZl8Ko2vg+Kn2PzzLRe4fkwHw/61Vd8FXfj1WPJE3t3YdJ nx5XxspYsAMj+cCO4/UtvczpsgJtJk6KPWJayXb0gtle737CUJdDiWGcy7I44P2iJcwe EExj23axK/B140h2Uz4/Uyot/FIvWooK/unOlaB05TLncUMr/t/aCEjkYF3HK5+u+uSI KhTw== X-Gm-Message-State: AOAM531jcWjXS/N0jjYNuspcADmUEWlJKDW4F/4F8OB/EKAQ2bKB+RJW BpYT+AwWgUZ8xSYsq8Nb+PvlVfAAIQ== X-Google-Smtp-Source: ABdhPJx73qjTeXCe10FaRzf/2yFdYj63Q0oseWtLrCparCyLdJilS10nXnGYV7x816rUyXswmnTt6sQNBQ== X-Received: from elver.muc.corp.google.com ([2a00:79e0:9c:201:dcf:e5ba:10a5:1ea5]) (user=elver job=sendgmr) by 2002:a05:6402:500b:b0:431:78d0:bf9d with SMTP id p11-20020a056402500b00b0043178d0bf9dmr17528643eda.184.1654774262890; Thu, 09 Jun 2022 04:31:02 -0700 (PDT) Date: Thu, 9 Jun 2022 13:30:40 +0200 In-Reply-To: <20220609113046.780504-1-elver@google.com> Message-Id: <20220609113046.780504-3-elver@google.com> Mime-Version: 1.0 References: 
<20220609113046.780504-1-elver@google.com>
Subject: [PATCH 2/8] perf/hw_breakpoint: Mark data __ro_after_init
From: Marco Elver
To: elver@google.com, Peter Zijlstra, Frederic Weisbecker, Ingo Molnar
Cc: Thomas Gleixner, Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim, Dmitry Vyukov, linux-perf-users@vger.kernel.org, x86@kernel.org, linux-sh@vger.kernel.org, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org
X-Mailing-List: linux-sh@vger.kernel.org

Mark read-only data after initialization as __ro_after_init. While we are here, turn 'constraints_initialized' into a bool.

Signed-off-by: Marco Elver
Reviewed-by: Dmitry Vyukov
---
 kernel/events/hw_breakpoint.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/events/hw_breakpoint.c b/kernel/events/hw_breakpoint.c
index 25c94c6e918d..1f718745d569 100644
--- a/kernel/events/hw_breakpoint.c
+++ b/kernel/events/hw_breakpoint.c
@@ -47,7 +47,7 @@ struct bp_cpuinfo {
 };

 static DEFINE_PER_CPU(struct bp_cpuinfo, bp_cpuinfo[TYPE_MAX]);
-static int nr_slots[TYPE_MAX];
+static int nr_slots[TYPE_MAX] __ro_after_init;

 static struct bp_cpuinfo *get_bp_info(int cpu, enum bp_type_idx type)
 {
@@ -63,7 +63,7 @@ static const struct rhashtable_params task_bps_ht_params = {
 	.automatic_shrinking = true,
 };

-static int constraints_initialized;
+static bool constraints_initialized __ro_after_init;

 /* Gather the number of total pinned and un-pinned bp in a cpuset */
 struct bp_busy_slots {
@@ -711,7 +711,7 @@ int __init init_hw_breakpoint(void)
 	if (ret)
 		goto err;

-	constraints_initialized = 1;
+	constraints_initialized = true;

 	perf_pmu_register(&perf_breakpoint, "breakpoint", PERF_TYPE_BREAKPOINT);

From patchwork Thu Jun 9 11:30:41 2022
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 12875315
Date: Thu, 9 Jun 2022 13:30:41 +0200
In-Reply-To: <20220609113046.780504-1-elver@google.com>
Message-Id: <20220609113046.780504-4-elver@google.com>
References: <20220609113046.780504-1-elver@google.com>
Subject: [PATCH 3/8] perf/hw_breakpoint: Optimize constant number of breakpoint slots
From: Marco Elver
To: elver@google.com, Peter Zijlstra, Frederic Weisbecker, Ingo Molnar
Cc: Thomas Gleixner, Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim, Dmitry Vyukov, linux-perf-users@vger.kernel.org, x86@kernel.org, linux-sh@vger.kernel.org, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org
X-Mailing-List: linux-sh@vger.kernel.org

Optimize internal hw_breakpoint state if the architecture's number of breakpoint slots is constant. This avoids several kmalloc() calls and potentially unnecessary failures if the allocations fail, as well as subtly improves code generation and cache locality.

The protocol is that if an architecture defines hw_breakpoint_slots via the preprocessor, it must be constant and the same for all types.
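For illustration, a condensed sketch of the compile-time vs. boot-time slot selection this protocol enables (simplified from the kernel/events/hw_breakpoint.c hunks in the diff below; not the verbatim kernel code):

#ifdef hw_breakpoint_slots
/* Constant slot count: no per-CPU kcalloc() needed, and the constant can
 * be folded at each use site. All types must report the same count. */
static_assert(hw_breakpoint_slots(TYPE_INST) == hw_breakpoint_slots(TYPE_DATA));
static inline int hw_breakpoint_slots_cached(int type)	{ return hw_breakpoint_slots(type); }
static inline int init_breakpoint_slots(void)		{ return 0; }
#else
/* Dynamic slot count: query the architecture once at boot and cache it. */
static int __nr_bp_slots[TYPE_MAX] __ro_after_init;
static inline int hw_breakpoint_slots_cached(int type)	{ return __nr_bp_slots[type]; }
#endif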
Signed-off-by: Marco Elver Acked-by: Dmitry Vyukov --- arch/sh/include/asm/hw_breakpoint.h | 5 +- arch/x86/include/asm/hw_breakpoint.h | 5 +- kernel/events/hw_breakpoint.c | 92 ++++++++++++++++++---------- 3 files changed, 62 insertions(+), 40 deletions(-) diff --git a/arch/sh/include/asm/hw_breakpoint.h b/arch/sh/include/asm/hw_breakpoint.h index 199d17b765f2..361a0f57bdeb 100644 --- a/arch/sh/include/asm/hw_breakpoint.h +++ b/arch/sh/include/asm/hw_breakpoint.h @@ -48,10 +48,7 @@ struct pmu; /* Maximum number of UBC channels */ #define HBP_NUM 2 -static inline int hw_breakpoint_slots(int type) -{ - return HBP_NUM; -} +#define hw_breakpoint_slots(type) (HBP_NUM) /* arch/sh/kernel/hw_breakpoint.c */ extern int arch_check_bp_in_kernelspace(struct arch_hw_breakpoint *hw); diff --git a/arch/x86/include/asm/hw_breakpoint.h b/arch/x86/include/asm/hw_breakpoint.h index a1f0e90d0818..0bc931cd0698 100644 --- a/arch/x86/include/asm/hw_breakpoint.h +++ b/arch/x86/include/asm/hw_breakpoint.h @@ -44,10 +44,7 @@ struct arch_hw_breakpoint { /* Total number of available HW breakpoint registers */ #define HBP_NUM 4 -static inline int hw_breakpoint_slots(int type) -{ - return HBP_NUM; -} +#define hw_breakpoint_slots(type) (HBP_NUM) struct perf_event_attr; struct perf_event; diff --git a/kernel/events/hw_breakpoint.c b/kernel/events/hw_breakpoint.c index 1f718745d569..8e939723f27d 100644 --- a/kernel/events/hw_breakpoint.c +++ b/kernel/events/hw_breakpoint.c @@ -41,13 +41,16 @@ struct bp_cpuinfo { /* Number of pinned cpu breakpoints in a cpu */ unsigned int cpu_pinned; /* tsk_pinned[n] is the number of tasks having n+1 breakpoints */ +#ifdef hw_breakpoint_slots + unsigned int tsk_pinned[hw_breakpoint_slots(0)]; +#else unsigned int *tsk_pinned; +#endif /* Number of non-pinned cpu/task breakpoints in a cpu */ unsigned int flexible; /* XXX: placeholder, see fetch_this_slot() */ }; static DEFINE_PER_CPU(struct bp_cpuinfo, bp_cpuinfo[TYPE_MAX]); -static int nr_slots[TYPE_MAX] __ro_after_init; static struct bp_cpuinfo *get_bp_info(int cpu, enum bp_type_idx type) { @@ -74,6 +77,54 @@ struct bp_busy_slots { /* Serialize accesses to the above constraints */ static DEFINE_MUTEX(nr_bp_mutex); +#ifdef hw_breakpoint_slots +/* + * Number of breakpoint slots is constant, and the same for all types. + */ +static_assert(hw_breakpoint_slots(TYPE_INST) == hw_breakpoint_slots(TYPE_DATA)); +static inline int hw_breakpoint_slots_cached(int type) { return hw_breakpoint_slots(type); } +static inline int init_breakpoint_slots(void) { return 0; } +#else +/* + * Dynamic number of breakpoint slots. 
+ */ +static int __nr_bp_slots[TYPE_MAX] __ro_after_init; + +static inline int hw_breakpoint_slots_cached(int type) +{ + return __nr_bp_slots[type]; +} + +static __init int init_breakpoint_slots(void) +{ + int i, cpu, err_cpu; + + for (i = 0; i < TYPE_MAX; i++) + __nr_bp_slots[i] = hw_breakpoint_slots(i); + + for_each_possible_cpu(cpu) { + for (i = 0; i < TYPE_MAX; i++) { + struct bp_cpuinfo *info = get_bp_info(cpu, i); + + info->tsk_pinned = kcalloc(__nr_bp_slots[i], sizeof(int), GFP_KERNEL); + if (!info->tsk_pinned) + goto err; + } + } + + return 0; +err: + for_each_possible_cpu(err_cpu) { + for (i = 0; i < TYPE_MAX; i++) + kfree(get_bp_info(err_cpu, i)->tsk_pinned); + if (err_cpu == cpu) + break; + } + + return -ENOMEM; +} +#endif + __weak int hw_breakpoint_weight(struct perf_event *bp) { return 1; @@ -96,7 +147,7 @@ static unsigned int max_task_bp_pinned(int cpu, enum bp_type_idx type) unsigned int *tsk_pinned = get_bp_info(cpu, type)->tsk_pinned; int i; - for (i = nr_slots[type] - 1; i >= 0; i--) { + for (i = hw_breakpoint_slots_cached(type) - 1; i >= 0; i--) { if (tsk_pinned[i] > 0) return i + 1; } @@ -313,7 +364,7 @@ static int __reserve_bp_slot(struct perf_event *bp, u64 bp_type) fetch_this_slot(&slots, weight); /* Flexible counters need to keep at least one slot */ - if (slots.pinned + (!!slots.flexible) > nr_slots[type]) + if (slots.pinned + (!!slots.flexible) > hw_breakpoint_slots_cached(type)) return -ENOSPC; ret = arch_reserve_bp_slot(bp); @@ -688,42 +739,19 @@ static struct pmu perf_breakpoint = { int __init init_hw_breakpoint(void) { - int cpu, err_cpu; - int i, ret; - - for (i = 0; i < TYPE_MAX; i++) - nr_slots[i] = hw_breakpoint_slots(i); - - for_each_possible_cpu(cpu) { - for (i = 0; i < TYPE_MAX; i++) { - struct bp_cpuinfo *info = get_bp_info(cpu, i); - - info->tsk_pinned = kcalloc(nr_slots[i], sizeof(int), - GFP_KERNEL); - if (!info->tsk_pinned) { - ret = -ENOMEM; - goto err; - } - } - } + int ret; ret = rhltable_init(&task_bps_ht, &task_bps_ht_params); if (ret) - goto err; + return ret; + + ret = init_breakpoint_slots(); + if (ret) + return ret; constraints_initialized = true; perf_pmu_register(&perf_breakpoint, "breakpoint", PERF_TYPE_BREAKPOINT); return register_die_notifier(&hw_breakpoint_exceptions_nb); - -err: - for_each_possible_cpu(err_cpu) { - for (i = 0; i < TYPE_MAX; i++) - kfree(get_bp_info(err_cpu, i)->tsk_pinned); - if (err_cpu == cpu) - break; - } - - return ret; } From patchwork Thu Jun 9 11:30:42 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marco Elver X-Patchwork-Id: 12875316 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 36AB9C433EF for ; Thu, 9 Jun 2022 11:31:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S243863AbiFILbO (ORCPT ); Thu, 9 Jun 2022 07:31:14 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54448 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S243822AbiFILbM (ORCPT ); Thu, 9 Jun 2022 07:31:12 -0400 Received: from mail-ed1-x549.google.com (mail-ed1-x549.google.com [IPv6:2a00:1450:4864:20::549]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1C09D3A3B81 for ; Thu, 9 Jun 2022 04:31:10 -0700 (PDT) Received: by mail-ed1-x549.google.com with SMTP id 
Date: Thu, 9 Jun 2022 13:30:42 +0200
In-Reply-To: <20220609113046.780504-1-elver@google.com>
Message-Id: <20220609113046.780504-5-elver@google.com>
References: <20220609113046.780504-1-elver@google.com>
Subject: [PATCH 4/8] perf/hw_breakpoint: Make hw_breakpoint_weight() inlinable
From: Marco Elver
To: elver@google.com, Peter Zijlstra, Frederic Weisbecker, Ingo Molnar
Cc: Thomas Gleixner, Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim, Dmitry Vyukov, linux-perf-users@vger.kernel.org, x86@kernel.org, linux-sh@vger.kernel.org, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org
X-Mailing-List: linux-sh@vger.kernel.org

Because hw_breakpoint_weight() is a __weak function, the compiler always emits a real call to it. This generates unnecessarily bad code (register spills etc.) for no good reason; in fact it appears in profiles of `perf bench -r 100 breakpoint thread -b 4 -p 128 -t 512`:

    ...
    0.70%  [kernel]  [k] hw_breakpoint_weight
    ...

While a small percentage, no architecture defines its own hw_breakpoint_weight(), nor are there users outside hw_breakpoint.c, which makes its current __weak definition a poor choice.

Change hw_breakpoint_weight()'s definition to follow a protocol similar to hw_breakpoint_slots(): if the architecture's breakpoint header defines hw_breakpoint_weight(), that definition is used instead of the generic one. The result is that the generic version is inlined and no longer shows up in profiles.
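In code, the new arrangement is roughly the following (condensed from the diff below; the #ifndef default lives in kernel/events/hw_breakpoint.c):

/* Generic fallback, used only when the architecture does not provide its
 * own hw_breakpoint_weight(); being static inline, it can be folded into
 * callers instead of going through a __weak out-of-line call. */
#ifndef hw_breakpoint_weight
static inline int hw_breakpoint_weight(struct perf_event *bp)
{
	return 1;
}
#endif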
Signed-off-by: Marco Elver --- include/linux/hw_breakpoint.h | 1 - kernel/events/hw_breakpoint.c | 4 +++- 2 files changed, 3 insertions(+), 2 deletions(-) diff --git a/include/linux/hw_breakpoint.h b/include/linux/hw_breakpoint.h index 78dd7035d1e5..9fa3547acd87 100644 --- a/include/linux/hw_breakpoint.h +++ b/include/linux/hw_breakpoint.h @@ -79,7 +79,6 @@ extern int dbg_reserve_bp_slot(struct perf_event *bp); extern int dbg_release_bp_slot(struct perf_event *bp); extern int reserve_bp_slot(struct perf_event *bp); extern void release_bp_slot(struct perf_event *bp); -int hw_breakpoint_weight(struct perf_event *bp); int arch_reserve_bp_slot(struct perf_event *bp); void arch_release_bp_slot(struct perf_event *bp); void arch_unregister_hw_breakpoint(struct perf_event *bp); diff --git a/kernel/events/hw_breakpoint.c b/kernel/events/hw_breakpoint.c index 8e939723f27d..5f40c8dfa042 100644 --- a/kernel/events/hw_breakpoint.c +++ b/kernel/events/hw_breakpoint.c @@ -125,10 +125,12 @@ static __init int init_breakpoint_slots(void) } #endif -__weak int hw_breakpoint_weight(struct perf_event *bp) +#ifndef hw_breakpoint_weight +static inline int hw_breakpoint_weight(struct perf_event *bp) { return 1; } +#endif static inline enum bp_type_idx find_slot_idx(u64 bp_type) { From patchwork Thu Jun 9 11:30:43 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marco Elver X-Patchwork-Id: 12875317 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E25F9CCA47F for ; Thu, 9 Jun 2022 11:31:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S243227AbiFILbp (ORCPT ); Thu, 9 Jun 2022 07:31:45 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55320 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S243890AbiFILbV (ORCPT ); Thu, 9 Jun 2022 07:31:21 -0400 Received: from mail-ej1-x64a.google.com (mail-ej1-x64a.google.com [IPv6:2a00:1450:4864:20::64a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 511283A5BD7 for ; Thu, 9 Jun 2022 04:31:13 -0700 (PDT) Received: by mail-ej1-x64a.google.com with SMTP id mb1-20020a170906eb0100b00710c3b46a9aso6240599ejb.22 for ; Thu, 09 Jun 2022 04:31:13 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=vEnwLHfz8bnnz/z0hnI1W7+EOk8HMNN4MOfns1570U0=; b=FJgcqRt48J5Dot/Fz9Uvm/E4iUfO5qVqzissd4gJ7h4TVLWQ6eMYxF/+mDWmUYehwt kaZAdhcbjK6DjQoIPEj2ovbsH7tZR4sidZog2AVEC5Pcbv0/Fd8zdz/xgkaMk7566qWW CtXnzZyoZZQCmfhPo2iHeESBHyL1ng2CcLJaeRINF4vimizadO6EST9hw6xvtREsPhiZ Au5ALp1erVQNvVDl2CLPDLUo19W9/msQUdrJR4bOwmgKmz89mxUyWVlCwhjQLU5BI3zY jeyj/ipnnM6TeFDdk1z2+RfKLNYkSEmH9XO71edFxAxxLc8Q/xUYJwLA992cBFhp+dn3 3OPw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=vEnwLHfz8bnnz/z0hnI1W7+EOk8HMNN4MOfns1570U0=; b=eVdsWZR8iXBoiV6ITklPzLdzmIbTcc9wWNhgBogc8KL9K54c6PAfFNLbqYCS3EvBzI CqwTP9zrOO8QUwREYhDFVnXPeFHwZwZ2TjW7crOicRCTjdIaOhL4C0B0Xwiz5ONVoY3K G6cU1e+FGApPD9zPXksEOmp6Z/eMXaPYe2Kr/HNBjx8an1kMfxskzEkurlS3Uo0m+r4H mA1d+9D+KPRK/4p44EwtJ2JzOROUAb+S/4oXissvS6D959JgCmNBASOCTlQbBSsLzOF9 
LegRxFHKD9aoHEed5Yz/uFGeavmiMiIQsyi4FtQspOl8OourzXvzsNe1y41hpGQkXuEo NvoA== X-Gm-Message-State: AOAM530Ooq4mdmNlYRKvEI37LDE/YFj0PlfaRFc1ph8PQE/73/ctMTwe uhY+Auz70w9cSORkCJAM+hmkdGHI6w== X-Google-Smtp-Source: ABdhPJy6fxgLEEWOBBqF03SqL1tlVIBlJ9qpMkrzeULfbi/9xejgkRe26FJME5xe2MXNniUvCKkw5WpXAA== X-Received: from elver.muc.corp.google.com ([2a00:79e0:9c:201:dcf:e5ba:10a5:1ea5]) (user=elver job=sendgmr) by 2002:a17:906:8513:b0:711:c67f:62b6 with SMTP id i19-20020a170906851300b00711c67f62b6mr21303657ejx.657.1654774271337; Thu, 09 Jun 2022 04:31:11 -0700 (PDT) Date: Thu, 9 Jun 2022 13:30:43 +0200 In-Reply-To: <20220609113046.780504-1-elver@google.com> Message-Id: <20220609113046.780504-6-elver@google.com> Mime-Version: 1.0 References: <20220609113046.780504-1-elver@google.com> X-Mailer: git-send-email 2.36.1.255.ge46751e96f-goog Subject: [PATCH 5/8] perf/hw_breakpoint: Remove useless code related to flexible breakpoints From: Marco Elver To: elver@google.com, Peter Zijlstra , Frederic Weisbecker , Ingo Molnar Cc: Thomas Gleixner , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , Dmitry Vyukov , linux-perf-users@vger.kernel.org, x86@kernel.org, linux-sh@vger.kernel.org, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org Flexible breakpoints have never been implemented, with bp_cpuinfo::flexible always being 0. Unfortunately, they still occupy 4 bytes in each bp_cpuinfo and bp_busy_slots, as well as computing the max flexible count in fetch_bp_busy_slots(). This again causes suboptimal code generation, when we always know that `!!slots.flexible` will be 0. Just get rid of the flexible "placeholder" and remove all real code related to it. Make a note in the comment related to the constraints algorithm but don't remove them from the algorithm, so that if in future flexible breakpoints need supporting, it should be trivial to revive them (along with reverting this change). Signed-off-by: Marco Elver Acked-by: Dmitry Vyukov --- kernel/events/hw_breakpoint.c | 12 +++--------- 1 file changed, 3 insertions(+), 9 deletions(-) diff --git a/kernel/events/hw_breakpoint.c b/kernel/events/hw_breakpoint.c index 5f40c8dfa042..afe0a6007e96 100644 --- a/kernel/events/hw_breakpoint.c +++ b/kernel/events/hw_breakpoint.c @@ -46,8 +46,6 @@ struct bp_cpuinfo { #else unsigned int *tsk_pinned; #endif - /* Number of non-pinned cpu/task breakpoints in a cpu */ - unsigned int flexible; /* XXX: placeholder, see fetch_this_slot() */ }; static DEFINE_PER_CPU(struct bp_cpuinfo, bp_cpuinfo[TYPE_MAX]); @@ -71,7 +69,6 @@ static bool constraints_initialized __ro_after_init; /* Gather the number of total pinned and un-pinned bp in a cpuset */ struct bp_busy_slots { unsigned int pinned; - unsigned int flexible; }; /* Serialize accesses to the above constraints */ @@ -213,10 +210,6 @@ fetch_bp_busy_slots(struct bp_busy_slots *slots, struct perf_event *bp, if (nr > slots->pinned) slots->pinned = nr; - - nr = info->flexible; - if (nr > slots->flexible) - slots->flexible = nr; } } @@ -299,7 +292,8 @@ __weak void arch_unregister_hw_breakpoint(struct perf_event *bp) } /* - * Constraints to check before allowing this new breakpoint counter: + * Constraints to check before allowing this new breakpoint counter. Note that + * flexible breakpoints are currently unsupported -- see fetch_this_slot(). 
* * == Non-pinned counter == (Considered as pinned for now) * @@ -366,7 +360,7 @@ static int __reserve_bp_slot(struct perf_event *bp, u64 bp_type) fetch_this_slot(&slots, weight); /* Flexible counters need to keep at least one slot */ - if (slots.pinned + (!!slots.flexible) > hw_breakpoint_slots_cached(type)) + if (slots.pinned > hw_breakpoint_slots_cached(type)) return -ENOSPC; ret = arch_reserve_bp_slot(bp); From patchwork Thu Jun 9 11:30:44 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marco Elver X-Patchwork-Id: 12875318 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 07FEDC433EF for ; Thu, 9 Jun 2022 11:31:56 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237540AbiFILbx (ORCPT ); Thu, 9 Jun 2022 07:31:53 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55320 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S241162AbiFILbo (ORCPT ); Thu, 9 Jun 2022 07:31:44 -0400 Received: from mail-ej1-x649.google.com (mail-ej1-x649.google.com [IPv6:2a00:1450:4864:20::649]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E86F83A79D8 for ; Thu, 9 Jun 2022 04:31:16 -0700 (PDT) Received: by mail-ej1-x649.google.com with SMTP id q5-20020a17090676c500b00704ffb95131so10889617ejn.8 for ; Thu, 09 Jun 2022 04:31:16 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=Qrang/3zbd627ej4K3aX6CJRiMSrjsrxCenTVL581fY=; b=hrB6Z1QRdDU0UqTboVSjaTVcATPzy0QpgNxEs/1+5YavbUt9jMqUlUykVw+6gwVInq B+jlmyCEPPUrs2qC/HYc5oOQqdgpc3cGLxZZcbwF/E8edhdi/uZ7ay7BY5Od9+AzQuew vQ+x/b/Igy/b5gSJodaQSRf1NuLasDGU/RH8gh+7Unygpqj0zroRLCKC2iO9AGrtfP4z msdffUtLMCP7fbntRAM7p4WBG0nD8rTLw6OG/y9FUwL/v7skFehJOE9VF0wnHwApMqrE YO3jZX0VabvW1NuqifOcGMZ5uSWz2zzfsZMqXAQ4bb8QFPCDqL3JFGMP5e10oz6Cs6OU xggA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=Qrang/3zbd627ej4K3aX6CJRiMSrjsrxCenTVL581fY=; b=dmoPajgkqJ7ljgyMnY8jAgZ+jJPeQYCjMkO56vlHcRqOe876N0U5IKjOdh4NT/wZd8 zH28kFl4c+WYflJYy/ASo2iM9TorZi5/vg63HZs717xtV5GUZSd6/b9LK/406SErm8Le B6WAzZOeSEyPcp7+wZZNndDYEh6ImOHukc5fSJtEsC/3XLCA+lAbb9KwTZTjWr5P8A6X QHjOy28YhJ5KtAaKxS7n3fkirzirz09BoneIcRl9dZe4m9FZuiQaqlurY3bdz69as+fn UKirnf4CaqS9dgELxyu9mVN/A9OB7UFVrAnXxz5s/09z+AfHxVeXIJEhbw6G9O3iaVXo ZwfQ== X-Gm-Message-State: AOAM532QZuHbogxKtiv8dsCZk3LFHDeYCVfvzHeyIA3+9JmzIGtzV4qq eb12hxx1mPhFWib6q5w/JdrX2/lLdQ== X-Google-Smtp-Source: ABdhPJwjbG9fLgFAnR6dcJQqcMlDxZyFZjqybQveBM48Z8sSIBdT/nh0HvNMJXmPNUYAznRgKVa/OT/itw== X-Received: from elver.muc.corp.google.com ([2a00:79e0:9c:201:dcf:e5ba:10a5:1ea5]) (user=elver job=sendgmr) by 2002:aa7:c508:0:b0:42d:cc6b:df80 with SMTP id o8-20020aa7c508000000b0042dcc6bdf80mr44225331edq.393.1654774274424; Thu, 09 Jun 2022 04:31:14 -0700 (PDT) Date: Thu, 9 Jun 2022 13:30:44 +0200 In-Reply-To: <20220609113046.780504-1-elver@google.com> Message-Id: <20220609113046.780504-7-elver@google.com> Mime-Version: 1.0 References: <20220609113046.780504-1-elver@google.com> X-Mailer: git-send-email 2.36.1.255.ge46751e96f-goog Subject: [PATCH 6/8] perf/hw_breakpoint: Reduce contention with large number 
of tasks
From: Marco Elver
To: elver@google.com, Peter Zijlstra, Frederic Weisbecker, Ingo Molnar
Cc: Thomas Gleixner, Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim, Dmitry Vyukov, linux-perf-users@vger.kernel.org, x86@kernel.org, linux-sh@vger.kernel.org, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org
X-Mailing-List: linux-sh@vger.kernel.org

While optimizing task_bp_pinned()'s runtime complexity to O(1) on average helps reduce time spent in the critical section, we still suffer because everything is serialized via 'nr_bp_mutex'. Indeed, a profile shows that contention is now the biggest issue:

    95.93%  [kernel]  [k] osq_lock
     0.70%  [kernel]  [k] mutex_spin_on_owner
     0.22%  [kernel]  [k] smp_cfm_core_cond
     0.18%  [kernel]  [k] task_bp_pinned
     0.18%  [kernel]  [k] rhashtable_jhash2
     0.15%  [kernel]  [k] queued_spin_lock_slowpath

when running the breakpoint benchmark (on a system with 256 CPUs):

 | $> perf bench -r 30 breakpoint thread -b 4 -p 64 -t 64
 | # Running 'breakpoint/thread' benchmark:
 | # Created/joined 30 threads with 4 breakpoints and 64 parallelism
 |      Total time: 0.207 [sec]
 |
 |      108.267188 usecs/op
 |     6929.100000 usecs/op/cpu

The main concern when synchronizing the breakpoint constraints data is that a consistent snapshot of the per-CPU and per-task data is observed. The access pattern is as follows:

 1. If the target is a task: the task's pinned breakpoints are counted,
    checked for space, and then appended to; only bp_cpuinfo::cpu_pinned
    is used to check for conflicts with CPU-only breakpoints;
    bp_cpuinfo::tsk_pinned are incremented/decremented, but otherwise
    unused.

 2. If the target is a CPU: bp_cpuinfo::cpu_pinned are counted, along
    with bp_cpuinfo::tsk_pinned; after a successful check, cpu_pinned is
    incremented. No per-task breakpoints are checked.

Since rhltable safely synchronizes insertions/deletions, we can allow concurrency as follows:

 1. If the target is a task: independent tasks may update and check the
    constraints concurrently, but same-task target calls need to be
    serialized; since bp_cpuinfo::tsk_pinned is only updated, but not
    checked, these modifications can happen concurrently by switching
    tsk_pinned to atomic_t.

 2. If the target is a CPU: access to the per-CPU constraints needs to
    be serialized with other CPU-target and task-target callers (to
    stabilize the bp_cpuinfo::tsk_pinned snapshot).

We can allow the above concurrency by introducing a reader-writer lock for the per-CPU constraints data (bp_cpuinfo_lock), and per-task mutexes (task_sharded_mtx):

 1. If the target is a task: acquire its task_sharded_mtx, and acquire
    bp_cpuinfo_lock as a reader.

 2. If the target is a CPU: acquire bp_cpuinfo_lock as a writer.

With these changes, contention with thousands of tasks is reduced to the point where waiting on locking no longer dominates the profile:

 | $> perf bench -r 30 breakpoint thread -b 4 -p 64 -t 64
 | # Running 'breakpoint/thread' benchmark:
 | # Created/joined 30 threads with 4 breakpoints and 64 parallelism
 |      Total time: 0.080 [sec]
 |
 |       42.048437 usecs/op
 |     2691.100000 usecs/op/cpu

    21.31%  [kernel]  [k] task_bp_pinned
    17.49%  [kernel]  [k] rhashtable_jhash2
     5.29%  [kernel]  [k] toggle_bp_slot
     4.45%  [kernel]  [k] mutex_spin_on_owner
     3.72%  [kernel]  [k] bcmp

On this particular setup that's a speedup of 2.5x.
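As a compact illustration of the locking rules above, the lock helper introduced by this patch looks roughly as follows (condensed from the diff below):

static struct mutex *bp_constraints_lock(struct perf_event *bp)
{
	struct mutex *mtx = get_task_sharded_mtx(bp);	/* NULL for CPU-target breakpoints */

	if (mtx) {
		/* Task target: serialize same-task callers via the sharded
		 * mutex; take bp_cpuinfo_lock as reader so that independent
		 * tasks can update tsk_pinned concurrently. */
		mutex_lock(mtx);
		read_lock(&bp_cpuinfo_lock);
	} else {
		/* CPU target: the writer lock stabilizes all per-CPU data,
		 * including every tsk_pinned snapshot. */
		write_lock(&bp_cpuinfo_lock);
	}
	return mtx;
}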
We're also getting closer to the theoretical ideal performance through optimizations in hw_breakpoint.c -- constraints accounting disabled: | perf bench -r 30 breakpoint thread -b 4 -p 64 -t 64 | # Running 'breakpoint/thread' benchmark: | # Created/joined 30 threads with 4 breakpoints and 64 parallelism | Total time: 0.067 [sec] | | 35.286458 usecs/op | 2258.333333 usecs/op/cpu Which means the current implementation is ~19% slower than the theoretical ideal. For reference, performance without any breakpoints: | $> bench -r 30 breakpoint thread -b 0 -p 64 -t 64 | # Running 'breakpoint/thread' benchmark: | # Created/joined 30 threads with 0 breakpoints and 64 parallelism | Total time: 0.060 [sec] | | 31.365625 usecs/op | 2007.400000 usecs/op/cpu The theoretical ideal is only ~12% slower than no breakpoints at all. The current implementation is ~34% slower than no breakpoints at all. (On a system with 256 CPUs.) Signed-off-by: Marco Elver --- kernel/events/hw_breakpoint.c | 155 ++++++++++++++++++++++++++++------ 1 file changed, 128 insertions(+), 27 deletions(-) diff --git a/kernel/events/hw_breakpoint.c b/kernel/events/hw_breakpoint.c index afe0a6007e96..08c9ed0626e4 100644 --- a/kernel/events/hw_breakpoint.c +++ b/kernel/events/hw_breakpoint.c @@ -17,6 +17,7 @@ * This file contains the arch-independent routines. */ +#include #include #include #include @@ -24,8 +25,10 @@ #include #include #include +#include #include #include +#include #include #include #include @@ -42,9 +45,9 @@ struct bp_cpuinfo { unsigned int cpu_pinned; /* tsk_pinned[n] is the number of tasks having n+1 breakpoints */ #ifdef hw_breakpoint_slots - unsigned int tsk_pinned[hw_breakpoint_slots(0)]; + atomic_t tsk_pinned[hw_breakpoint_slots(0)]; #else - unsigned int *tsk_pinned; + atomic_t *tsk_pinned; #endif }; @@ -71,8 +74,81 @@ struct bp_busy_slots { unsigned int pinned; }; -/* Serialize accesses to the above constraints */ -static DEFINE_MUTEX(nr_bp_mutex); +/* + * Synchronizes accesses to the per-CPU constraints; users of data in bp_cpuinfo + * must acquire bp_cpuinfo_lock as writer to get a stable snapshot of all CPUs' + * constraints. Modifications without use may only acquire bp_cpuinfo_lock as a + * reader, but must otherwise ensure modifications are never lost. + */ +static DEFINE_RWLOCK(bp_cpuinfo_lock); + +/* + * Synchronizes accesses to the per-task breakpoint list in task_bps_ht. Since + * rhltable synchronizes concurrent insertions/deletions, independent tasks may + * insert/delete concurrently; therefore, a mutex per task would be sufficient. + * + * To avoid bloating task_struct with infrequently used data, use a sharded + * mutex that scales with number of CPUs. + */ +static DEFINE_PER_CPU(struct mutex, task_sharded_mtx); + +static struct mutex *get_task_sharded_mtx(struct perf_event *bp) +{ + int shard; + + if (!bp->hw.target) + return NULL; + + /* + * Compute a valid shard index into per-CPU data. 
+ */ + shard = task_pid_nr(bp->hw.target) % nr_cpu_ids; + shard = cpumask_next(shard - 1, cpu_possible_mask); + if (shard >= nr_cpu_ids) + shard = cpumask_first(cpu_possible_mask); + + return per_cpu_ptr(&task_sharded_mtx, shard); +} + +static struct mutex *bp_constraints_lock(struct perf_event *bp) +{ + struct mutex *mtx = get_task_sharded_mtx(bp); + + if (mtx) { + mutex_lock(mtx); + read_lock(&bp_cpuinfo_lock); + } else { + write_lock(&bp_cpuinfo_lock); + } + + return mtx; +} + +static void bp_constraints_unlock(struct mutex *mtx) +{ + if (mtx) { + read_unlock(&bp_cpuinfo_lock); + mutex_unlock(mtx); + } else { + write_unlock(&bp_cpuinfo_lock); + } +} + +static bool bp_constraints_is_locked(struct perf_event *bp) +{ + struct mutex *mtx = get_task_sharded_mtx(bp); + + return (mtx ? mutex_is_locked(mtx) : false) || + rwlock_is_contended(&bp_cpuinfo_lock); +} + +static inline void assert_bp_constraints_lock_held(struct perf_event *bp) +{ + lockdep_assert_held(&bp_cpuinfo_lock); + /* Don't call get_task_sharded_mtx() if lockdep is disabled. */ + if (IS_ENABLED(CONFIG_LOCKDEP) && bp->hw.target) + lockdep_assert_held(get_task_sharded_mtx(bp)); +} #ifdef hw_breakpoint_slots /* @@ -103,7 +179,7 @@ static __init int init_breakpoint_slots(void) for (i = 0; i < TYPE_MAX; i++) { struct bp_cpuinfo *info = get_bp_info(cpu, i); - info->tsk_pinned = kcalloc(__nr_bp_slots[i], sizeof(int), GFP_KERNEL); + info->tsk_pinned = kcalloc(__nr_bp_slots[i], sizeof(atomic_t), GFP_KERNEL); if (!info->tsk_pinned) goto err; } @@ -143,11 +219,19 @@ static inline enum bp_type_idx find_slot_idx(u64 bp_type) */ static unsigned int max_task_bp_pinned(int cpu, enum bp_type_idx type) { - unsigned int *tsk_pinned = get_bp_info(cpu, type)->tsk_pinned; + atomic_t *tsk_pinned = get_bp_info(cpu, type)->tsk_pinned; int i; + /* + * At this point we want to have acquired the bp_cpuinfo_lock as a + * writer to ensure that there are no concurrent writers in + * toggle_bp_task_slot() to tsk_pinned, and we get a stable snapshot. + */ + lockdep_assert_held_write(&bp_cpuinfo_lock); + for (i = hw_breakpoint_slots_cached(type) - 1; i >= 0; i--) { - if (tsk_pinned[i] > 0) + ASSERT_EXCLUSIVE_WRITER(tsk_pinned[i]); /* Catch unexpected writers. */ + if (atomic_read(&tsk_pinned[i]) > 0) return i + 1; } @@ -164,6 +248,11 @@ static int task_bp_pinned(int cpu, struct perf_event *bp, enum bp_type_idx type) struct perf_event *iter; int count = 0; + /* + * We need a stable snapshot of the per-task breakpoint list. + */ + assert_bp_constraints_lock_held(bp); + rcu_read_lock(); head = rhltable_lookup(&task_bps_ht, &bp->hw.target, task_bps_ht_params); if (!head) @@ -230,16 +319,25 @@ fetch_this_slot(struct bp_busy_slots *slots, int weight) static void toggle_bp_task_slot(struct perf_event *bp, int cpu, enum bp_type_idx type, int weight) { - unsigned int *tsk_pinned = get_bp_info(cpu, type)->tsk_pinned; + atomic_t *tsk_pinned = get_bp_info(cpu, type)->tsk_pinned; int old_idx, new_idx; + /* + * If bp->hw.target, tsk_pinned is only modified, but not used + * otherwise. We can permit concurrent updates as long as there are no + * other uses: having acquired bp_cpuinfo_lock as a reader allows + * concurrent updates here. Uses of tsk_pinned will require acquiring + * bp_cpuinfo_lock as a writer to stabilize tsk_pinned's value. 
+ */ + lockdep_assert_held_read(&bp_cpuinfo_lock); + old_idx = task_bp_pinned(cpu, bp, type) - 1; new_idx = old_idx + weight; if (old_idx >= 0) - tsk_pinned[old_idx]--; + atomic_dec(&tsk_pinned[old_idx]); if (new_idx >= 0) - tsk_pinned[new_idx]++; + atomic_inc(&tsk_pinned[new_idx]); } /* @@ -257,6 +355,7 @@ toggle_bp_slot(struct perf_event *bp, bool enable, enum bp_type_idx type, /* Pinned counter cpu profiling */ if (!bp->hw.target) { + lockdep_assert_held_write(&bp_cpuinfo_lock); get_bp_info(bp->cpu, type)->cpu_pinned += weight; return 0; } @@ -265,6 +364,11 @@ toggle_bp_slot(struct perf_event *bp, bool enable, enum bp_type_idx type, for_each_cpu(cpu, cpumask) toggle_bp_task_slot(bp, cpu, type, weight); + /* + * Readers want a stable snapshot of the per-task breakpoint list. + */ + assert_bp_constraints_lock_held(bp); + if (enable) return rhltable_insert(&task_bps_ht, &bp->hw.bp_list, task_bps_ht_params); else @@ -372,14 +476,10 @@ static int __reserve_bp_slot(struct perf_event *bp, u64 bp_type) int reserve_bp_slot(struct perf_event *bp) { - int ret; - - mutex_lock(&nr_bp_mutex); - - ret = __reserve_bp_slot(bp, bp->attr.bp_type); - - mutex_unlock(&nr_bp_mutex); + struct mutex *mtx = bp_constraints_lock(bp); + int ret = __reserve_bp_slot(bp, bp->attr.bp_type); + bp_constraints_unlock(mtx); return ret; } @@ -397,12 +497,11 @@ static void __release_bp_slot(struct perf_event *bp, u64 bp_type) void release_bp_slot(struct perf_event *bp) { - mutex_lock(&nr_bp_mutex); + struct mutex *mtx = bp_constraints_lock(bp); arch_unregister_hw_breakpoint(bp); __release_bp_slot(bp, bp->attr.bp_type); - - mutex_unlock(&nr_bp_mutex); + bp_constraints_unlock(mtx); } static int __modify_bp_slot(struct perf_event *bp, u64 old_type, u64 new_type) @@ -429,11 +528,10 @@ static int __modify_bp_slot(struct perf_event *bp, u64 old_type, u64 new_type) static int modify_bp_slot(struct perf_event *bp, u64 old_type, u64 new_type) { - int ret; + struct mutex *mtx = bp_constraints_lock(bp); + int ret = __modify_bp_slot(bp, old_type, new_type); - mutex_lock(&nr_bp_mutex); - ret = __modify_bp_slot(bp, old_type, new_type); - mutex_unlock(&nr_bp_mutex); + bp_constraints_unlock(mtx); return ret; } @@ -444,7 +542,7 @@ static int modify_bp_slot(struct perf_event *bp, u64 old_type, u64 new_type) */ int dbg_reserve_bp_slot(struct perf_event *bp) { - if (mutex_is_locked(&nr_bp_mutex)) + if (bp_constraints_is_locked(bp)) return -1; return __reserve_bp_slot(bp, bp->attr.bp_type); @@ -452,7 +550,7 @@ int dbg_reserve_bp_slot(struct perf_event *bp) int dbg_release_bp_slot(struct perf_event *bp) { - if (mutex_is_locked(&nr_bp_mutex)) + if (bp_constraints_is_locked(bp)) return -1; __release_bp_slot(bp, bp->attr.bp_type); @@ -735,7 +833,10 @@ static struct pmu perf_breakpoint = { int __init init_hw_breakpoint(void) { - int ret; + int cpu, ret; + + for_each_possible_cpu(cpu) + mutex_init(&per_cpu(task_sharded_mtx, cpu)); ret = rhltable_init(&task_bps_ht, &task_bps_ht_params); if (ret) From patchwork Thu Jun 9 11:30:45 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marco Elver X-Patchwork-Id: 12875319 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6C13DC433EF for ; Thu, 9 Jun 2022 11:32:01 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235925AbiFILb6 
(ORCPT ); Thu, 9 Jun 2022 07:31:58 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54356 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S243919AbiFILbp (ORCPT ); Thu, 9 Jun 2022 07:31:45 -0400 Received: from mail-ed1-x549.google.com (mail-ed1-x549.google.com [IPv6:2a00:1450:4864:20::549]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DE5243A79F9 for ; Thu, 9 Jun 2022 04:31:18 -0700 (PDT) Received: by mail-ed1-x549.google.com with SMTP id m5-20020a056402430500b004319d8ba8afso4154432edc.5 for ; Thu, 09 Jun 2022 04:31:18 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=WcGBHtbaqL7btwnwCehWzC9DdtYID6QXS6Q+AzJHJpo=; b=FUHqgrymmD/1yS6eajWNPNkLHwczgi647YPiMvdMd8wKh4/2sjqiaRXlAA620pm4Mv xL9r1xgoQJCeRUF1AfHW4m3wwQ8dBMPuJAP+MS3Gf5Y4Q60vW28etT5n4Qhzynpw2KlW SSVheyu7eK0bXuB94IqvRRPEZWUAGiG94P9UljvrtyMvC/mSSOEArG+bKmndPV4CGitO wDBW7o2Agqcv1HgI7082Kz0V0EtABzWdXhs7xBTwIj93xRyEar5DBQPibE84GDP4kT4l aCFjysDsm/A75pLI9BoCvtbcKb45WhhYACi7rL+A8fAmZPWBVhJtRTNOPQ0mx7QAsnKn 2JAA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=WcGBHtbaqL7btwnwCehWzC9DdtYID6QXS6Q+AzJHJpo=; b=bpUAIoS+VCJKBRzmlT9zWnEdbjURoYXm43zLT3pKlDMeLcrZeJcagnzpDQG/QB66Ng 2LylmwkiAaTy9/eZtHYfHALq92E/exzXpZILPHOJHlRkeLfmkj3G/Z3DZgZ/E6YW+AYg rg5+jdBb10QTRDlXiPOVzbQVY5nKZSJCqgoe/mF5pKYXe379VLU6mibLDVX0i/tC91CV rAC7AdIS2u4nKD3r/JeKmPS+DoFyA31sfIE1r75AwCzKRiO4dFE1vbDWrWpgYcbnJVkf a9TE3gLr3QeQ3FmoNRGcFdKl9VqkAo34gTUuyLYWHl6Jp4i3nXzoCYBdnCMAULtjjRXJ S+lA== X-Gm-Message-State: AOAM531pJl8nByJaUnVhrWrMGNrSNbV1uccxfes7Zoo2/i+ZN3LxTuk7 JtnA0dbHyDP6IL+k3m2ogYzjJ5bEFA== X-Google-Smtp-Source: ABdhPJxvw8djj03eIGJz3wC6dhNkuY0kMcjHQnolA0VMoUo95H0kQsdBeZko+WfPF714kuHSsxB//Ounew== X-Received: from elver.muc.corp.google.com ([2a00:79e0:9c:201:dcf:e5ba:10a5:1ea5]) (user=elver job=sendgmr) by 2002:aa7:c306:0:b0:42d:d4cc:c606 with SMTP id l6-20020aa7c306000000b0042dd4ccc606mr44735253edq.341.1654774277090; Thu, 09 Jun 2022 04:31:17 -0700 (PDT) Date: Thu, 9 Jun 2022 13:30:45 +0200 In-Reply-To: <20220609113046.780504-1-elver@google.com> Message-Id: <20220609113046.780504-8-elver@google.com> Mime-Version: 1.0 References: <20220609113046.780504-1-elver@google.com> X-Mailer: git-send-email 2.36.1.255.ge46751e96f-goog Subject: [PATCH 7/8] perf/hw_breakpoint: Optimize task_bp_pinned() if CPU-independent From: Marco Elver To: elver@google.com, Peter Zijlstra , Frederic Weisbecker , Ingo Molnar Cc: Thomas Gleixner , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , Dmitry Vyukov , linux-perf-users@vger.kernel.org, x86@kernel.org, linux-sh@vger.kernel.org, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org Running the perf benchmark with (note: more aggressive parameters vs. 
preceding changes, but same host with 256 CPUs): | $> perf bench -r 100 breakpoint thread -b 4 -p 128 -t 512 | # Running 'breakpoint/thread' benchmark: | # Created/joined 100 threads with 4 breakpoints and 128 parallelism | Total time: 1.953 [sec] | | 38.146289 usecs/op | 4882.725000 usecs/op/cpu 16.29% [kernel] [k] rhashtable_jhash2 16.19% [kernel] [k] osq_lock 14.22% [kernel] [k] queued_spin_lock_slowpath 8.58% [kernel] [k] task_bp_pinned 8.30% [kernel] [k] mutex_spin_on_owner 4.03% [kernel] [k] smp_cfm_core_cond 2.97% [kernel] [k] toggle_bp_slot 2.94% [kernel] [k] bcmp We can see that a majority of the time is now spent hashing task pointers to index into task_bps_ht in task_bp_pinned(). However, if task_bp_pinned()'s computation is independent of any CPU, i.e. always `iter->cpu < 0`, the result for each invocation will be identical. With increasing CPU-count, this problem worsens. Instead, identify if every call to task_bp_pinned() is CPU-independent, and cache the result. Use the cached result instead of a call to task_bp_pinned(), now __task_bp_pinned(), with task_bp_pinned() deciding if the cached result can be used. After this optimization: 21.96% [kernel] [k] queued_spin_lock_slowpath 16.39% [kernel] [k] osq_lock 9.82% [kernel] [k] toggle_bp_slot 9.81% [kernel] [k] find_next_bit 4.93% [kernel] [k] mutex_spin_on_owner 4.71% [kernel] [k] smp_cfm_core_cond 4.30% [kernel] [k] __reserve_bp_slot 2.65% [kernel] [k] cpumask_next Showing that the time spent hashing keys has become insignificant. With the given benchmark parameters, however, we see no statistically significant improvement in performance on the test system with 256 CPUs. This is very likely due to the benchmark parameters being too aggressive and contention elsewhere becoming dominant. Indeed, when using the less aggressive parameters from the preceding changes, we now observe: | $> perf bench -r 30 breakpoint thread -b 4 -p 64 -t 64 | # Running 'breakpoint/thread' benchmark: | # Created/joined 30 threads with 4 breakpoints and 64 parallelism | Total time: 0.071 [sec] | | 37.134896 usecs/op | 2376.633333 usecs/op/cpu Which is an improvement of 12% compared to without this optimization (baseline is 42 usecs/op). This is now only 5% slower than the theoretical ideal (constraints disabled), and 18% slower than no breakpoints at all. [ While we're here, swap task_bp_pinned()'s bp and cpu arguments to be more consistent with other functions (which have bp first, before the cpu argument). ] Signed-off-by: Marco Elver --- kernel/events/hw_breakpoint.c | 71 +++++++++++++++++++++++++---------- 1 file changed, 52 insertions(+), 19 deletions(-) diff --git a/kernel/events/hw_breakpoint.c b/kernel/events/hw_breakpoint.c index 08c9ed0626e4..3b33a4075104 100644 --- a/kernel/events/hw_breakpoint.c +++ b/kernel/events/hw_breakpoint.c @@ -242,11 +242,22 @@ static unsigned int max_task_bp_pinned(int cpu, enum bp_type_idx type) * Count the number of breakpoints of the same type and same task. * The given event must be not on the list. */ -static int task_bp_pinned(int cpu, struct perf_event *bp, enum bp_type_idx type) +struct task_bp_pinned { + /* + * If @cpu_independent is true, we can avoid calling __task_bp_pinned() + * for each CPU, since @count will be the same for each invocation. 
+	 */
+	bool cpu_independent;
+	int count;
+	struct perf_event *bp;
+	enum bp_type_idx type;
+};
+static struct task_bp_pinned
+__task_bp_pinned(struct perf_event *bp, int cpu, enum bp_type_idx type)
 {
+	struct task_bp_pinned ret = {true, 0, bp, type};
 	struct rhlist_head *head, *pos;
 	struct perf_event *iter;
-	int count = 0;
 
 	/*
 	 * We need a stable snapshot of the per-task breakpoint list.
@@ -259,14 +270,33 @@ static int task_bp_pinned(int cpu, struct perf_event *bp, enum bp_type_idx type)
 		goto out;
 
 	rhl_for_each_entry_rcu(iter, pos, head, hw.bp_list) {
-		if (find_slot_idx(iter->attr.bp_type) == type &&
-		    (iter->cpu < 0 || cpu == iter->cpu))
-			count += hw_breakpoint_weight(iter);
+		if (find_slot_idx(iter->attr.bp_type) == type) {
+			if (iter->cpu >= 0) {
+				ret.cpu_independent = false;
+				if (cpu != iter->cpu)
+					continue;
+			}
+			ret.count += hw_breakpoint_weight(iter);
+		}
 	}
 
 out:
 	rcu_read_unlock();
-	return count;
+	return ret;
+}
+
+static int
+task_bp_pinned(struct perf_event *bp, int cpu, enum bp_type_idx type,
+	       struct task_bp_pinned *cached_tbp_pinned)
+{
+	if (cached_tbp_pinned->cpu_independent) {
+		assert_bp_constraints_lock_held(bp);
+		if (!WARN_ON(cached_tbp_pinned->bp != bp || cached_tbp_pinned->type != type))
+			return cached_tbp_pinned->count;
+	}
+
+	*cached_tbp_pinned = __task_bp_pinned(bp, cpu, type);
+	return cached_tbp_pinned->count;
 }
 
 static const struct cpumask *cpumask_of_bp(struct perf_event *bp)
@@ -281,8 +311,8 @@ static const struct cpumask *cpumask_of_bp(struct perf_event *bp)
  * a given cpu (cpu > -1) or in all of them (cpu = -1).
  */
 static void
-fetch_bp_busy_slots(struct bp_busy_slots *slots, struct perf_event *bp,
-		    enum bp_type_idx type)
+fetch_bp_busy_slots(struct bp_busy_slots *slots, struct perf_event *bp, enum bp_type_idx type,
+		    struct task_bp_pinned *cached_tbp_pinned)
 {
 	const struct cpumask *cpumask = cpumask_of_bp(bp);
 	int cpu;
@@ -295,7 +325,7 @@ fetch_bp_busy_slots(struct bp_busy_slots *slots, struct perf_event *bp,
 		if (!bp->hw.target)
 			nr += max_task_bp_pinned(cpu, type);
 		else
-			nr += task_bp_pinned(cpu, bp, type);
+			nr += task_bp_pinned(bp, cpu, type, cached_tbp_pinned);
 
 		if (nr > slots->pinned)
 			slots->pinned = nr;
@@ -314,10 +344,11 @@ fetch_this_slot(struct bp_busy_slots *slots, int weight)
 }
 
 /*
- * Add a pinned breakpoint for the given task in our constraint table
+ * Add a pinned breakpoint for the given task in our constraint table.
  */
-static void toggle_bp_task_slot(struct perf_event *bp, int cpu,
-				enum bp_type_idx type, int weight)
+static void
+toggle_bp_task_slot(struct perf_event *bp, int cpu, enum bp_type_idx type, int weight,
+		    struct task_bp_pinned *cached_tbp_pinned)
 {
 	atomic_t *tsk_pinned = get_bp_info(cpu, type)->tsk_pinned;
 	int old_idx, new_idx;
@@ -331,7 +362,7 @@ static void toggle_bp_task_slot(struct perf_event *bp, int cpu,
 	 */
 	lockdep_assert_held_read(&bp_cpuinfo_lock);
 
-	old_idx = task_bp_pinned(cpu, bp, type) - 1;
+	old_idx = task_bp_pinned(bp, cpu, type, cached_tbp_pinned) - 1;
 	new_idx = old_idx + weight;
 
 	if (old_idx >= 0)
@@ -341,11 +372,11 @@ static void toggle_bp_task_slot(struct perf_event *bp, int cpu,
 }
 
 /*
- * Add/remove the given breakpoint in our constraint table
+ * Add/remove the given breakpoint in our constraint table.
  */
 static int
 toggle_bp_slot(struct perf_event *bp, bool enable, enum bp_type_idx type,
-	       int weight)
+	       int weight, struct task_bp_pinned *cached_tbp_pinned)
 {
 	const struct cpumask *cpumask = cpumask_of_bp(bp);
 	int cpu;
@@ -362,7 +393,7 @@ toggle_bp_slot(struct perf_event *bp, bool enable, enum bp_type_idx type,
 
 	/* Pinned counter task profiling */
 	for_each_cpu(cpu, cpumask)
-		toggle_bp_task_slot(bp, cpu, type, weight);
+		toggle_bp_task_slot(bp, cpu, type, weight, cached_tbp_pinned);
 
 	/*
 	 * Readers want a stable snapshot of the per-task breakpoint list.
@@ -439,6 +470,7 @@ __weak void arch_unregister_hw_breakpoint(struct perf_event *bp)
  */
 static int __reserve_bp_slot(struct perf_event *bp, u64 bp_type)
 {
+	struct task_bp_pinned cached_tbp_pinned = {};
 	struct bp_busy_slots slots = {0};
 	enum bp_type_idx type;
 	int weight;
@@ -456,7 +488,7 @@ static int __reserve_bp_slot(struct perf_event *bp, u64 bp_type)
 	type = find_slot_idx(bp_type);
 	weight = hw_breakpoint_weight(bp);
 
-	fetch_bp_busy_slots(&slots, bp, type);
+	fetch_bp_busy_slots(&slots, bp, type, &cached_tbp_pinned);
 	/*
 	 * Simulate the addition of this breakpoint to the constraints
 	 * and see the result.
@@ -471,7 +503,7 @@ static int __reserve_bp_slot(struct perf_event *bp, u64 bp_type)
 	if (ret)
 		return ret;
 
-	return toggle_bp_slot(bp, true, type, weight);
+	return toggle_bp_slot(bp, true, type, weight, &cached_tbp_pinned);
 }
 
 int reserve_bp_slot(struct perf_event *bp)
@@ -485,6 +517,7 @@ int reserve_bp_slot(struct perf_event *bp)
 
 static void __release_bp_slot(struct perf_event *bp, u64 bp_type)
 {
+	struct task_bp_pinned cached_tbp_pinned = {};
 	enum bp_type_idx type;
 	int weight;
 
@@ -492,7 +525,7 @@ static void __release_bp_slot(struct perf_event *bp, u64 bp_type)
 
 	type = find_slot_idx(bp_type);
 	weight = hw_breakpoint_weight(bp);
-	WARN_ON(toggle_bp_slot(bp, false, type, weight));
+	WARN_ON(toggle_bp_slot(bp, false, type, weight, &cached_tbp_pinned));
 }
 
 void release_bp_slot(struct perf_event *bp)

From patchwork Thu Jun 9 11:30:46 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 12875320
Date: Thu, 9 Jun 2022 13:30:46 +0200
In-Reply-To: <20220609113046.780504-1-elver@google.com>
Message-Id: <20220609113046.780504-9-elver@google.com>
Mime-Version: 1.0
References: <20220609113046.780504-1-elver@google.com>
X-Mailer: git-send-email 2.36.1.255.ge46751e96f-goog
Subject: [PATCH 8/8] perf/hw_breakpoint: Clean up headers
From: Marco Elver
To: elver@google.com, Peter Zijlstra, Frederic Weisbecker, Ingo Molnar
Cc: Thomas Gleixner, Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim, Dmitry Vyukov, linux-perf-users@vger.kernel.org, x86@kernel.org, linux-sh@vger.kernel.org, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: linux-sh@vger.kernel.org

Clean up headers:

 - Remove unused
 - Remove unused
 - Remove unused
 - Remove unused
 - Add <linux/export.h> for EXPORT_SYMBOL_GPL().
 - Sort alphabetically.
 - Move <linux/hw_breakpoint.h> to top to test it compiles on its own.

Signed-off-by: Marco Elver
Acked-by: Dmitry Vyukov
---
 kernel/events/hw_breakpoint.c | 20 +++++++++-----------
 1 file changed, 9 insertions(+), 11 deletions(-)

diff --git a/kernel/events/hw_breakpoint.c b/kernel/events/hw_breakpoint.c
index 3b33a4075104..e9aa7f2c031a 100644
--- a/kernel/events/hw_breakpoint.c
+++ b/kernel/events/hw_breakpoint.c
@@ -17,26 +17,24 @@
  * This file contains the arch-independent routines.
  */
 
+#include
+
 #include
+#include
+#include
+#include
+#include
 #include
-#include
-#include
-#include
 #include
 #include
-#include
 #include
+#include
 #include
+#include
 #include
-#include
-#include
 #include
-#include
-#include
-#include
-#include
+#include
-#include
 
 /*
  * Constraints data
  */
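
To illustrate the caching scheme from patch 7/8 outside the kernel sources,
here is a small stand-alone user-space sketch. The names used below
(struct bp, query_count(), cached_query_count()) are made up for the example
and do not exist in the kernel, and the sketch omits the locking that keeps
the cached value coherent inside __reserve_bp_slot()/__release_bp_slot().

/*
 * Illustrative sketch only (hypothetical names, no kernel API): compute a
 * per-CPU "pinned" count over a task's breakpoints, and reuse the result
 * across CPUs whenever no breakpoint is bound to a specific CPU.
 */
#include <stdbool.h>
#include <stdio.h>

struct bp {
	int cpu;     /* -1: not bound to a CPU (CPU-independent) */
	int weight;
};

struct cached_count {
	bool cpu_independent; /* result is the same for every CPU? */
	int count;
};

/* Slow path: walk all breakpoints of the task for one CPU. */
static struct cached_count query_count(const struct bp *bps, int n, int cpu)
{
	struct cached_count ret = { .cpu_independent = true, .count = 0 };

	for (int i = 0; i < n; i++) {
		if (bps[i].cpu >= 0) {
			ret.cpu_independent = false; /* result now depends on @cpu */
			if (bps[i].cpu != cpu)
				continue;
		}
		ret.count += bps[i].weight;
	}
	return ret;
}

/* Fast path: reuse the cached result when it cannot depend on @cpu. */
static int cached_query_count(const struct bp *bps, int n, int cpu,
			      struct cached_count *cache)
{
	/* A zero-initialized cache has cpu_independent == false, forcing the first query. */
	if (cache->cpu_independent)
		return cache->count;

	*cache = query_count(bps, n, cpu);
	return cache->count;
}

int main(void)
{
	/* All breakpoints CPU-independent: only the first CPU does real work. */
	struct bp bps[] = { { .cpu = -1, .weight = 1 }, { .cpu = -1, .weight = 1 } };
	struct cached_count cache = { 0 };

	for (int cpu = 0; cpu < 4; cpu++)
		printf("cpu %d: pinned = %d\n", cpu,
		       cached_query_count(bps, 2, cpu, &cache));
	return 0;
}

Built with a C99 compiler, the loop prints the same pinned count for all
four CPUs while only the first iteration walks the breakpoint list, which
mirrors how fetch_bp_busy_slots() and toggle_bp_slot() in the patch reuse
cached_tbp_pinned across the CPUs returned by cpumask_of_bp().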