From patchwork Mon Aug 29 12:47:06 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marco Elver X-Patchwork-Id: 12957733 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 79685ECAAD5 for ; Mon, 29 Aug 2022 12:56:56 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230387AbiH2M4z (ORCPT ); Mon, 29 Aug 2022 08:56:55 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56572 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230199AbiH2M4X (ORCPT ); Mon, 29 Aug 2022 08:56:23 -0400 Received: from mail-ed1-x54a.google.com (mail-ed1-x54a.google.com [IPv6:2a00:1450:4864:20::54a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0EC776581F for ; Mon, 29 Aug 2022 05:48:02 -0700 (PDT) Received: by mail-ed1-x54a.google.com with SMTP id z6-20020a05640240c600b0043e1d52fd98so5481245edb.22 for ; Mon, 29 Aug 2022 05:48:02 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc; bh=2wzFsLYHBSO+bSg9K3lLWWvk30GMleRrtWDuIqbsYXA=; b=M9Rqk2DZbWx+M0j2ZNREBFTuUmUql9FGMSZUdPStSv1BolmPhWMDlNgK+7vyNFFD5J sB4HOcmHm3qjS4OpBDfSzHbX4S4haFyminVHG6seqVv7pn+u3VXdnaVSql4j6aDu2YJ4 cSH8NpoH8lS7T3FrhvZAZAGuJ1yW0fPguBXtAP3lbFS0AQ8qnn+2vyiZZdQUUt6hP70X mHWF5w3tnlLXIWh18vif9ntbngOeROkc7fz5meyo4vV507tjq+y5Rbm7lBpb+t0hM9AT GvVWCnJc8dTM7rD6ATQjIQ+F5pjOp49ZjN9tC1AhYQoO6CRk8fw3MjIF5X/3xRhh0X+x nMnw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc; bh=2wzFsLYHBSO+bSg9K3lLWWvk30GMleRrtWDuIqbsYXA=; b=4NP3IZ20fkyHm/WUopQK+JKdgDoaXr9TXJsGyZHus46NrgFSuDKc6kvKFH83cXh8dd BEMCpF++5I3PPENTXzNzOyMWLDEJaPztyB1OByDir4CVg0RV7gahLUbkxfm+iB6teuvX ediLsji0QsfQyLJfR740r8FE68OamhRKQx+wtwGjEOUvHUu2O4AyTcC36PpMgjvky6KE 02b+bMXfmSnnZZGJXnGyAXkw4G7hQdjZlk7+lQer+6xRKuxrHuH3cp5W2XbO7/p3j0CO Wwpth03tiBCaRDB3e60kR31a3/Wq7oDftxTvREIdZFeKDh2R9XXhADEBgKvhu1D0MGiG 4Nng== X-Gm-Message-State: ACgBeo3C9jAVIi2oLWNaGWqtRABliLgNQXKsbyiX+BwGVuiVooy5xqkf CARvm7mkiv502WXzl9DBJE5+0nMlKw== X-Google-Smtp-Source: AA6agR7nspJ0efbTxsLMD8rT8iQXj1XN4d3UZQvBYWAOzeWOis7eXggMAwfZ1bw7AqzT6uKNdiec7FvHMg== X-Received: from elver.muc.corp.google.com ([2a00:79e0:9c:201:196d:4fc7:fa9c:62e3]) (user=elver job=sendgmr) by 2002:a17:907:97d3:b0:73d:8b9b:a6c1 with SMTP id js19-20020a17090797d300b0073d8b9ba6c1mr13449067ejc.71.1661777281218; Mon, 29 Aug 2022 05:48:01 -0700 (PDT) Date: Mon, 29 Aug 2022 14:47:06 +0200 In-Reply-To: <20220829124719.675715-1-elver@google.com> Mime-Version: 1.0 References: <20220829124719.675715-1-elver@google.com> X-Mailer: git-send-email 2.37.2.672.g94769d06f0-goog Message-ID: <20220829124719.675715-2-elver@google.com> Subject: [PATCH v4 01/14] perf/hw_breakpoint: Add KUnit test for constraints accounting From: Marco Elver To: elver@google.com, Peter Zijlstra , Frederic Weisbecker , Ingo Molnar Cc: Thomas Gleixner , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , Dmitry Vyukov , Michael Ellerman , linuxppc-dev@lists.ozlabs.org, linux-perf-users@vger.kernel.org, x86@kernel.org, linux-sh@vger.kernel.org, kasan-dev@googlegroups.com, 
linux-kernel@vger.kernel.org, Ian Rogers Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org Add KUnit test for hw_breakpoint constraints accounting, with various interesting mixes of breakpoint targets (some care was taken to catch interesting corner cases via bug-injection). The test cannot be built as a module because it requires access to hw_breakpoint_slots(), which is not inlinable or exported on all architectures. Signed-off-by: Marco Elver Reviewed-by: Dmitry Vyukov Acked-by: Ian Rogers --- v3: * Don't use raw_smp_processor_id(). v2: * New patch. --- kernel/events/Makefile | 1 + kernel/events/hw_breakpoint_test.c | 323 +++++++++++++++++++++++++++++ lib/Kconfig.debug | 10 + 3 files changed, 334 insertions(+) create mode 100644 kernel/events/hw_breakpoint_test.c diff --git a/kernel/events/Makefile b/kernel/events/Makefile index 8591c180b52b..91a62f566743 100644 --- a/kernel/events/Makefile +++ b/kernel/events/Makefile @@ -2,4 +2,5 @@ obj-y := core.o ring_buffer.o callchain.o obj-$(CONFIG_HAVE_HW_BREAKPOINT) += hw_breakpoint.o +obj-$(CONFIG_HW_BREAKPOINT_KUNIT_TEST) += hw_breakpoint_test.o obj-$(CONFIG_UPROBES) += uprobes.o diff --git a/kernel/events/hw_breakpoint_test.c b/kernel/events/hw_breakpoint_test.c new file mode 100644 index 000000000000..433c5c45e2a5 --- /dev/null +++ b/kernel/events/hw_breakpoint_test.c @@ -0,0 +1,323 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * KUnit test for hw_breakpoint constraints accounting logic. + * + * Copyright (C) 2022, Google LLC. + */ + +#include +#include +#include +#include +#include +#include + +#define TEST_REQUIRES_BP_SLOTS(test, slots) \ + do { \ + if ((slots) > get_test_bp_slots()) { \ + kunit_skip((test), "Requires breakpoint slots: %d > %d", slots, \ + get_test_bp_slots()); \ + } \ + } while (0) + +#define TEST_EXPECT_NOSPC(expr) KUNIT_EXPECT_EQ(test, -ENOSPC, PTR_ERR(expr)) + +#define MAX_TEST_BREAKPOINTS 512 + +static char break_vars[MAX_TEST_BREAKPOINTS]; +static struct perf_event *test_bps[MAX_TEST_BREAKPOINTS]; +static struct task_struct *__other_task; + +static struct perf_event *register_test_bp(int cpu, struct task_struct *tsk, int idx) +{ + struct perf_event_attr attr = {}; + + if (WARN_ON(idx < 0 || idx >= MAX_TEST_BREAKPOINTS)) + return NULL; + + hw_breakpoint_init(&attr); + attr.bp_addr = (unsigned long)&break_vars[idx]; + attr.bp_len = HW_BREAKPOINT_LEN_1; + attr.bp_type = HW_BREAKPOINT_RW; + return perf_event_create_kernel_counter(&attr, cpu, tsk, NULL, NULL); +} + +static void unregister_test_bp(struct perf_event **bp) +{ + if (WARN_ON(IS_ERR(*bp))) + return; + if (WARN_ON(!*bp)) + return; + unregister_hw_breakpoint(*bp); + *bp = NULL; +} + +static int get_test_bp_slots(void) +{ + static int slots; + + if (!slots) + slots = hw_breakpoint_slots(TYPE_DATA); + + return slots; +} + +static void fill_one_bp_slot(struct kunit *test, int *id, int cpu, struct task_struct *tsk) +{ + struct perf_event *bp = register_test_bp(cpu, tsk, *id); + + KUNIT_ASSERT_NOT_NULL(test, bp); + KUNIT_ASSERT_FALSE(test, IS_ERR(bp)); + KUNIT_ASSERT_NULL(test, test_bps[*id]); + test_bps[(*id)++] = bp; +} + +/* + * Fills up the given @cpu/@tsk with breakpoints, only leaving @skip slots free. + * + * Returns true if this can be called again, continuing at @id. 
+ */ +static bool fill_bp_slots(struct kunit *test, int *id, int cpu, struct task_struct *tsk, int skip) +{ + for (int i = 0; i < get_test_bp_slots() - skip; ++i) + fill_one_bp_slot(test, id, cpu, tsk); + + return *id + get_test_bp_slots() <= MAX_TEST_BREAKPOINTS; +} + +static int dummy_kthread(void *arg) +{ + return 0; +} + +static struct task_struct *get_other_task(struct kunit *test) +{ + struct task_struct *tsk; + + if (__other_task) + return __other_task; + + tsk = kthread_create(dummy_kthread, NULL, "hw_breakpoint_dummy_task"); + KUNIT_ASSERT_FALSE(test, IS_ERR(tsk)); + __other_task = tsk; + return __other_task; +} + +static int get_test_cpu(int num) +{ + int cpu; + + WARN_ON(num < 0); + + for_each_online_cpu(cpu) { + if (num-- <= 0) + break; + } + + return cpu; +} + +/* ===== Test cases ===== */ + +static void test_one_cpu(struct kunit *test) +{ + int idx = 0; + + fill_bp_slots(test, &idx, get_test_cpu(0), NULL, 0); + TEST_EXPECT_NOSPC(register_test_bp(-1, current, idx)); + TEST_EXPECT_NOSPC(register_test_bp(get_test_cpu(0), NULL, idx)); +} + +static void test_many_cpus(struct kunit *test) +{ + int idx = 0; + int cpu; + + /* Test that CPUs are independent. */ + for_each_online_cpu(cpu) { + bool do_continue = fill_bp_slots(test, &idx, cpu, NULL, 0); + + TEST_EXPECT_NOSPC(register_test_bp(cpu, NULL, idx)); + if (!do_continue) + break; + } +} + +static void test_one_task_on_all_cpus(struct kunit *test) +{ + int idx = 0; + + fill_bp_slots(test, &idx, -1, current, 0); + TEST_EXPECT_NOSPC(register_test_bp(-1, current, idx)); + TEST_EXPECT_NOSPC(register_test_bp(get_test_cpu(0), current, idx)); + TEST_EXPECT_NOSPC(register_test_bp(get_test_cpu(0), NULL, idx)); + /* Remove one and adding back CPU-target should work. */ + unregister_test_bp(&test_bps[0]); + fill_one_bp_slot(test, &idx, get_test_cpu(0), NULL); +} + +static void test_two_tasks_on_all_cpus(struct kunit *test) +{ + int idx = 0; + + /* Test that tasks are independent. */ + fill_bp_slots(test, &idx, -1, current, 0); + fill_bp_slots(test, &idx, -1, get_other_task(test), 0); + + TEST_EXPECT_NOSPC(register_test_bp(-1, current, idx)); + TEST_EXPECT_NOSPC(register_test_bp(-1, get_other_task(test), idx)); + TEST_EXPECT_NOSPC(register_test_bp(get_test_cpu(0), current, idx)); + TEST_EXPECT_NOSPC(register_test_bp(get_test_cpu(0), get_other_task(test), idx)); + TEST_EXPECT_NOSPC(register_test_bp(get_test_cpu(0), NULL, idx)); + /* Remove one from first task and adding back CPU-target should not work. */ + unregister_test_bp(&test_bps[0]); + TEST_EXPECT_NOSPC(register_test_bp(get_test_cpu(0), NULL, idx)); +} + +static void test_one_task_on_one_cpu(struct kunit *test) +{ + int idx = 0; + + fill_bp_slots(test, &idx, get_test_cpu(0), current, 0); + TEST_EXPECT_NOSPC(register_test_bp(-1, current, idx)); + TEST_EXPECT_NOSPC(register_test_bp(get_test_cpu(0), current, idx)); + TEST_EXPECT_NOSPC(register_test_bp(get_test_cpu(0), NULL, idx)); + /* + * Remove one and adding back CPU-target should work; this case is + * special vs. above because the task's constraints are CPU-dependent. 
+ */ + unregister_test_bp(&test_bps[0]); + fill_one_bp_slot(test, &idx, get_test_cpu(0), NULL); +} + +static void test_one_task_mixed(struct kunit *test) +{ + int idx = 0; + + TEST_REQUIRES_BP_SLOTS(test, 3); + + fill_one_bp_slot(test, &idx, get_test_cpu(0), current); + fill_bp_slots(test, &idx, -1, current, 1); + TEST_EXPECT_NOSPC(register_test_bp(-1, current, idx)); + TEST_EXPECT_NOSPC(register_test_bp(get_test_cpu(0), current, idx)); + TEST_EXPECT_NOSPC(register_test_bp(get_test_cpu(0), NULL, idx)); + + /* Transition from CPU-dependent pinned count to CPU-independent. */ + unregister_test_bp(&test_bps[0]); + unregister_test_bp(&test_bps[1]); + fill_one_bp_slot(test, &idx, get_test_cpu(0), NULL); + fill_one_bp_slot(test, &idx, get_test_cpu(0), NULL); + TEST_EXPECT_NOSPC(register_test_bp(get_test_cpu(0), NULL, idx)); +} + +static void test_two_tasks_on_one_cpu(struct kunit *test) +{ + int idx = 0; + + fill_bp_slots(test, &idx, get_test_cpu(0), current, 0); + fill_bp_slots(test, &idx, get_test_cpu(0), get_other_task(test), 0); + + TEST_EXPECT_NOSPC(register_test_bp(-1, current, idx)); + TEST_EXPECT_NOSPC(register_test_bp(-1, get_other_task(test), idx)); + TEST_EXPECT_NOSPC(register_test_bp(get_test_cpu(0), current, idx)); + TEST_EXPECT_NOSPC(register_test_bp(get_test_cpu(0), get_other_task(test), idx)); + TEST_EXPECT_NOSPC(register_test_bp(get_test_cpu(0), NULL, idx)); + /* Can still create breakpoints on some other CPU. */ + fill_bp_slots(test, &idx, get_test_cpu(1), NULL, 0); +} + +static void test_two_tasks_on_one_all_cpus(struct kunit *test) +{ + int idx = 0; + + fill_bp_slots(test, &idx, get_test_cpu(0), current, 0); + fill_bp_slots(test, &idx, -1, get_other_task(test), 0); + + TEST_EXPECT_NOSPC(register_test_bp(-1, current, idx)); + TEST_EXPECT_NOSPC(register_test_bp(-1, get_other_task(test), idx)); + TEST_EXPECT_NOSPC(register_test_bp(get_test_cpu(0), current, idx)); + TEST_EXPECT_NOSPC(register_test_bp(get_test_cpu(0), get_other_task(test), idx)); + TEST_EXPECT_NOSPC(register_test_bp(get_test_cpu(0), NULL, idx)); + /* Cannot create breakpoints on some other CPU either. */ + TEST_EXPECT_NOSPC(register_test_bp(get_test_cpu(1), NULL, idx)); +} + +static void test_task_on_all_and_one_cpu(struct kunit *test) +{ + int tsk_on_cpu_idx, cpu_idx; + int idx = 0; + + TEST_REQUIRES_BP_SLOTS(test, 3); + + fill_bp_slots(test, &idx, -1, current, 2); + /* Transitioning from only all CPU breakpoints to mixed. */ + tsk_on_cpu_idx = idx; + fill_one_bp_slot(test, &idx, get_test_cpu(0), current); + fill_one_bp_slot(test, &idx, -1, current); + + TEST_EXPECT_NOSPC(register_test_bp(-1, current, idx)); + TEST_EXPECT_NOSPC(register_test_bp(get_test_cpu(0), current, idx)); + TEST_EXPECT_NOSPC(register_test_bp(get_test_cpu(0), NULL, idx)); + + /* We should still be able to use up another CPU's slots. */ + cpu_idx = idx; + fill_one_bp_slot(test, &idx, get_test_cpu(1), NULL); + TEST_EXPECT_NOSPC(register_test_bp(get_test_cpu(1), NULL, idx)); + + /* Transitioning back to task target on all CPUs. */ + unregister_test_bp(&test_bps[tsk_on_cpu_idx]); + /* Still have a CPU target breakpoint in get_test_cpu(1). */ + TEST_EXPECT_NOSPC(register_test_bp(-1, current, idx)); + /* Remove it and try again. 
*/ + unregister_test_bp(&test_bps[cpu_idx]); + fill_one_bp_slot(test, &idx, -1, current); + + TEST_EXPECT_NOSPC(register_test_bp(-1, current, idx)); + TEST_EXPECT_NOSPC(register_test_bp(get_test_cpu(0), current, idx)); + TEST_EXPECT_NOSPC(register_test_bp(get_test_cpu(0), NULL, idx)); + TEST_EXPECT_NOSPC(register_test_bp(get_test_cpu(1), NULL, idx)); +} + +static struct kunit_case hw_breakpoint_test_cases[] = { + KUNIT_CASE(test_one_cpu), + KUNIT_CASE(test_many_cpus), + KUNIT_CASE(test_one_task_on_all_cpus), + KUNIT_CASE(test_two_tasks_on_all_cpus), + KUNIT_CASE(test_one_task_on_one_cpu), + KUNIT_CASE(test_one_task_mixed), + KUNIT_CASE(test_two_tasks_on_one_cpu), + KUNIT_CASE(test_two_tasks_on_one_all_cpus), + KUNIT_CASE(test_task_on_all_and_one_cpu), + {}, +}; + +static int test_init(struct kunit *test) +{ + /* Most test cases want 2 distinct CPUs. */ + return num_online_cpus() < 2 ? -EINVAL : 0; +} + +static void test_exit(struct kunit *test) +{ + for (int i = 0; i < MAX_TEST_BREAKPOINTS; ++i) { + if (test_bps[i]) + unregister_test_bp(&test_bps[i]); + } + + if (__other_task) { + kthread_stop(__other_task); + __other_task = NULL; + } +} + +static struct kunit_suite hw_breakpoint_test_suite = { + .name = "hw_breakpoint", + .test_cases = hw_breakpoint_test_cases, + .init = test_init, + .exit = test_exit, +}; + +kunit_test_suites(&hw_breakpoint_test_suite); + +MODULE_LICENSE("GPL"); +MODULE_AUTHOR("Marco Elver "); diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug index bcbe60d6c80c..84309a00f9aa 100644 --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug @@ -2533,6 +2533,16 @@ config STACKINIT_KUNIT_TEST CONFIG_GCC_PLUGIN_STRUCTLEAK, CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF, or CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF_ALL. +config HW_BREAKPOINT_KUNIT_TEST + bool "Test hw_breakpoint constraints accounting" if !KUNIT_ALL_TESTS + depends on HAVE_HW_BREAKPOINT + depends on KUNIT=y + default KUNIT_ALL_TESTS + help + Tests for hw_breakpoint constraints accounting. + + If unsure, say N. 
+ config TEST_UDELAY tristate "udelay test driver" help From patchwork Mon Aug 29 12:47:07 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marco Elver X-Patchwork-Id: 12957752 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 78321C0502C for ; Mon, 29 Aug 2022 12:56:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230000AbiH2M45 (ORCPT ); Mon, 29 Aug 2022 08:56:57 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49728 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230207AbiH2M4Y (ORCPT ); Mon, 29 Aug 2022 08:56:24 -0400 Received: from mail-ej1-x649.google.com (mail-ej1-x649.google.com [IPv6:2a00:1450:4864:20::649]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7E980696DC for ; Mon, 29 Aug 2022 05:48:05 -0700 (PDT) Received: by mail-ej1-x649.google.com with SMTP id xc12-20020a170907074c00b007416699ea14so1121879ejb.19 for ; Mon, 29 Aug 2022 05:48:05 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc; bh=1tERDk4Y7XiUFi7foUICKHh8rrNwWwU5I8SMJsHxX4c=; b=LRs6v78+5zV31XmPhiGFaI8d9smXBgIGVT6nVR89fc5UTZbqJpASCbp8OzqwdJo0WN X50HlNQTkBztzM8bMAfsxW3ZMbXKbNqWzzhCmWXRtGLVM510jCvBX7oLyzzVuARxpNHm 9AonX8Qd10NaBeiwtbKaFe5cBQckgAKnxEE71EwpUZFx/aNS5vw1FonUbiF01IytjObS n8Dh1GNK3dBickyio3mTRV1yiaGNRQ1VMvwcGi/v7qpfhZBgl6yuTCbsFZhh4yT5ilJo 7vS+0TPd+ThWiAMA/rfOsW7qFg2fd7fhGjlipPHc/6KEppO4C/f7SNI637VFT7wvXhPX 7UMQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc; bh=1tERDk4Y7XiUFi7foUICKHh8rrNwWwU5I8SMJsHxX4c=; b=c3FkwnTCM9lEKlu/s6FVckIg/EBEYjoE8Dl1fHR+ME295/MK8Nz+KS0NCbjmsIBOFD aqllKrgSsusPuM8xi8VjZImQ6zdEOSNTdMGz/uAfacvXx3rdbt2tfp64z3wBOiS2rfUd 8zhK0EN12rTsTktOVKidkt8UDCdCwz710Ib5uOqRmYaXAO9a1ctmaUk2y/L9O5svOO3a ZmEUXhotWrtEDHKFc3LiyaVgbyJUdADG3/vBWQSAP0WfFhj9J89M7pDyDVymxLA65o8R x79Zrunv7JPuYQ+eGdjN3fCz6SyGeaLUJxhtiURx2MUAhk8P8CiGZ4uXclsfYKD54wuY A+cQ== X-Gm-Message-State: ACgBeo0wxhwz1OvLSmxl002GusKp4My8IUZCZ9ARMFnl12u/vOHhWTL5 oxpy4UNvYlC+JWN+/HikSl6enIoOHQ== X-Google-Smtp-Source: AA6agR4xS4MVvlQxXwy93Y+GTWrZcSFKD+wMhZ6QqWvQdPyWAiU+olMFbA+DUyOWg5RwEscoM50Yu8cu3A== X-Received: from elver.muc.corp.google.com ([2a00:79e0:9c:201:196d:4fc7:fa9c:62e3]) (user=elver job=sendgmr) by 2002:a05:6402:2707:b0:448:ad8e:39c1 with SMTP id y7-20020a056402270700b00448ad8e39c1mr265466edd.315.1661777284113; Mon, 29 Aug 2022 05:48:04 -0700 (PDT) Date: Mon, 29 Aug 2022 14:47:07 +0200 In-Reply-To: <20220829124719.675715-1-elver@google.com> Mime-Version: 1.0 References: <20220829124719.675715-1-elver@google.com> X-Mailer: git-send-email 2.37.2.672.g94769d06f0-goog Message-ID: <20220829124719.675715-3-elver@google.com> Subject: [PATCH v4 02/14] perf/hw_breakpoint: Provide hw_breakpoint_is_used() and use in test From: Marco Elver To: elver@google.com, Peter Zijlstra , Frederic Weisbecker , Ingo Molnar Cc: Thomas Gleixner , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , Dmitry Vyukov , Michael Ellerman , linuxppc-dev@lists.ozlabs.org, linux-perf-users@vger.kernel.org, 
x86@kernel.org, linux-sh@vger.kernel.org, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org, Ian Rogers Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org Provide hw_breakpoint_is_used() to check if breakpoints are in use on the system. Use it in the KUnit test to verify the global state before and after a test case. Signed-off-by: Marco Elver Reviewed-by: Dmitry Vyukov Acked-by: Ian Rogers --- v3: * New patch. --- include/linux/hw_breakpoint.h | 3 +++ kernel/events/hw_breakpoint.c | 29 +++++++++++++++++++++++++++++ kernel/events/hw_breakpoint_test.c | 12 +++++++++++- 3 files changed, 43 insertions(+), 1 deletion(-) diff --git a/include/linux/hw_breakpoint.h b/include/linux/hw_breakpoint.h index 78dd7035d1e5..a3fb846705eb 100644 --- a/include/linux/hw_breakpoint.h +++ b/include/linux/hw_breakpoint.h @@ -74,6 +74,7 @@ register_wide_hw_breakpoint(struct perf_event_attr *attr, extern int register_perf_hw_breakpoint(struct perf_event *bp); extern void unregister_hw_breakpoint(struct perf_event *bp); extern void unregister_wide_hw_breakpoint(struct perf_event * __percpu *cpu_events); +extern bool hw_breakpoint_is_used(void); extern int dbg_reserve_bp_slot(struct perf_event *bp); extern int dbg_release_bp_slot(struct perf_event *bp); @@ -121,6 +122,8 @@ register_perf_hw_breakpoint(struct perf_event *bp) { return -ENOSYS; } static inline void unregister_hw_breakpoint(struct perf_event *bp) { } static inline void unregister_wide_hw_breakpoint(struct perf_event * __percpu *cpu_events) { } +static inline bool hw_breakpoint_is_used(void) { return false; } + static inline int reserve_bp_slot(struct perf_event *bp) {return -ENOSYS; } static inline void release_bp_slot(struct perf_event *bp) { } diff --git a/kernel/events/hw_breakpoint.c b/kernel/events/hw_breakpoint.c index f32320ac02fd..fd5cd1f9e7fc 100644 --- a/kernel/events/hw_breakpoint.c +++ b/kernel/events/hw_breakpoint.c @@ -604,6 +604,35 @@ void unregister_wide_hw_breakpoint(struct perf_event * __percpu *cpu_events) } EXPORT_SYMBOL_GPL(unregister_wide_hw_breakpoint); +/** + * hw_breakpoint_is_used - check if breakpoints are currently used + * + * Returns: true if breakpoints are used, false otherwise. + */ +bool hw_breakpoint_is_used(void) +{ + int cpu; + + if (!constraints_initialized) + return false; + + for_each_possible_cpu(cpu) { + for (int type = 0; type < TYPE_MAX; ++type) { + struct bp_cpuinfo *info = get_bp_info(cpu, type); + + if (info->cpu_pinned) + return true; + + for (int slot = 0; slot < nr_slots[type]; ++slot) { + if (info->tsk_pinned[slot]) + return true; + } + } + } + + return false; +} + static struct notifier_block hw_breakpoint_exceptions_nb = { .notifier_call = hw_breakpoint_exceptions_notify, /* we need to be notified first */ diff --git a/kernel/events/hw_breakpoint_test.c b/kernel/events/hw_breakpoint_test.c index 433c5c45e2a5..5ced822df788 100644 --- a/kernel/events/hw_breakpoint_test.c +++ b/kernel/events/hw_breakpoint_test.c @@ -294,7 +294,14 @@ static struct kunit_case hw_breakpoint_test_cases[] = { static int test_init(struct kunit *test) { /* Most test cases want 2 distinct CPUs. */ - return num_online_cpus() < 2 ? -EINVAL : 0; + if (num_online_cpus() < 2) + return -EINVAL; + + /* Want the system to not use breakpoints elsewhere. 
*/ + if (hw_breakpoint_is_used()) + return -EBUSY; + + return 0; } static void test_exit(struct kunit *test) @@ -308,6 +315,9 @@ static void test_exit(struct kunit *test) kthread_stop(__other_task); __other_task = NULL; } + + /* Verify that internal state agrees that no breakpoints are in use. */ + KUNIT_EXPECT_FALSE(test, hw_breakpoint_is_used()); } static struct kunit_suite hw_breakpoint_test_suite = { From patchwork Mon Aug 29 12:47:08 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marco Elver X-Patchwork-Id: 12957753 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D5101C54EE9 for ; Mon, 29 Aug 2022 12:56:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229577AbiH2M45 (ORCPT ); Mon, 29 Aug 2022 08:56:57 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49746 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229886AbiH2M4Z (ORCPT ); Mon, 29 Aug 2022 08:56:25 -0400 Received: from mail-ed1-x54a.google.com (mail-ed1-x54a.google.com [IPv6:2a00:1450:4864:20::54a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3F0A7696FC for ; Mon, 29 Aug 2022 05:48:08 -0700 (PDT) Received: by mail-ed1-x54a.google.com with SMTP id q32-20020a05640224a000b004462f105fa9so5351779eda.4 for ; Mon, 29 Aug 2022 05:48:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc; bh=Xnrbr4NWXpH3ILMfaL+Hqy6Mh/173nAe6Rh4Ql+SwwA=; b=aoWdQSkdpteyXmYAo1c6eiVE2udp1iPCkTbGD3cqc5tyTEc5w9TDE5eQ2pH/aC5ONx wvITjNHuqGva9IIaS4Jy/kRGfM87daq+ZT24dKaI5knatHyVNPce7SKxJrvTIxK+jpRV T+oUK6JwmqGiqfWOw+qZfCjfKSncTUyCL/rTKrD/R/uwr1cZQDo2BiYCLIVtBSK8KpsZ CLE54Tvl9hptOey0GrB4CQxeXWFE0oHlZwm2u/7tiypSnfTXu/PJqf5/FBc4ceMvlGJj bS8z75PhngVBIBAZf6h+e4wkxi/WQq7RrV8yo3ue7h3N/zwN5y8SZVpJk5+wDM79X8yf 2pgw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc; bh=Xnrbr4NWXpH3ILMfaL+Hqy6Mh/173nAe6Rh4Ql+SwwA=; b=YSaQm167UVP+aH6HWP5XK3ZFfYxbxTKxImloB8Qum54ZXDcdRG3TE0nv+N+28hKSVB WmQ/G75ojlxfikNUgML5Aomdc87eu8OjwW0CQL56IUMdaIBUhbxl45oGKnLbIv4vElWP fSPLXmS+qGI6+e6H5k4+1UGjfpUZQZ9vU8RQJV7TEYUNBybB0KQvhFSSSymEBw0VgQm4 ncSYu61Csg3gCyKaR4Yd5ZMJfqq3Sf6HgWwS81cd/fVSrSDm9gkvq/UEcVkbMIpDRPD4 U8suQTe85Mpo9sUmqZBnHa0vDMITnrIfQARhTI1U/iTWOIZ2R7YpDAJDPL2w3KrIAgLX 5p3A== X-Gm-Message-State: ACgBeo30LsuHQ46ABvEpHP0EhEHiYAtoxyy3KRxREHBD9i3Iy7abx+ZI 7bonomp1UKLGaw24OdBzZKwkIy8jzg== X-Google-Smtp-Source: AA6agR42dMVaiS+cJuLOtzyugHi3iaDb9VrUvxxRA/60qGgLveidvAmI7LaoSF8LrpwtVNgvbP26lurGuA== X-Received: from elver.muc.corp.google.com ([2a00:79e0:9c:201:196d:4fc7:fa9c:62e3]) (user=elver job=sendgmr) by 2002:aa7:c946:0:b0:43d:3038:1381 with SMTP id h6-20020aa7c946000000b0043d30381381mr16380942edt.354.1661777286713; Mon, 29 Aug 2022 05:48:06 -0700 (PDT) Date: Mon, 29 Aug 2022 14:47:08 +0200 In-Reply-To: <20220829124719.675715-1-elver@google.com> Mime-Version: 1.0 References: <20220829124719.675715-1-elver@google.com> X-Mailer: git-send-email 2.37.2.672.g94769d06f0-goog Message-ID: <20220829124719.675715-4-elver@google.com> Subject: [PATCH v4 03/14] perf/hw_breakpoint: Clean up 
headers From: Marco Elver To: elver@google.com, Peter Zijlstra , Frederic Weisbecker , Ingo Molnar Cc: Thomas Gleixner , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , Dmitry Vyukov , Michael Ellerman , linuxppc-dev@lists.ozlabs.org, linux-perf-users@vger.kernel.org, x86@kernel.org, linux-sh@vger.kernel.org, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org, Ian Rogers Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org Clean up headers: - Remove unused - Remove unused - Remove unused - Remove unused - Add for EXPORT_SYMBOL_GPL(). - Add for mutex. - Sort alphabetically. - Move to top to test it compiles on its own. Signed-off-by: Marco Elver Acked-by: Dmitry Vyukov Acked-by: Ian Rogers --- v2: * Move to start of series. --- kernel/events/hw_breakpoint.c | 19 +++++++++---------- 1 file changed, 9 insertions(+), 10 deletions(-) diff --git a/kernel/events/hw_breakpoint.c b/kernel/events/hw_breakpoint.c index fd5cd1f9e7fc..6076c6346291 100644 --- a/kernel/events/hw_breakpoint.c +++ b/kernel/events/hw_breakpoint.c @@ -17,23 +17,22 @@ * This file contains the arch-independent routines. */ +#include + +#include +#include +#include +#include #include -#include -#include -#include #include #include -#include +#include +#include +#include #include #include -#include #include -#include -#include -#include -#include -#include /* * Constraints data */ From patchwork Mon Aug 29 12:47:09 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marco Elver X-Patchwork-Id: 12957754 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4A145ECAAD5 for ; Mon, 29 Aug 2022 12:57:01 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230130AbiH2M47 (ORCPT ); Mon, 29 Aug 2022 08:56:59 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49732 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230226AbiH2M41 (ORCPT ); Mon, 29 Aug 2022 08:56:27 -0400 Received: from mail-ed1-x549.google.com (mail-ed1-x549.google.com [IPv6:2a00:1450:4864:20::549]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D916A76472 for ; Mon, 29 Aug 2022 05:48:09 -0700 (PDT) Received: by mail-ed1-x549.google.com with SMTP id m15-20020a056402430f00b00448af09b674so47989edc.13 for ; Mon, 29 Aug 2022 05:48:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc; bh=Qud8pWzf3g4FVwsYBG/lvnDK9jpEJXDArWePZHukaec=; b=QxY/7D4H+6Wm8ODbCoYIX4jcTSNanjtdoc1j3Iz7xu5mMl6TUJICrlbXaUNTWvcEjy 4c4cUTvwCtdAs5ab7bucTDOk1tTEHaWIDXORtNXuC3re0PrHVUefmShgyFj5B7b5OgZZ t4Yu15YoJLhe0U4wAbtL+f1gdUTACvONv/nwoJ486+p9lcXDNJ/FXaDMKxXkqZ/DcNnQ bVit8hcj5YbEy8u3iVI/NE3APWnlTlxjyJmpCk3g5re4516DamYP1dTQXUKRZojE3wJk ujWYpvxhsiZO2EWxUD8bVayH6zQpzOCVZkKsYEHOuPbJxM2l99CpGmiBI40NQ85R4yS8 pTRQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc; bh=Qud8pWzf3g4FVwsYBG/lvnDK9jpEJXDArWePZHukaec=; b=bC5LqybyvJMmAyP9Gy3XP/gi/tmlq+ZgP34w8T9Wp3A9IhZZHchHMxI25UbQPq4iJy rgBLuGugslhC1CrHE2yPct3ss1rDL7LK4hh//SQB61hjGF9US2ifAWbcPvMOeje0vopi 
qyaza43ZnogZRCjAGqfluQpBUU8PZSsKSS211RTzOWyIVDl+Xafn5MQRFQmXijdsB1Vx 3ke3dj1544g2/JgPBWGpX7FIcOhGXDBRIgonuUpSqqhZwxx0WaRbta2mrJloxaDIeC1h 1NxGt6Tg+iGSWD2Ic93IYGgQDG7YF6XHlQks+1ASYr4OWlFZBMS3SzQwKBmMZpCcNs1X QUHA== X-Gm-Message-State: ACgBeo0QAjnlWEBi5Y13EvY/E2mYYjK7FSSGZ5Hf+Zb7z2Hav13FE1bz MKnvgLITP6wW4p7rJ7CxRN1tuqYA2g== X-Google-Smtp-Source: AA6agR5PUlzv5XQworgNqpqqg/1h1X/W0LAjuxF8N7CtRrxKcV4spvajGtYVvIbKkvOHFON/rN0rja2wcQ== X-Received: from elver.muc.corp.google.com ([2a00:79e0:9c:201:196d:4fc7:fa9c:62e3]) (user=elver job=sendgmr) by 2002:a17:906:cc13:b0:73d:d22d:63cd with SMTP id ml19-20020a170906cc1300b0073dd22d63cdmr13213673ejb.741.1661777289495; Mon, 29 Aug 2022 05:48:09 -0700 (PDT) Date: Mon, 29 Aug 2022 14:47:09 +0200 In-Reply-To: <20220829124719.675715-1-elver@google.com> Mime-Version: 1.0 References: <20220829124719.675715-1-elver@google.com> X-Mailer: git-send-email 2.37.2.672.g94769d06f0-goog Message-ID: <20220829124719.675715-5-elver@google.com> Subject: [PATCH v4 04/14] perf/hw_breakpoint: Optimize list of per-task breakpoints From: Marco Elver To: elver@google.com, Peter Zijlstra , Frederic Weisbecker , Ingo Molnar Cc: Thomas Gleixner , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , Dmitry Vyukov , Michael Ellerman , linuxppc-dev@lists.ozlabs.org, linux-perf-users@vger.kernel.org, x86@kernel.org, linux-sh@vger.kernel.org, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org, Ian Rogers Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org On a machine with 256 CPUs, running the recently added perf breakpoint benchmark results in: | $> perf bench -r 30 breakpoint thread -b 4 -p 64 -t 64 | # Running 'breakpoint/thread' benchmark: | # Created/joined 30 threads with 4 breakpoints and 64 parallelism | Total time: 236.418 [sec] | | 123134.794271 usecs/op | 7880626.833333 usecs/op/cpu The benchmark tests inherited breakpoint perf events across many threads. Looking at a perf profile, we can see that the majority of the time is spent in various hw_breakpoint.c functions, which execute within the 'nr_bp_mutex' critical sections which then results in contention on that mutex as well: 37.27% [kernel] [k] osq_lock 34.92% [kernel] [k] mutex_spin_on_owner 12.15% [kernel] [k] toggle_bp_slot 11.90% [kernel] [k] __reserve_bp_slot The culprit here is task_bp_pinned(), which has a runtime complexity of O(#tasks) due to storing all task breakpoints in the same list and iterating through that list looking for a matching task. Clearly, this does not scale to thousands of tasks. Instead, make use of the "rhashtable" variant "rhltable" which stores multiple items with the same key in a list. This results in average runtime complexity of O(1) for task_bp_pinned(). With the optimization, the benchmark shows: | $> perf bench -r 30 breakpoint thread -b 4 -p 64 -t 64 | # Running 'breakpoint/thread' benchmark: | # Created/joined 30 threads with 4 breakpoints and 64 parallelism | Total time: 0.208 [sec] | | 108.422396 usecs/op | 6939.033333 usecs/op/cpu On this particular setup that's a speedup of ~1135x. While one option would be to make task_struct a breakpoint list node, this would only further bloat task_struct for infrequently used data. Furthermore, after all optimizations in this series, there's no evidence it would result in better performance: later optimizations make the time spent looking up entries in the hash table negligible (we'll reach the theoretical ideal performance i.e. no constraints). 
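For readers unfamiliar with rhltable, here is a minimal sketch of the lookup pattern adopted below. The struct task_bp type, its fields, and task_bp_weight() are hypothetical illustrations only; the real change keys the table off hw_perf_event::target, as shown in the diff that follows:

  #include <linux/rhashtable.h>
  #include <linux/rcupdate.h>
  #include <linux/sched.h>
  #include <linux/stddef.h>

  /* Hypothetical entry type: many entries may share the same 'target' key. */
  struct task_bp {
          struct task_struct *target;   /* key: the owning task */
          struct rhlist_head node;      /* links entries sharing the same key */
          int weight;
  };

  static struct rhltable task_bp_ht;    /* rhltable_init(&task_bp_ht, &task_bp_ht_params) once at boot */
  static const struct rhashtable_params task_bp_ht_params = {
          .head_offset         = offsetof(struct task_bp, node),
          .key_offset          = offsetof(struct task_bp, target),
          .key_len             = sizeof_field(struct task_bp, target),
          .automatic_shrinking = true,
  };

  /* Average O(1): only entries hashing to @tsk's bucket list are visited. */
  static int task_bp_weight(struct task_struct *tsk)
  {
          struct rhlist_head *head, *pos;
          struct task_bp *bp;
          int sum = 0;

          rcu_read_lock();
          head = rhltable_lookup(&task_bp_ht, &tsk, task_bp_ht_params);
          if (head) {
                  rhl_for_each_entry_rcu(bp, pos, head, node)
                          sum += bp->weight;
          }
          rcu_read_unlock();
          return sum;
  }

Entries are added and removed with rhltable_insert()/rhltable_remove() on the embedded rhlist_head, whose return value toggle_bp_slot() now propagates in the patch below.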
Signed-off-by: Marco Elver Reviewed-by: Dmitry Vyukov Acked-by: Ian Rogers --- v2: * Commit message tweaks. --- include/linux/perf_event.h | 3 +- kernel/events/hw_breakpoint.c | 56 ++++++++++++++++++++++------------- 2 files changed, 37 insertions(+), 22 deletions(-) diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h index ee8b9ecdc03b..a784e055002e 100644 --- a/include/linux/perf_event.h +++ b/include/linux/perf_event.h @@ -36,6 +36,7 @@ struct perf_guest_info_callbacks { }; #ifdef CONFIG_HAVE_HW_BREAKPOINT +#include #include #endif @@ -178,7 +179,7 @@ struct hw_perf_event { * creation and event initalization. */ struct arch_hw_breakpoint info; - struct list_head bp_list; + struct rhlist_head bp_list; }; #endif struct { /* amd_iommu */ diff --git a/kernel/events/hw_breakpoint.c b/kernel/events/hw_breakpoint.c index 6076c6346291..6d09edc80d19 100644 --- a/kernel/events/hw_breakpoint.c +++ b/kernel/events/hw_breakpoint.c @@ -26,10 +26,10 @@ #include #include #include -#include #include #include #include +#include #include #include @@ -54,7 +54,13 @@ static struct bp_cpuinfo *get_bp_info(int cpu, enum bp_type_idx type) } /* Keep track of the breakpoints attached to tasks */ -static LIST_HEAD(bp_task_head); +static struct rhltable task_bps_ht; +static const struct rhashtable_params task_bps_ht_params = { + .head_offset = offsetof(struct hw_perf_event, bp_list), + .key_offset = offsetof(struct hw_perf_event, target), + .key_len = sizeof_field(struct hw_perf_event, target), + .automatic_shrinking = true, +}; static int constraints_initialized; @@ -103,17 +109,23 @@ static unsigned int max_task_bp_pinned(int cpu, enum bp_type_idx type) */ static int task_bp_pinned(int cpu, struct perf_event *bp, enum bp_type_idx type) { - struct task_struct *tsk = bp->hw.target; + struct rhlist_head *head, *pos; struct perf_event *iter; int count = 0; - list_for_each_entry(iter, &bp_task_head, hw.bp_list) { - if (iter->hw.target == tsk && - find_slot_idx(iter->attr.bp_type) == type && + rcu_read_lock(); + head = rhltable_lookup(&task_bps_ht, &bp->hw.target, task_bps_ht_params); + if (!head) + goto out; + + rhl_for_each_entry_rcu(iter, pos, head, hw.bp_list) { + if (find_slot_idx(iter->attr.bp_type) == type && (iter->cpu < 0 || cpu == iter->cpu)) count += hw_breakpoint_weight(iter); } +out: + rcu_read_unlock(); return count; } @@ -186,7 +198,7 @@ static void toggle_bp_task_slot(struct perf_event *bp, int cpu, /* * Add/remove the given breakpoint in our constraint table */ -static void +static int toggle_bp_slot(struct perf_event *bp, bool enable, enum bp_type_idx type, int weight) { @@ -199,7 +211,7 @@ toggle_bp_slot(struct perf_event *bp, bool enable, enum bp_type_idx type, /* Pinned counter cpu profiling */ if (!bp->hw.target) { get_bp_info(bp->cpu, type)->cpu_pinned += weight; - return; + return 0; } /* Pinned counter task profiling */ @@ -207,9 +219,9 @@ toggle_bp_slot(struct perf_event *bp, bool enable, enum bp_type_idx type, toggle_bp_task_slot(bp, cpu, type, weight); if (enable) - list_add_tail(&bp->hw.bp_list, &bp_task_head); + return rhltable_insert(&task_bps_ht, &bp->hw.bp_list, task_bps_ht_params); else - list_del(&bp->hw.bp_list); + return rhltable_remove(&task_bps_ht, &bp->hw.bp_list, task_bps_ht_params); } __weak int arch_reserve_bp_slot(struct perf_event *bp) @@ -307,9 +319,7 @@ static int __reserve_bp_slot(struct perf_event *bp, u64 bp_type) if (ret) return ret; - toggle_bp_slot(bp, true, type, weight); - - return 0; + return toggle_bp_slot(bp, true, type, weight); } int 
reserve_bp_slot(struct perf_event *bp) @@ -334,7 +344,7 @@ static void __release_bp_slot(struct perf_event *bp, u64 bp_type) type = find_slot_idx(bp_type); weight = hw_breakpoint_weight(bp); - toggle_bp_slot(bp, false, type, weight); + WARN_ON(toggle_bp_slot(bp, false, type, weight)); } void release_bp_slot(struct perf_event *bp) @@ -707,7 +717,7 @@ static struct pmu perf_breakpoint = { int __init init_hw_breakpoint(void) { int cpu, err_cpu; - int i; + int i, ret; for (i = 0; i < TYPE_MAX; i++) nr_slots[i] = hw_breakpoint_slots(i); @@ -718,18 +728,24 @@ int __init init_hw_breakpoint(void) info->tsk_pinned = kcalloc(nr_slots[i], sizeof(int), GFP_KERNEL); - if (!info->tsk_pinned) - goto err_alloc; + if (!info->tsk_pinned) { + ret = -ENOMEM; + goto err; + } } } + ret = rhltable_init(&task_bps_ht, &task_bps_ht_params); + if (ret) + goto err; + constraints_initialized = 1; perf_pmu_register(&perf_breakpoint, "breakpoint", PERF_TYPE_BREAKPOINT); return register_die_notifier(&hw_breakpoint_exceptions_nb); - err_alloc: +err: for_each_possible_cpu(err_cpu) { for (i = 0; i < TYPE_MAX; i++) kfree(get_bp_info(err_cpu, i)->tsk_pinned); @@ -737,7 +753,5 @@ int __init init_hw_breakpoint(void) break; } - return -ENOMEM; + return ret; } - - From patchwork Mon Aug 29 12:47:10 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marco Elver X-Patchwork-Id: 12957755 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A365EECAAD2 for ; Mon, 29 Aug 2022 12:57:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230449AbiH2M5R (ORCPT ); Mon, 29 Aug 2022 08:57:17 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55266 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230255AbiH2M43 (ORCPT ); Mon, 29 Aug 2022 08:56:29 -0400 Received: from mail-ed1-x54a.google.com (mail-ed1-x54a.google.com [IPv6:2a00:1450:4864:20::54a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8A8057CB52 for ; Mon, 29 Aug 2022 05:48:13 -0700 (PDT) Received: by mail-ed1-x54a.google.com with SMTP id z20-20020a05640235d400b0043e1e74a495so5437742edc.11 for ; Mon, 29 Aug 2022 05:48:13 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc; bh=xCb3PCmBWSP36RKUEWOlpZN7hfh85Z1tvKAaYcX0zpk=; b=bkzY5qKI18VWd4mlwWM1vsviFVP4q78HrnTxrdtR5mo9H6pfnRG17R5qugpXfL7a3C wCeY287TAp6qkMpaEvrmcs8dRYfSWwV8oWll0gU0C2+YoURGAMIxPu5xp4rjPy+bUaaA Pe7ln30jJ/atykyMBocwI9/fKS382484rx9GtCgqO1avY7pHIQ5SHOMiGbCVqPIvYzgc vum+2/Q5MVS6eBFJN4+hVCuVqfedsJUjTaeXPMQ2Z03LEk0tk/b+PnLwtSM5gTRlKVFJ 4wj5vAGL/ISBuJFFNcV6QXU/h0GHjtdWdo2nTY8203whs27GL67Jq5Zsc5AZdNjCjL6M R2zg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc; bh=xCb3PCmBWSP36RKUEWOlpZN7hfh85Z1tvKAaYcX0zpk=; b=Ewbd7rfn6ZyxPQz60uf52qv057UQiiFXfYWl0/6+CGkRaTBE1Xy6Zq+a5CXfMJVLu0 ED6ButrqUMTljjAV7xBzl06+0WJ3PhmexHmYKQ6JlWKhNTZLbYxVOsuSUoOEbZbrvibD O8+cOhYpgVpM4O+MagOEdfmsFUa9e8h7HbFxPpjBNWTxj/hAv5Bjj5royPhwIzx28xgl duBi58SnaOz7hFn/HVt/e2P0Mw4ZDknKI62P+LO+jhe5M7TlIRPvzolJ+x7/lDTh9tUs 
EP3WvEwLGFAMQbSBbAb6vL0R0ExjuLPEIbHqJKQoAmHGaZSiLm9OsCpzftaJGkMD0SnF 64mQ== X-Gm-Message-State: ACgBeo3ebDGp62L80lBtghvuk5YN+yKLNn7vXtyy9UVCexhcJadriXzf 0ATpjwL1z5YKIwHvqPmhG/vhLq1muQ== X-Google-Smtp-Source: AA6agR7i/RJsfaE6+mHqdwn3xeEjB0tlbiv/szuLIYXZ6XN1jxpVHA+uSIe33NebngMb8iQE2cPcnN59JA== X-Received: from elver.muc.corp.google.com ([2a00:79e0:9c:201:196d:4fc7:fa9c:62e3]) (user=elver job=sendgmr) by 2002:a17:906:ef90:b0:730:9d18:17b3 with SMTP id ze16-20020a170906ef9000b007309d1817b3mr13769351ejb.141.1661777292013; Mon, 29 Aug 2022 05:48:12 -0700 (PDT) Date: Mon, 29 Aug 2022 14:47:10 +0200 In-Reply-To: <20220829124719.675715-1-elver@google.com> Mime-Version: 1.0 References: <20220829124719.675715-1-elver@google.com> X-Mailer: git-send-email 2.37.2.672.g94769d06f0-goog Message-ID: <20220829124719.675715-6-elver@google.com> Subject: [PATCH v4 05/14] perf/hw_breakpoint: Mark data __ro_after_init From: Marco Elver To: elver@google.com, Peter Zijlstra , Frederic Weisbecker , Ingo Molnar Cc: Thomas Gleixner , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , Dmitry Vyukov , Michael Ellerman , linuxppc-dev@lists.ozlabs.org, linux-perf-users@vger.kernel.org, x86@kernel.org, linux-sh@vger.kernel.org, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org, Ian Rogers Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org Mark read-only data after initialization as __ro_after_init. While we are here, turn 'constraints_initialized' into a bool. Signed-off-by: Marco Elver Reviewed-by: Dmitry Vyukov Acked-by: Ian Rogers --- kernel/events/hw_breakpoint.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/kernel/events/hw_breakpoint.c b/kernel/events/hw_breakpoint.c index 6d09edc80d19..7df46b276452 100644 --- a/kernel/events/hw_breakpoint.c +++ b/kernel/events/hw_breakpoint.c @@ -46,7 +46,7 @@ struct bp_cpuinfo { }; static DEFINE_PER_CPU(struct bp_cpuinfo, bp_cpuinfo[TYPE_MAX]); -static int nr_slots[TYPE_MAX]; +static int nr_slots[TYPE_MAX] __ro_after_init; static struct bp_cpuinfo *get_bp_info(int cpu, enum bp_type_idx type) { @@ -62,7 +62,7 @@ static const struct rhashtable_params task_bps_ht_params = { .automatic_shrinking = true, }; -static int constraints_initialized; +static bool constraints_initialized __ro_after_init; /* Gather the number of total pinned and un-pinned bp in a cpuset */ struct bp_busy_slots { @@ -739,7 +739,7 @@ int __init init_hw_breakpoint(void) if (ret) goto err; - constraints_initialized = 1; + constraints_initialized = true; perf_pmu_register(&perf_breakpoint, "breakpoint", PERF_TYPE_BREAKPOINT); From patchwork Mon Aug 29 12:47:11 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marco Elver X-Patchwork-Id: 12957757 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 659C7ECAAD4 for ; Mon, 29 Aug 2022 12:57:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230353AbiH2M5T (ORCPT ); Mon, 29 Aug 2022 08:57:19 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55312 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230264AbiH2M4b (ORCPT ); Mon, 29 Aug 2022 08:56:31 -0400 Received: from mail-lf1-x14a.google.com (mail-lf1-x14a.google.com [IPv6:2a00:1450:4864:20::14a]) by 
lindbergh.monkeyblade.net (Postfix) with ESMTPS id 82A1E8048F for ; Mon, 29 Aug 2022 05:48:16 -0700 (PDT) Received: by mail-lf1-x14a.google.com with SMTP id p36-20020a05651213a400b004779d806c13so2013507lfa.10 for ; Mon, 29 Aug 2022 05:48:16 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc; bh=NAgg+axkJ/HbuBi/N/cfy43ZzHoR1EkHXaJqULg0eJ8=; b=d6tp/6qX5TL3XVtPCbJz2/1vuIAcUvKthwO40GKuZhgzsTTDRuiOkz3BZcixKBbi2n UQYkdthiBMcKQ9ZXpIqi/To5OOYxKb/S4Z/1EGZsN0+gq/poq9hq9KqthbUQVCglAJX8 xlmoGDW15PsCUmXFgve7TsBen3OR8air0io8XYZ1XE6ulaap/zwXhenfCdVxUVBbmA2d AvKHvHaeNzWbyizTuyopXBWL0dm+dKBB/SYfST65x9lohRyv8I2PmWRhey6EebuOhlT7 K7hjRncGEscd9yk4szPmUVZk0eVLuEF72owL1CGws1KtJ8MO0hkKYiib6fb6Z7IKwn/6 JMVQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc; bh=NAgg+axkJ/HbuBi/N/cfy43ZzHoR1EkHXaJqULg0eJ8=; b=c0sXon4arrT3AAx6nXQFrUkGGC8B/FHXjYFWCbU1dgHSnm8h9dn/KnTSuS4Wbk6slH nrqJhstBJc5rhldDJweNMl7djk3P9EnSQYO78QqNnVytjxtmlGXMM4T4p+ay68UG6UKf ZqW580BiOWqgKy6f4riDdq+Yu4Uul9j9fBHupYJQFL1F0iBqsm5hSYqZfj1iEYksW4Ti 4vUx9j3+VDc9l8yMi8+EjpOZ1iQmY3VJkaoUvb+I8F7/s6uDIEtJx36Xdm9VLuophm8n xx7t+v2s4TuiuJoVQ7WA1Z/P5IIc4wh7yBhziE0Mo7vs306OPgV1yivcDfmQyxQRNcHJ N3AQ== X-Gm-Message-State: ACgBeo3EQk2ozI/JDWsfm7dTMz9wzqlVVSEKSVFc4WvwoBwcpCkSx9k7 3ACfElNoYDygdisq+ueFQhWHCOIkuQ== X-Google-Smtp-Source: AA6agR7ShzfoSB2vjBsmBIrxfMh2D8H7+VTFm9wSqU5SVOMol70ihnkQBQYE6o2VylcaG46rPGK6/VRekQ== X-Received: from elver.muc.corp.google.com ([2a00:79e0:9c:201:196d:4fc7:fa9c:62e3]) (user=elver job=sendgmr) by 2002:a05:6512:3b10:b0:494:6105:1f62 with SMTP id f16-20020a0565123b1000b0049461051f62mr3032181lfv.172.1661777294789; Mon, 29 Aug 2022 05:48:14 -0700 (PDT) Date: Mon, 29 Aug 2022 14:47:11 +0200 In-Reply-To: <20220829124719.675715-1-elver@google.com> Mime-Version: 1.0 References: <20220829124719.675715-1-elver@google.com> X-Mailer: git-send-email 2.37.2.672.g94769d06f0-goog Message-ID: <20220829124719.675715-7-elver@google.com> Subject: [PATCH v4 06/14] perf/hw_breakpoint: Optimize constant number of breakpoint slots From: Marco Elver To: elver@google.com, Peter Zijlstra , Frederic Weisbecker , Ingo Molnar Cc: Thomas Gleixner , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , Dmitry Vyukov , Michael Ellerman , linuxppc-dev@lists.ozlabs.org, linux-perf-users@vger.kernel.org, x86@kernel.org, linux-sh@vger.kernel.org, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org, Ian Rogers Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org Optimize internal hw_breakpoint state if the architecture's number of breakpoint slots is constant. This avoids several kmalloc() calls and potentially unnecessary failures if the allocations fail, as well as subtly improves code generation and cache locality. The protocol is that if an architecture defines hw_breakpoint_slots via the preprocessor, it must be constant and the same for all types. 
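Condensed from the diff below, the two sides of that protocol look roughly like this (the helper names and TYPE_INST/TYPE_DATA are the ones used by the generic code):

  /* <asm/hw_breakpoint.h> on sh/x86: constant slot count, same for all types. */
  #define hw_breakpoint_slots(type)  (HBP_NUM)

  /* kernel/events/hw_breakpoint.c: */
  #ifdef hw_breakpoint_slots
  static_assert(hw_breakpoint_slots(TYPE_INST) == hw_breakpoint_slots(TYPE_DATA));
  static inline int hw_breakpoint_slots_cached(int type)
  {
          return hw_breakpoint_slots(type);       /* compile-time constant */
  }
  static inline int init_breakpoint_slots(void) { return 0; }  /* nothing to allocate */
  #else
  /* Dynamic slot count: read once at boot; tsk_pinned arrays are kcalloc()'d. */
  static int __nr_bp_slots[TYPE_MAX] __ro_after_init;
  static inline int hw_breakpoint_slots_cached(int type)
  {
          return __nr_bp_slots[type];
  }
  #endif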
Signed-off-by: Marco Elver Acked-by: Dmitry Vyukov Acked-by: Ian Rogers --- arch/sh/include/asm/hw_breakpoint.h | 5 +- arch/x86/include/asm/hw_breakpoint.h | 5 +- kernel/events/hw_breakpoint.c | 94 ++++++++++++++++++---------- 3 files changed, 63 insertions(+), 41 deletions(-) diff --git a/arch/sh/include/asm/hw_breakpoint.h b/arch/sh/include/asm/hw_breakpoint.h index 199d17b765f2..361a0f57bdeb 100644 --- a/arch/sh/include/asm/hw_breakpoint.h +++ b/arch/sh/include/asm/hw_breakpoint.h @@ -48,10 +48,7 @@ struct pmu; /* Maximum number of UBC channels */ #define HBP_NUM 2 -static inline int hw_breakpoint_slots(int type) -{ - return HBP_NUM; -} +#define hw_breakpoint_slots(type) (HBP_NUM) /* arch/sh/kernel/hw_breakpoint.c */ extern int arch_check_bp_in_kernelspace(struct arch_hw_breakpoint *hw); diff --git a/arch/x86/include/asm/hw_breakpoint.h b/arch/x86/include/asm/hw_breakpoint.h index a1f0e90d0818..0bc931cd0698 100644 --- a/arch/x86/include/asm/hw_breakpoint.h +++ b/arch/x86/include/asm/hw_breakpoint.h @@ -44,10 +44,7 @@ struct arch_hw_breakpoint { /* Total number of available HW breakpoint registers */ #define HBP_NUM 4 -static inline int hw_breakpoint_slots(int type) -{ - return HBP_NUM; -} +#define hw_breakpoint_slots(type) (HBP_NUM) struct perf_event_attr; struct perf_event; diff --git a/kernel/events/hw_breakpoint.c b/kernel/events/hw_breakpoint.c index 7df46b276452..9fb66d358d81 100644 --- a/kernel/events/hw_breakpoint.c +++ b/kernel/events/hw_breakpoint.c @@ -40,13 +40,16 @@ struct bp_cpuinfo { /* Number of pinned cpu breakpoints in a cpu */ unsigned int cpu_pinned; /* tsk_pinned[n] is the number of tasks having n+1 breakpoints */ +#ifdef hw_breakpoint_slots + unsigned int tsk_pinned[hw_breakpoint_slots(0)]; +#else unsigned int *tsk_pinned; +#endif /* Number of non-pinned cpu/task breakpoints in a cpu */ unsigned int flexible; /* XXX: placeholder, see fetch_this_slot() */ }; static DEFINE_PER_CPU(struct bp_cpuinfo, bp_cpuinfo[TYPE_MAX]); -static int nr_slots[TYPE_MAX] __ro_after_init; static struct bp_cpuinfo *get_bp_info(int cpu, enum bp_type_idx type) { @@ -73,6 +76,54 @@ struct bp_busy_slots { /* Serialize accesses to the above constraints */ static DEFINE_MUTEX(nr_bp_mutex); +#ifdef hw_breakpoint_slots +/* + * Number of breakpoint slots is constant, and the same for all types. + */ +static_assert(hw_breakpoint_slots(TYPE_INST) == hw_breakpoint_slots(TYPE_DATA)); +static inline int hw_breakpoint_slots_cached(int type) { return hw_breakpoint_slots(type); } +static inline int init_breakpoint_slots(void) { return 0; } +#else +/* + * Dynamic number of breakpoint slots. 
+ */ +static int __nr_bp_slots[TYPE_MAX] __ro_after_init; + +static inline int hw_breakpoint_slots_cached(int type) +{ + return __nr_bp_slots[type]; +} + +static __init int init_breakpoint_slots(void) +{ + int i, cpu, err_cpu; + + for (i = 0; i < TYPE_MAX; i++) + __nr_bp_slots[i] = hw_breakpoint_slots(i); + + for_each_possible_cpu(cpu) { + for (i = 0; i < TYPE_MAX; i++) { + struct bp_cpuinfo *info = get_bp_info(cpu, i); + + info->tsk_pinned = kcalloc(__nr_bp_slots[i], sizeof(int), GFP_KERNEL); + if (!info->tsk_pinned) + goto err; + } + } + + return 0; +err: + for_each_possible_cpu(err_cpu) { + for (i = 0; i < TYPE_MAX; i++) + kfree(get_bp_info(err_cpu, i)->tsk_pinned); + if (err_cpu == cpu) + break; + } + + return -ENOMEM; +} +#endif + __weak int hw_breakpoint_weight(struct perf_event *bp) { return 1; @@ -95,7 +146,7 @@ static unsigned int max_task_bp_pinned(int cpu, enum bp_type_idx type) unsigned int *tsk_pinned = get_bp_info(cpu, type)->tsk_pinned; int i; - for (i = nr_slots[type] - 1; i >= 0; i--) { + for (i = hw_breakpoint_slots_cached(type) - 1; i >= 0; i--) { if (tsk_pinned[i] > 0) return i + 1; } @@ -312,7 +363,7 @@ static int __reserve_bp_slot(struct perf_event *bp, u64 bp_type) fetch_this_slot(&slots, weight); /* Flexible counters need to keep at least one slot */ - if (slots.pinned + (!!slots.flexible) > nr_slots[type]) + if (slots.pinned + (!!slots.flexible) > hw_breakpoint_slots_cached(type)) return -ENOSPC; ret = arch_reserve_bp_slot(bp); @@ -632,7 +683,7 @@ bool hw_breakpoint_is_used(void) if (info->cpu_pinned) return true; - for (int slot = 0; slot < nr_slots[type]; ++slot) { + for (int slot = 0; slot < hw_breakpoint_slots_cached(type); ++slot) { if (info->tsk_pinned[slot]) return true; } @@ -716,42 +767,19 @@ static struct pmu perf_breakpoint = { int __init init_hw_breakpoint(void) { - int cpu, err_cpu; - int i, ret; - - for (i = 0; i < TYPE_MAX; i++) - nr_slots[i] = hw_breakpoint_slots(i); - - for_each_possible_cpu(cpu) { - for (i = 0; i < TYPE_MAX; i++) { - struct bp_cpuinfo *info = get_bp_info(cpu, i); - - info->tsk_pinned = kcalloc(nr_slots[i], sizeof(int), - GFP_KERNEL); - if (!info->tsk_pinned) { - ret = -ENOMEM; - goto err; - } - } - } + int ret; ret = rhltable_init(&task_bps_ht, &task_bps_ht_params); if (ret) - goto err; + return ret; + + ret = init_breakpoint_slots(); + if (ret) + return ret; constraints_initialized = true; perf_pmu_register(&perf_breakpoint, "breakpoint", PERF_TYPE_BREAKPOINT); return register_die_notifier(&hw_breakpoint_exceptions_nb); - -err: - for_each_possible_cpu(err_cpu) { - for (i = 0; i < TYPE_MAX; i++) - kfree(get_bp_info(err_cpu, i)->tsk_pinned); - if (err_cpu == cpu) - break; - } - - return ret; } From patchwork Mon Aug 29 12:47:12 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marco Elver X-Patchwork-Id: 12957756 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D8375ECAAD5 for ; Mon, 29 Aug 2022 12:57:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230255AbiH2M5S (ORCPT ); Mon, 29 Aug 2022 08:57:18 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49870 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230266AbiH2M4b (ORCPT ); Mon, 29 Aug 2022 08:56:31 -0400 Received: from 
mail-ej1-x64a.google.com (mail-ej1-x64a.google.com [IPv6:2a00:1450:4864:20::64a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 18D4D5FEA for ; Mon, 29 Aug 2022 05:48:19 -0700 (PDT) Received: by mail-ej1-x64a.google.com with SMTP id qb39-20020a1709077ea700b0073ddc845586so2243000ejc.2 for ; Mon, 29 Aug 2022 05:48:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc; bh=T2C4sxYRs2lEzCnHUaLRTNKBKI12YQqfPe+dPUrodjM=; b=Oei65SiBeyseuj2GkCMZu6UkQDXfMukCs8q/ts7docSju1x50Wnuyo56PNgsA6HyKh QIN3iALVpE4KtqSJCJw7xhEOCvdF6Nj4aaQRdldUGReSHeSUPlQtPNrfZMshM9WmIh7D F66kQer8IfUfc0aZZBJKqF2p8mQNUtHZduPcpyzmcjmfTP8IAtSfvKheWIe8pbsuF80V n6JT0dTHHJGAeqPEaae41JTQVmfHG3xrkJcxGeiFu5iSEmPx+AxR8pfqXZMWkWBblTqj qwkITtO1N1SMp71IBMV7uOhDi/g7bC6iP/ZkN9sylaZYO6ap321AAv0sJxlYpg70So+Z I9Ig== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc; bh=T2C4sxYRs2lEzCnHUaLRTNKBKI12YQqfPe+dPUrodjM=; b=AbBmCOKykdhRJWn1EfBEfit3QxBay9Mowwb1hdiu9RQ6MhKpoowwkBUDfLnoInKa18 M5tpVz2uagyjIGHOktbTq+AcrJcbD9sE3nHh0tUQzQZvlGhHYl/hCwT6RhfMmw6GOVlJ Y3Tv19Xf5kB4mxDw79h/U01zlZYZBqVhUXmPbISOhINtR0oPaIIdcQt3HxKqE18ahnE2 F76xHybb4C9yCkTdbj+U7aM4GkcTHrRgvkJs+D3XHcecqGv8Cr7ykbc2ZCMM6RiZKF7C NK1QPjLD8lkl7vaguen84XKjC0WSBFJdO80gzOWmcDasJXczV5ct6umSme0rDjEkNNWL NT+w== X-Gm-Message-State: ACgBeo2GBe88ANgm5jNCiPBnzM8TkmMHH47xsFdmxbV33EtGc0fhf/QI xk2fR1AJq/ayMnvFrbSVO+H0Q9pzAA== X-Google-Smtp-Source: AA6agR7pPBxBn4gZSDR06aP3DtZ7Ndp+4q6FXFUDOBECLllaiRHkqTE+4IlfvAW4YcpDEfo5TzfTEk2i4g== X-Received: from elver.muc.corp.google.com ([2a00:79e0:9c:201:196d:4fc7:fa9c:62e3]) (user=elver job=sendgmr) by 2002:a17:907:3f97:b0:741:84b4:8356 with SMTP id hr23-20020a1709073f9700b0074184b48356mr3916826ejc.148.1661777297695; Mon, 29 Aug 2022 05:48:17 -0700 (PDT) Date: Mon, 29 Aug 2022 14:47:12 +0200 In-Reply-To: <20220829124719.675715-1-elver@google.com> Mime-Version: 1.0 References: <20220829124719.675715-1-elver@google.com> X-Mailer: git-send-email 2.37.2.672.g94769d06f0-goog Message-ID: <20220829124719.675715-8-elver@google.com> Subject: [PATCH v4 07/14] perf/hw_breakpoint: Make hw_breakpoint_weight() inlinable From: Marco Elver To: elver@google.com, Peter Zijlstra , Frederic Weisbecker , Ingo Molnar Cc: Thomas Gleixner , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , Dmitry Vyukov , Michael Ellerman , linuxppc-dev@lists.ozlabs.org, linux-perf-users@vger.kernel.org, x86@kernel.org, linux-sh@vger.kernel.org, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org, Ian Rogers Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org Due to being a __weak function, hw_breakpoint_weight() will cause the compiler to always emit a call to it. This generates unnecessarily bad code (register spills etc.) for no good reason; in fact it appears in profiles of `perf bench -r 100 breakpoint thread -b 4 -p 128 -t 512`: ... 0.70% [kernel] [k] hw_breakpoint_weight ... While a small percentage, no architecture defines its own hw_breakpoint_weight() nor are there users outside hw_breakpoint.c, which makes the fact it is currently __weak a poor choice. Change hw_breakpoint_weight()'s definition to follow a similar protocol to hw_breakpoint_slots(), such that if defines hw_breakpoint_weight(), we'll use it instead. 
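That is, if the architecture's <asm/hw_breakpoint.h> defines hw_breakpoint_weight(), that definition is used; otherwise the generic default introduced by this patch applies (condensed from the diff below):

  /* kernel/events/hw_breakpoint.c: */
  #ifndef hw_breakpoint_weight
  static inline int hw_breakpoint_weight(struct perf_event *bp)
  {
          return 1;       /* default: every breakpoint occupies one slot */
  }
  #endif

An architecture that ever needed a different weight would add a #define hw_breakpoint_weight(bp) to its <asm/hw_breakpoint.h>; as noted above, none does today.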
The result is that it is inlined and no longer shows up in profiles. Signed-off-by: Marco Elver Reviewed-by: Dmitry Vyukov Acked-by: Ian Rogers --- include/linux/hw_breakpoint.h | 1 - kernel/events/hw_breakpoint.c | 4 +++- 2 files changed, 3 insertions(+), 2 deletions(-) diff --git a/include/linux/hw_breakpoint.h b/include/linux/hw_breakpoint.h index a3fb846705eb..f319bd26b030 100644 --- a/include/linux/hw_breakpoint.h +++ b/include/linux/hw_breakpoint.h @@ -80,7 +80,6 @@ extern int dbg_reserve_bp_slot(struct perf_event *bp); extern int dbg_release_bp_slot(struct perf_event *bp); extern int reserve_bp_slot(struct perf_event *bp); extern void release_bp_slot(struct perf_event *bp); -int hw_breakpoint_weight(struct perf_event *bp); int arch_reserve_bp_slot(struct perf_event *bp); void arch_release_bp_slot(struct perf_event *bp); void arch_unregister_hw_breakpoint(struct perf_event *bp); diff --git a/kernel/events/hw_breakpoint.c b/kernel/events/hw_breakpoint.c index 9fb66d358d81..9c9bf17666a5 100644 --- a/kernel/events/hw_breakpoint.c +++ b/kernel/events/hw_breakpoint.c @@ -124,10 +124,12 @@ static __init int init_breakpoint_slots(void) } #endif -__weak int hw_breakpoint_weight(struct perf_event *bp) +#ifndef hw_breakpoint_weight +static inline int hw_breakpoint_weight(struct perf_event *bp) { return 1; } +#endif static inline enum bp_type_idx find_slot_idx(u64 bp_type) { From patchwork Mon Aug 29 12:47:13 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marco Elver X-Patchwork-Id: 12957758 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 48372ECAAD4 for ; Mon, 29 Aug 2022 12:57:24 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230463AbiH2M5W (ORCPT ); Mon, 29 Aug 2022 08:57:22 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47844 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230290AbiH2M4b (ORCPT ); Mon, 29 Aug 2022 08:56:31 -0400 Received: from mail-ed1-x54a.google.com (mail-ed1-x54a.google.com [IPv6:2a00:1450:4864:20::54a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A7CF4B44 for ; Mon, 29 Aug 2022 05:48:21 -0700 (PDT) Received: by mail-ed1-x54a.google.com with SMTP id b13-20020a056402350d00b0043dfc84c533so5338329edd.5 for ; Mon, 29 Aug 2022 05:48:21 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc; bh=mMk84n4RwXUmdOHQie4vTMaJ1dE0ku+oL7Yd7vDc1Ps=; b=W7W/4jDzGQZPpWdKVVb3iyVFqESRjMIgUO/Wp66DnWVzBuac4gaS/+1d/8UGTZ19PO yNtk8YvzBHI+IABZPJpTnSP7zAh86D71lOIr3JNjBItKmnkUoHgHOUPUlxwxGnAm4oS5 86BJQ/SK7KYzWOtLyDZjBernRUKqylcv/xvvA/bW3Q1BXcc+MCMG9JT0Igf39PMzbjua NcJLlRx2gSgtfzMG9Setvh6XnVCZ+mASJQamzHBKwMdP7yYZDkYPH8rxnXE9jJiadyEE k/PiCxKPnf4lA4uPuFTPjgJjOrcTc1ZRCS/25a2bUTBNJmAmEojBOeVuXhG7Ot0UUAKZ Y4rA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc; bh=mMk84n4RwXUmdOHQie4vTMaJ1dE0ku+oL7Yd7vDc1Ps=; b=rJi7c8qcXH8PGhwTGq3bnpU1MebhBtm2lIuNrBwk0LpqO+MLKfyXjOSZcuWFL1B42/ QbmwcOmB8pFEFbCuh9GmqCU9sTJelVXFyn5aXS5h5WBYTJUP8L83+52tLUhB6wUfrCB2 
FLdLtK8dJdfDjkrCK9wVOZJTK5z5lYd0/QhgvcWZZpbZG0I+90x/jam8G7col01LV7ZS ab6voNDzhR45WnO0Exzj5Ttso9a+qwfv9pa6geR1PYoEKihbhvPI+jmovnU8MzZRrAV9 wj0aEWbpQ/VEoo8mhl9LKntZMj/x8pYAtSgkVvlIVdJtMmL20RsgQHLf/g3AIUFT0Zpf CU8Q== X-Gm-Message-State: ACgBeo0we+eeF0z8prbFH7ThnTitgoDaCyCqpFVuJ8mRqQX6n8TcBxyr 1UuOJn9hCdXw1Rw7DJ8mE9850LKQvg== X-Google-Smtp-Source: AA6agR6SxUOVw/+qKc286EXnhwyeQv/Uts/8z5qu8IaKngSGvfFuyE7tYrd41vyrdbuHS1CgREQPsDPnsA== X-Received: from elver.muc.corp.google.com ([2a00:79e0:9c:201:196d:4fc7:fa9c:62e3]) (user=elver job=sendgmr) by 2002:a05:6402:5024:b0:440:e4ad:f7b6 with SMTP id p36-20020a056402502400b00440e4adf7b6mr16547207eda.358.1661777300281; Mon, 29 Aug 2022 05:48:20 -0700 (PDT) Date: Mon, 29 Aug 2022 14:47:13 +0200 In-Reply-To: <20220829124719.675715-1-elver@google.com> Mime-Version: 1.0 References: <20220829124719.675715-1-elver@google.com> X-Mailer: git-send-email 2.37.2.672.g94769d06f0-goog Message-ID: <20220829124719.675715-9-elver@google.com> Subject: [PATCH v4 08/14] perf/hw_breakpoint: Remove useless code related to flexible breakpoints From: Marco Elver To: elver@google.com, Peter Zijlstra , Frederic Weisbecker , Ingo Molnar Cc: Thomas Gleixner , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , Dmitry Vyukov , Michael Ellerman , linuxppc-dev@lists.ozlabs.org, linux-perf-users@vger.kernel.org, x86@kernel.org, linux-sh@vger.kernel.org, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org, Ian Rogers Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org Flexible breakpoints have never been implemented, with bp_cpuinfo::flexible always being 0. Unfortunately, they still occupy 4 bytes in each bp_cpuinfo and bp_busy_slots, as well as computing the max flexible count in fetch_bp_busy_slots(). This again causes suboptimal code generation, when we always know that `!!slots.flexible` will be 0. Just get rid of the flexible "placeholder" and remove all real code related to it. Make a note in the comment related to the constraints algorithm but don't remove them from the algorithm, so that if in future flexible breakpoints need supporting, it should be trivial to revive them (along with reverting this change). Signed-off-by: Marco Elver Reviewed-by: Dmitry Vyukov Acked-by: Ian Rogers --- v2: * Also remove struct bp_busy_slots, and simplify functions. --- kernel/events/hw_breakpoint.c | 57 +++++++++++------------------------ 1 file changed, 17 insertions(+), 40 deletions(-) diff --git a/kernel/events/hw_breakpoint.c b/kernel/events/hw_breakpoint.c index 9c9bf17666a5..8b40fca1a063 100644 --- a/kernel/events/hw_breakpoint.c +++ b/kernel/events/hw_breakpoint.c @@ -45,8 +45,6 @@ struct bp_cpuinfo { #else unsigned int *tsk_pinned; #endif - /* Number of non-pinned cpu/task breakpoints in a cpu */ - unsigned int flexible; /* XXX: placeholder, see fetch_this_slot() */ }; static DEFINE_PER_CPU(struct bp_cpuinfo, bp_cpuinfo[TYPE_MAX]); @@ -67,12 +65,6 @@ static const struct rhashtable_params task_bps_ht_params = { static bool constraints_initialized __ro_after_init; -/* Gather the number of total pinned and un-pinned bp in a cpuset */ -struct bp_busy_slots { - unsigned int pinned; - unsigned int flexible; -}; - /* Serialize accesses to the above constraints */ static DEFINE_MUTEX(nr_bp_mutex); @@ -190,14 +182,14 @@ static const struct cpumask *cpumask_of_bp(struct perf_event *bp) } /* - * Report the number of pinned/un-pinned breakpoints we have in - * a given cpu (cpu > -1) or in all of them (cpu = -1). 
+ * Returns the max pinned breakpoint slots in a given + * CPU (cpu > -1) or across all of them (cpu = -1). */ -static void -fetch_bp_busy_slots(struct bp_busy_slots *slots, struct perf_event *bp, - enum bp_type_idx type) +static int +max_bp_pinned_slots(struct perf_event *bp, enum bp_type_idx type) { const struct cpumask *cpumask = cpumask_of_bp(bp); + int pinned_slots = 0; int cpu; for_each_cpu(cpu, cpumask) { @@ -210,24 +202,10 @@ fetch_bp_busy_slots(struct bp_busy_slots *slots, struct perf_event *bp, else nr += task_bp_pinned(cpu, bp, type); - if (nr > slots->pinned) - slots->pinned = nr; - - nr = info->flexible; - if (nr > slots->flexible) - slots->flexible = nr; + pinned_slots = max(nr, pinned_slots); } -} -/* - * For now, continue to consider flexible as pinned, until we can - * ensure no flexible event can ever be scheduled before a pinned event - * in a same cpu. - */ -static void -fetch_this_slot(struct bp_busy_slots *slots, int weight) -{ - slots->pinned += weight; + return pinned_slots; } /* @@ -298,7 +276,12 @@ __weak void arch_unregister_hw_breakpoint(struct perf_event *bp) } /* - * Constraints to check before allowing this new breakpoint counter: + * Constraints to check before allowing this new breakpoint counter. + * + * Note: Flexible breakpoints are currently unimplemented, but outlined in the + * below algorithm for completeness. The implementation treats flexible as + * pinned due to no guarantee that we currently always schedule flexible events + * before a pinned event in a same CPU. * * == Non-pinned counter == (Considered as pinned for now) * @@ -340,8 +323,8 @@ __weak void arch_unregister_hw_breakpoint(struct perf_event *bp) */ static int __reserve_bp_slot(struct perf_event *bp, u64 bp_type) { - struct bp_busy_slots slots = {0}; enum bp_type_idx type; + int max_pinned_slots; int weight; int ret; @@ -357,15 +340,9 @@ static int __reserve_bp_slot(struct perf_event *bp, u64 bp_type) type = find_slot_idx(bp_type); weight = hw_breakpoint_weight(bp); - fetch_bp_busy_slots(&slots, bp, type); - /* - * Simulate the addition of this breakpoint to the constraints - * and see the result. - */ - fetch_this_slot(&slots, weight); - - /* Flexible counters need to keep at least one slot */ - if (slots.pinned + (!!slots.flexible) > hw_breakpoint_slots_cached(type)) + /* Check if this new breakpoint can be satisfied across all CPUs. 
*/ + max_pinned_slots = max_bp_pinned_slots(bp, type) + weight; + if (max_pinned_slots > hw_breakpoint_slots_cached(type)) return -ENOSPC; ret = arch_reserve_bp_slot(bp); From patchwork Mon Aug 29 12:47:14 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marco Elver X-Patchwork-Id: 12957759 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D5A86ECAAD2 for ; Mon, 29 Aug 2022 12:57:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230254AbiH2M5d (ORCPT ); Mon, 29 Aug 2022 08:57:33 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50030 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230110AbiH2M4b (ORCPT ); Mon, 29 Aug 2022 08:56:31 -0400 Received: from mail-yw1-x1149.google.com (mail-yw1-x1149.google.com [IPv6:2607:f8b0:4864:20::1149]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9478AF7F for ; Mon, 29 Aug 2022 05:48:23 -0700 (PDT) Received: by mail-yw1-x1149.google.com with SMTP id 00721157ae682-340a4dcb403so106836077b3.22 for ; Mon, 29 Aug 2022 05:48:23 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc; bh=17GhIUVsdBD381xCSlsSf/bvMmuZOpgOvCsizfGJYww=; b=CbUwP42fuhmKg7ri7biMklFhB21W8cZzoP21+kt7Lgj6pRAyQfpzbZRWFbKeP6coDy AtkU/EIId8EZZt3IqUVsnXV5/PdeWQ/wbj3+W3aWt+UGVld9yGpyxTMNZ+LkqFqFDq4E Eg6dryn4nCZzZyYEDkk4dENmaVGKAEjjahQAFSW9MrwIn2ia3Ddf8nIbwrLhCtzzbI6c JLfLxik35LBKo1sshesg7tGD+yKVjyy6SkFYmLaXuEjFwS58Wn8DixsyUbFkK1+1u4L0 yG9CGURYktVy3sgJ3tVijbDxB25JjEwg6LeoC3rHTdBX+q3W9l3w84Qs4AN3cxM2jwfa 1Pqg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc; bh=17GhIUVsdBD381xCSlsSf/bvMmuZOpgOvCsizfGJYww=; b=K/4Rg/XApDxBuTXSOv6aEKgDmDELx02k2qgccbYZCniqhhas3cuGFq3SiPVPwknsGn iNg4AxEH2jQvEDJoouKmP1zpiD4htS1BVrUXLHYY9/NjLp+pO29KFl4NMAL6BKoVirNR OJnsbXmoFlE3u5GyCTxlKJtxQalsgIDlgiC6S8oK9mDkWdqgrVn0bfTfOVGEat3zfB4z y6j+7ZhDTdI/79js0MmO8GrkbqrVmG9UNY3gSWssFUFdoVCkUJDNtsnlZghSM6nVDpX5 BtZMu0HjXHD9pKQl3AorKTsvj65cPmTG0uiLFVlza2nd0GL0Z8hyPQmg/aP9HKYoHzIH 7mFA== X-Gm-Message-State: ACgBeo3Ona22ABcyH/lDrFs0Pbl9mesJf6FtA0cF1z1/mD0NL9O4Bpgt e0e57MiLlFtFH0oFDTPri2uUHy5M9g== X-Google-Smtp-Source: AA6agR7saRNQPyfeUGNlvdKaMJxYtoC1JW5xi3rwIoqgRCRjBdEPFh2wH5NnVmpB77WBemcvNGm+4C/jLw== X-Received: from elver.muc.corp.google.com ([2a00:79e0:9c:201:196d:4fc7:fa9c:62e3]) (user=elver job=sendgmr) by 2002:a0d:f402:0:b0:33c:eda4:8557 with SMTP id d2-20020a0df402000000b0033ceda48557mr9486024ywf.183.1661777302876; Mon, 29 Aug 2022 05:48:22 -0700 (PDT) Date: Mon, 29 Aug 2022 14:47:14 +0200 In-Reply-To: <20220829124719.675715-1-elver@google.com> Mime-Version: 1.0 References: <20220829124719.675715-1-elver@google.com> X-Mailer: git-send-email 2.37.2.672.g94769d06f0-goog Message-ID: <20220829124719.675715-10-elver@google.com> Subject: [PATCH v4 09/14] powerpc/hw_breakpoint: Avoid relying on caller synchronization From: Marco Elver To: elver@google.com, Peter Zijlstra , Frederic Weisbecker , Ingo Molnar Cc: Thomas Gleixner , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , Dmitry 
Vyukov , Michael Ellerman , linuxppc-dev@lists.ozlabs.org, linux-perf-users@vger.kernel.org, x86@kernel.org, linux-sh@vger.kernel.org, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org, Ian Rogers Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org Internal data structures (cpu_bps, task_bps) of powerpc's hw_breakpoint implementation have relied on nr_bp_mutex serializing access to them. Before overhauling synchronization of kernel/events/hw_breakpoint.c, introduce 2 spinlocks to synchronize cpu_bps and task_bps respectively, thus avoiding reliance on callers synchronizing powerpc's hw_breakpoint. Reported-by: Dmitry Vyukov Signed-off-by: Marco Elver Acked-by: Dmitry Vyukov Acked-by: Ian Rogers --- v2: * New patch. --- arch/powerpc/kernel/hw_breakpoint.c | 53 ++++++++++++++++++++++------- 1 file changed, 40 insertions(+), 13 deletions(-) diff --git a/arch/powerpc/kernel/hw_breakpoint.c b/arch/powerpc/kernel/hw_breakpoint.c index 2669f80b3a49..8db1a15d7acb 100644 --- a/arch/powerpc/kernel/hw_breakpoint.c +++ b/arch/powerpc/kernel/hw_breakpoint.c @@ -15,6 +15,7 @@ #include #include #include +#include #include #include @@ -129,7 +130,14 @@ struct breakpoint { bool ptrace_bp; }; +/* + * While kernel/events/hw_breakpoint.c does its own synchronization, we cannot + * rely on it safely synchronizing internals here; however, we can rely on it + * not requesting more breakpoints than available. + */ +static DEFINE_SPINLOCK(cpu_bps_lock); static DEFINE_PER_CPU(struct breakpoint *, cpu_bps[HBP_NUM_MAX]); +static DEFINE_SPINLOCK(task_bps_lock); static LIST_HEAD(task_bps); static struct breakpoint *alloc_breakpoint(struct perf_event *bp) @@ -174,7 +182,9 @@ static int task_bps_add(struct perf_event *bp) if (IS_ERR(tmp)) return PTR_ERR(tmp); + spin_lock(&task_bps_lock); list_add(&tmp->list, &task_bps); + spin_unlock(&task_bps_lock); return 0; } @@ -182,6 +192,7 @@ static void task_bps_remove(struct perf_event *bp) { struct list_head *pos, *q; + spin_lock(&task_bps_lock); list_for_each_safe(pos, q, &task_bps) { struct breakpoint *tmp = list_entry(pos, struct breakpoint, list); @@ -191,6 +202,7 @@ static void task_bps_remove(struct perf_event *bp) break; } } + spin_unlock(&task_bps_lock); } /* @@ -200,12 +212,17 @@ static void task_bps_remove(struct perf_event *bp) static bool all_task_bps_check(struct perf_event *bp) { struct breakpoint *tmp; + bool ret = false; + spin_lock(&task_bps_lock); list_for_each_entry(tmp, &task_bps, list) { - if (!can_co_exist(tmp, bp)) - return true; + if (!can_co_exist(tmp, bp)) { + ret = true; + break; + } } - return false; + spin_unlock(&task_bps_lock); + return ret; } /* @@ -215,13 +232,18 @@ static bool all_task_bps_check(struct perf_event *bp) static bool same_task_bps_check(struct perf_event *bp) { struct breakpoint *tmp; + bool ret = false; + spin_lock(&task_bps_lock); list_for_each_entry(tmp, &task_bps, list) { if (tmp->bp->hw.target == bp->hw.target && - !can_co_exist(tmp, bp)) - return true; + !can_co_exist(tmp, bp)) { + ret = true; + break; + } } - return false; + spin_unlock(&task_bps_lock); + return ret; } static int cpu_bps_add(struct perf_event *bp) @@ -234,6 +256,7 @@ static int cpu_bps_add(struct perf_event *bp) if (IS_ERR(tmp)) return PTR_ERR(tmp); + spin_lock(&cpu_bps_lock); cpu_bp = per_cpu_ptr(cpu_bps, bp->cpu); for (i = 0; i < nr_wp_slots(); i++) { if (!cpu_bp[i]) { @@ -241,6 +264,7 @@ static int cpu_bps_add(struct perf_event *bp) break; } } + spin_unlock(&cpu_bps_lock); return 0; } @@ -249,6 +273,7 @@ static void 
cpu_bps_remove(struct perf_event *bp) struct breakpoint **cpu_bp; int i = 0; + spin_lock(&cpu_bps_lock); cpu_bp = per_cpu_ptr(cpu_bps, bp->cpu); for (i = 0; i < nr_wp_slots(); i++) { if (!cpu_bp[i]) @@ -260,19 +285,25 @@ static void cpu_bps_remove(struct perf_event *bp) break; } } + spin_unlock(&cpu_bps_lock); } static bool cpu_bps_check(int cpu, struct perf_event *bp) { struct breakpoint **cpu_bp; + bool ret = false; int i; + spin_lock(&cpu_bps_lock); cpu_bp = per_cpu_ptr(cpu_bps, cpu); for (i = 0; i < nr_wp_slots(); i++) { - if (cpu_bp[i] && !can_co_exist(cpu_bp[i], bp)) - return true; + if (cpu_bp[i] && !can_co_exist(cpu_bp[i], bp)) { + ret = true; + break; + } } - return false; + spin_unlock(&cpu_bps_lock); + return ret; } static bool all_cpu_bps_check(struct perf_event *bp) @@ -286,10 +317,6 @@ static bool all_cpu_bps_check(struct perf_event *bp) return false; } -/* - * We don't use any locks to serialize accesses to cpu_bps or task_bps - * because are already inside nr_bp_mutex. - */ int arch_reserve_bp_slot(struct perf_event *bp) { int ret; From patchwork Mon Aug 29 12:47:15 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marco Elver X-Patchwork-Id: 12957760 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E2118ECAAD5 for ; Mon, 29 Aug 2022 12:57:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229572AbiH2M5e (ORCPT ); Mon, 29 Aug 2022 08:57:34 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56252 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229880AbiH2M4c (ORCPT ); Mon, 29 Aug 2022 08:56:32 -0400 Received: from mail-ej1-x64a.google.com (mail-ej1-x64a.google.com [IPv6:2a00:1450:4864:20::64a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 39386D103 for ; Mon, 29 Aug 2022 05:48:27 -0700 (PDT) Received: by mail-ej1-x64a.google.com with SMTP id sg35-20020a170907a42300b007414343814bso1482554ejc.18 for ; Mon, 29 Aug 2022 05:48:27 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc; bh=o7nFvD+bAzzd8hTxdy6J5SFI8QAvwQGuiGUQWboy0hk=; b=DUROuHk1Z9qzwl+b2DTEg2S7HJMexhpmqANb/PJZ7O33R9XhSPct7AwehP14PQc/2o wK/bi8a2CwUsCNYLZ60BVrbEBT9p31oxiK+AfnIEOciyJytERkocF14TQxcpALfKW3+m b2Ub+eMgVCVEVNppFz86zBqIyKvR0lUxlYrSs0Pqqu53c88/O9WnN5AbglZY4E2lNY3X UurK/UKzohtpKpt81PvQ4NXYcynS9qosWESRGDtGbt4p3CdpxU/ZwTvmCC71hiCrb5Iz oBnMf/JVbgAxkVDKJuMfwanruY91nDmZy+mUGQ78tN/tN+0RJuWqqSZWWdmcn4+YhWS4 IZ8g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc; bh=o7nFvD+bAzzd8hTxdy6J5SFI8QAvwQGuiGUQWboy0hk=; b=TzxrHSDb9wJaEfIaAKu1mn8AiAOwJMdCV9RKYg6HcLFdobRl8OeddMQIzdnJh3Perg B5+9jBrw1UtXHZ0hb/w16gsIBqPNcTN1x+R2batzgBvaqQaFZHralVXGwyi3KKZnm87z S4y2SHG3kdKuPtdek4kEbkqNMauejLIJB5fLKHHc/3rQo5Ir+D7Ix9OkJQTBlivZStUV 7kOBpVDDVxO6/VLUOnid1U+1e6EUTZZDd1LweTqFo3okADBR9kpKNffzxLBO7uK/ozo+ kOGiWRmDBOlHsm0HAAnXEsSRmhCD8bSRq2OdgaP/iWOczSPI0WKBGdoEPwftxs7zDswV V2vg== X-Gm-Message-State: ACgBeo1vbVWUaoaZYxf7V/yTwrwR7JUvfAw1B5hukUhrr7pRISiJuUl1 8Ft5mQ+KpQGJ80xBNkYcvdMdWwmksA== X-Google-Smtp-Source: 
AA6agR5wokwGhbG/dPuolBfo3Bpkl9cRXVj3x5ekleyIMoAvz+i6ptf2ezpCAANeuPi0p5VgMRtvOQZZzA== X-Received: from elver.muc.corp.google.com ([2a00:79e0:9c:201:196d:4fc7:fa9c:62e3]) (user=elver job=sendgmr) by 2002:a17:907:75c6:b0:741:75a0:b82b with SMTP id jl6-20020a17090775c600b0074175a0b82bmr4672915ejc.465.1661777305817; Mon, 29 Aug 2022 05:48:25 -0700 (PDT) Date: Mon, 29 Aug 2022 14:47:15 +0200 In-Reply-To: <20220829124719.675715-1-elver@google.com> Mime-Version: 1.0 References: <20220829124719.675715-1-elver@google.com> X-Mailer: git-send-email 2.37.2.672.g94769d06f0-goog Message-ID: <20220829124719.675715-11-elver@google.com> Subject: [PATCH v4 10/14] locking/percpu-rwsem: Add percpu_is_write_locked() and percpu_is_read_locked() From: Marco Elver To: elver@google.com, Peter Zijlstra , Frederic Weisbecker , Ingo Molnar Cc: Thomas Gleixner , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , Dmitry Vyukov , Michael Ellerman , linuxppc-dev@lists.ozlabs.org, linux-perf-users@vger.kernel.org, x86@kernel.org, linux-sh@vger.kernel.org, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org, Ian Rogers Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org Implement simple accessors to probe percpu-rwsem's locked state: percpu_is_write_locked(), percpu_is_read_locked(). Signed-off-by: Marco Elver Reviewed-by: Dmitry Vyukov Acked-by: Ian Rogers --- v4: * Due to spurious read_count increments in __percpu_down_read_trylock() if sem->block != 0, check that !sem->block (reported by Peter). v2: * New patch. --- include/linux/percpu-rwsem.h | 6 ++++++ kernel/locking/percpu-rwsem.c | 6 ++++++ 2 files changed, 12 insertions(+) diff --git a/include/linux/percpu-rwsem.h b/include/linux/percpu-rwsem.h index 5fda40f97fe9..36b942b67b7d 100644 --- a/include/linux/percpu-rwsem.h +++ b/include/linux/percpu-rwsem.h @@ -121,9 +121,15 @@ static inline void percpu_up_read(struct percpu_rw_semaphore *sem) preempt_enable(); } +extern bool percpu_is_read_locked(struct percpu_rw_semaphore *); extern void percpu_down_write(struct percpu_rw_semaphore *); extern void percpu_up_write(struct percpu_rw_semaphore *); +static inline bool percpu_is_write_locked(struct percpu_rw_semaphore *sem) +{ + return atomic_read(&sem->block); +} + extern int __percpu_init_rwsem(struct percpu_rw_semaphore *, const char *, struct lock_class_key *); diff --git a/kernel/locking/percpu-rwsem.c b/kernel/locking/percpu-rwsem.c index 5fe4c5495ba3..185bd1c906b0 100644 --- a/kernel/locking/percpu-rwsem.c +++ b/kernel/locking/percpu-rwsem.c @@ -192,6 +192,12 @@ EXPORT_SYMBOL_GPL(__percpu_down_read); __sum; \ }) +bool percpu_is_read_locked(struct percpu_rw_semaphore *sem) +{ + return per_cpu_sum(*sem->read_count) != 0 && !atomic_read(&sem->block); +} +EXPORT_SYMBOL_GPL(percpu_is_read_locked); + /* * Return true if the modular sum of the sem->read_count per-CPU variable is * zero. 
If this sum is zero, then it is stable due to the fact that if any From patchwork Mon Aug 29 12:47:16 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marco Elver X-Patchwork-Id: 12957761 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0B7D7C0502C for ; Mon, 29 Aug 2022 12:57:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229880AbiH2M5f (ORCPT ); Mon, 29 Aug 2022 08:57:35 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54194 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229925AbiH2M4e (ORCPT ); Mon, 29 Aug 2022 08:56:34 -0400 Received: from mail-ed1-x549.google.com (mail-ed1-x549.google.com [IPv6:2a00:1450:4864:20::549]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1A75838BA for ; Mon, 29 Aug 2022 05:48:30 -0700 (PDT) Received: by mail-ed1-x549.google.com with SMTP id q32-20020a05640224a000b004462f105fa9so5352360eda.4 for ; Mon, 29 Aug 2022 05:48:30 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc; bh=4MQGLKQFnJVUvdjOru8g5RSblwlXonFN0qds5IJiSaQ=; b=CmjOUut951L999WLwmlt+UUBDkUkID0VEnORIg7zo6HFgK3OV1dg6yfjFy8NzK6oK2 zLlLk11AXKWtYXA8wpKINAh2ePz/dDl0QR2oPbemi82B7x2q3iTluIUyRMX+anE66I2M 6Vi3NkvbZ626wkSPsPMMUOxC2wuzkHgo37Eyb64MGWVbB9lA/ZmIMrQRO4BaB1q1TGRZ zoyv4RO0bBD+/8X/r8gqVNB7UpA6l5gWUEi7WyEbkyyw15yRCxW9sa/BYcZniPHfZgQa rOyC6ODvrQXIybKlXXFlXhGYU7cbmCF8TGbqa3sBYQBrCtpUKUNaC4qa0aRwjnMrSQ9P 7Cxw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc; bh=4MQGLKQFnJVUvdjOru8g5RSblwlXonFN0qds5IJiSaQ=; b=imwncuGRiqko/muTj7NjsVj5JU3dbJFtVR5d34IReeaX87Hw9AuTedP7zSp05bhUmm wfajbNwgYp/flIEN8rvHz102iV60zHhZtqg7rSfuCMUNTYoLlrQz9/GPC5gS51JBN4Sp FXVWu4cXjsJ+B0TaKm2I8SDUVgiXV7j6ti/uNhVvIIIk9q2YRELBSI+G0ogG+zkD/cMw Rp42fWgOZBi+8kvRp28N0xn+l3vpFQIXpTjhppz9bH2bTHQDexR7cSIuCj04TpU69LjZ xSqiSez2WuZwdTkzG+rkte+CZQO5KLldu58iEKJ5N8G8YtW5e06JF2Yc6hW819l1RE5b rfzA== X-Gm-Message-State: ACgBeo0C++NoDPMRhSIGN7pSztBBcg0bUmg0Xk15dkl51ySt53w4rgVE bB5Nueli2kfUd7QK8+2gTHk80Vv0WQ== X-Google-Smtp-Source: AA6agR4arMDiivSDdSaDT/h01qVLJ0XxsZnAksa8m8hmsn3BFyKI7hz0MJFrTOkhINw+OPBDtmWmGYUuLw== X-Received: from elver.muc.corp.google.com ([2a00:79e0:9c:201:196d:4fc7:fa9c:62e3]) (user=elver job=sendgmr) by 2002:aa7:c488:0:b0:448:d11:4830 with SMTP id m8-20020aa7c488000000b004480d114830mr10023735edq.97.1661777308667; Mon, 29 Aug 2022 05:48:28 -0700 (PDT) Date: Mon, 29 Aug 2022 14:47:16 +0200 In-Reply-To: <20220829124719.675715-1-elver@google.com> Mime-Version: 1.0 References: <20220829124719.675715-1-elver@google.com> X-Mailer: git-send-email 2.37.2.672.g94769d06f0-goog Message-ID: <20220829124719.675715-12-elver@google.com> Subject: [PATCH v4 11/14] perf/hw_breakpoint: Reduce contention with large number of tasks From: Marco Elver To: elver@google.com, Peter Zijlstra , Frederic Weisbecker , Ingo Molnar Cc: Thomas Gleixner , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , Dmitry Vyukov , Michael Ellerman , linuxppc-dev@lists.ozlabs.org, linux-perf-users@vger.kernel.org, 
x86@kernel.org, linux-sh@vger.kernel.org, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org, Ian Rogers Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org While optimizing task_bp_pinned()'s runtime complexity to O(1) on average helps reduce time spent in the critical section, we still suffer due to serializing everything via 'nr_bp_mutex'. Indeed, a profile shows that now contention is the biggest issue: 95.93% [kernel] [k] osq_lock 0.70% [kernel] [k] mutex_spin_on_owner 0.22% [kernel] [k] smp_cfm_core_cond 0.18% [kernel] [k] task_bp_pinned 0.18% [kernel] [k] rhashtable_jhash2 0.15% [kernel] [k] queued_spin_lock_slowpath when running the breakpoint benchmark with (system with 256 CPUs): | $> perf bench -r 30 breakpoint thread -b 4 -p 64 -t 64 | # Running 'breakpoint/thread' benchmark: | # Created/joined 30 threads with 4 breakpoints and 64 parallelism | Total time: 0.207 [sec] | | 108.267188 usecs/op | 6929.100000 usecs/op/cpu The main concern for synchronizing the breakpoint constraints data is that a consistent snapshot of the per-CPU and per-task data is observed. The access pattern is as follows: 1. If the target is a task: the task's pinned breakpoints are counted, checked for space, and then appended to; only bp_cpuinfo::cpu_pinned is used to check for conflicts with CPU-only breakpoints; bp_cpuinfo::tsk_pinned are incremented/decremented, but otherwise unused. 2. If the target is a CPU: bp_cpuinfo::cpu_pinned are counted, along with bp_cpuinfo::tsk_pinned; after a successful check, cpu_pinned is incremented. No per-task breakpoints are checked. Since rhltable safely synchronizes insertions/deletions, we can allow concurrency as follows: 1. If the target is a task: independent tasks may update and check the constraints concurrently, but same-task target calls need to be serialized; since bp_cpuinfo::tsk_pinned is only updated, but not checked, these modifications can happen concurrently by switching tsk_pinned to atomic_t. 2. If the target is a CPU: access to the per-CPU constraints needs to be serialized with other CPU-target and task-target callers (to stabilize the bp_cpuinfo::tsk_pinned snapshot). We can allow the above concurrency by introducing a per-CPU constraints data reader-writer lock (bp_cpuinfo_sem), and per-task mutexes (reuses task_struct::perf_event_mutex): 1. If the target is a task: acquires perf_event_mutex, and acquires bp_cpuinfo_sem as a reader. The choice of percpu-rwsem minimizes contention in the presence of many read-lock but few write-lock acquisitions: we assume many orders of magnitude more task target breakpoints creations/destructions than CPU target breakpoints. 2. If the target is a CPU: acquires bp_cpuinfo_sem as a writer. With these changes, contention with thousands of tasks is reduced to the point where waiting on locking no longer dominates the profile: | $> perf bench -r 30 breakpoint thread -b 4 -p 64 -t 64 | # Running 'breakpoint/thread' benchmark: | # Created/joined 30 threads with 4 breakpoints and 64 parallelism | Total time: 0.077 [sec] | | 40.201563 usecs/op | 2572.900000 usecs/op/cpu 21.54% [kernel] [k] task_bp_pinned 20.18% [kernel] [k] rhashtable_jhash2 6.81% [kernel] [k] toggle_bp_slot 5.47% [kernel] [k] queued_spin_lock_slowpath 3.75% [kernel] [k] smp_cfm_core_cond 3.48% [kernel] [k] bcmp On this particular setup that's a speedup of 2.7x. 
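For reference, the lock-selection rule described above condenses to the following sketch of bp_constraints_lock(), which the diff below adds in full:

static struct mutex *bp_constraints_lock(struct perf_event *bp)
{
	struct mutex *tsk_mtx = bp->hw.target ? &bp->hw.target->perf_event_mutex : NULL;

	if (tsk_mtx) {
		/* Task target: serialize same-task updates only. */
		mutex_lock(tsk_mtx);
		/* Independent tasks may update/check constraints concurrently. */
		percpu_down_read(&bp_cpuinfo_sem);
	} else {
		/* CPU target: exclude all updaters to stabilize the tsk_pinned snapshot. */
		percpu_down_write(&bp_cpuinfo_sem);
	}
	return tsk_mtx;
}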
We're also getting closer to the theoretical ideal performance through optimizations in hw_breakpoint.c -- constraints accounting disabled: | perf bench -r 30 breakpoint thread -b 4 -p 64 -t 64 | # Running 'breakpoint/thread' benchmark: | # Created/joined 30 threads with 4 breakpoints and 64 parallelism | Total time: 0.067 [sec] | | 35.286458 usecs/op | 2258.333333 usecs/op/cpu Which means the current implementation is ~12% slower than the theoretical ideal. For reference, performance without any breakpoints: | $> bench -r 30 breakpoint thread -b 0 -p 64 -t 64 | # Running 'breakpoint/thread' benchmark: | # Created/joined 30 threads with 0 breakpoints and 64 parallelism | Total time: 0.060 [sec] | | 31.365625 usecs/op | 2007.400000 usecs/op/cpu On a system with 256 CPUs, the theoretical ideal is only ~12% slower than no breakpoints at all; the current implementation is ~28% slower. Signed-off-by: Marco Elver Reviewed-by: Dmitry Vyukov Acked-by: Ian Rogers --- v2: * Use percpu-rwsem instead of rwlock. * Use task_struct::perf_event_mutex. See code comment for reasoning. ==> Speedup of 2.7x (vs 2.5x in v1). --- kernel/events/hw_breakpoint.c | 161 ++++++++++++++++++++++++++++------ 1 file changed, 133 insertions(+), 28 deletions(-) diff --git a/kernel/events/hw_breakpoint.c b/kernel/events/hw_breakpoint.c index 8b40fca1a063..229c6f4fae75 100644 --- a/kernel/events/hw_breakpoint.c +++ b/kernel/events/hw_breakpoint.c @@ -19,6 +19,7 @@ #include +#include #include #include #include @@ -28,6 +29,7 @@ #include #include #include +#include #include #include #include @@ -41,9 +43,9 @@ struct bp_cpuinfo { unsigned int cpu_pinned; /* tsk_pinned[n] is the number of tasks having n+1 breakpoints */ #ifdef hw_breakpoint_slots - unsigned int tsk_pinned[hw_breakpoint_slots(0)]; + atomic_t tsk_pinned[hw_breakpoint_slots(0)]; #else - unsigned int *tsk_pinned; + atomic_t *tsk_pinned; #endif }; @@ -65,8 +67,79 @@ static const struct rhashtable_params task_bps_ht_params = { static bool constraints_initialized __ro_after_init; -/* Serialize accesses to the above constraints */ -static DEFINE_MUTEX(nr_bp_mutex); +/* + * Synchronizes accesses to the per-CPU constraints; the locking rules are: + * + * 1. Atomic updates to bp_cpuinfo::tsk_pinned only require a held read-lock + * (due to bp_slots_histogram::count being atomic, no update are lost). + * + * 2. Holding a write-lock is required for computations that require a + * stable snapshot of all bp_cpuinfo::tsk_pinned. + * + * 3. In all other cases, non-atomic accesses require the appropriately held + * lock (read-lock for read-only accesses; write-lock for reads/writes). + */ +DEFINE_STATIC_PERCPU_RWSEM(bp_cpuinfo_sem); + +/* + * Return mutex to serialize accesses to per-task lists in task_bps_ht. Since + * rhltable synchronizes concurrent insertions/deletions, independent tasks may + * insert/delete concurrently; therefore, a mutex per task is sufficient. + * + * Uses task_struct::perf_event_mutex, to avoid extending task_struct with a + * hw_breakpoint-only mutex, which may be infrequently used. The caveat here is + * that hw_breakpoint may contend with per-task perf event list management. The + * assumption is that perf usecases involving hw_breakpoints are very unlikely + * to result in unnecessary contention. + */ +static inline struct mutex *get_task_bps_mutex(struct perf_event *bp) +{ + struct task_struct *tsk = bp->hw.target; + + return tsk ? 
&tsk->perf_event_mutex : NULL; +} + +static struct mutex *bp_constraints_lock(struct perf_event *bp) +{ + struct mutex *tsk_mtx = get_task_bps_mutex(bp); + + if (tsk_mtx) { + mutex_lock(tsk_mtx); + percpu_down_read(&bp_cpuinfo_sem); + } else { + percpu_down_write(&bp_cpuinfo_sem); + } + + return tsk_mtx; +} + +static void bp_constraints_unlock(struct mutex *tsk_mtx) +{ + if (tsk_mtx) { + percpu_up_read(&bp_cpuinfo_sem); + mutex_unlock(tsk_mtx); + } else { + percpu_up_write(&bp_cpuinfo_sem); + } +} + +static bool bp_constraints_is_locked(struct perf_event *bp) +{ + struct mutex *tsk_mtx = get_task_bps_mutex(bp); + + return percpu_is_write_locked(&bp_cpuinfo_sem) || + (tsk_mtx ? mutex_is_locked(tsk_mtx) : + percpu_is_read_locked(&bp_cpuinfo_sem)); +} + +static inline void assert_bp_constraints_lock_held(struct perf_event *bp) +{ + struct mutex *tsk_mtx = get_task_bps_mutex(bp); + + if (tsk_mtx) + lockdep_assert_held(tsk_mtx); + lockdep_assert_held(&bp_cpuinfo_sem); +} #ifdef hw_breakpoint_slots /* @@ -97,7 +170,7 @@ static __init int init_breakpoint_slots(void) for (i = 0; i < TYPE_MAX; i++) { struct bp_cpuinfo *info = get_bp_info(cpu, i); - info->tsk_pinned = kcalloc(__nr_bp_slots[i], sizeof(int), GFP_KERNEL); + info->tsk_pinned = kcalloc(__nr_bp_slots[i], sizeof(atomic_t), GFP_KERNEL); if (!info->tsk_pinned) goto err; } @@ -137,11 +210,19 @@ static inline enum bp_type_idx find_slot_idx(u64 bp_type) */ static unsigned int max_task_bp_pinned(int cpu, enum bp_type_idx type) { - unsigned int *tsk_pinned = get_bp_info(cpu, type)->tsk_pinned; + atomic_t *tsk_pinned = get_bp_info(cpu, type)->tsk_pinned; int i; + /* + * At this point we want to have acquired the bp_cpuinfo_sem as a + * writer to ensure that there are no concurrent writers in + * toggle_bp_task_slot() to tsk_pinned, and we get a stable snapshot. + */ + lockdep_assert_held_write(&bp_cpuinfo_sem); + for (i = hw_breakpoint_slots_cached(type) - 1; i >= 0; i--) { - if (tsk_pinned[i] > 0) + ASSERT_EXCLUSIVE_WRITER(tsk_pinned[i]); /* Catch unexpected writers. */ + if (atomic_read(&tsk_pinned[i]) > 0) return i + 1; } @@ -158,6 +239,11 @@ static int task_bp_pinned(int cpu, struct perf_event *bp, enum bp_type_idx type) struct perf_event *iter; int count = 0; + /* + * We need a stable snapshot of the per-task breakpoint list. + */ + assert_bp_constraints_lock_held(bp); + rcu_read_lock(); head = rhltable_lookup(&task_bps_ht, &bp->hw.target, task_bps_ht_params); if (!head) @@ -214,16 +300,25 @@ max_bp_pinned_slots(struct perf_event *bp, enum bp_type_idx type) static void toggle_bp_task_slot(struct perf_event *bp, int cpu, enum bp_type_idx type, int weight) { - unsigned int *tsk_pinned = get_bp_info(cpu, type)->tsk_pinned; + atomic_t *tsk_pinned = get_bp_info(cpu, type)->tsk_pinned; int old_idx, new_idx; + /* + * If bp->hw.target, tsk_pinned is only modified, but not used + * otherwise. We can permit concurrent updates as long as there are no + * other uses: having acquired bp_cpuinfo_sem as a reader allows + * concurrent updates here. Uses of tsk_pinned will require acquiring + * bp_cpuinfo_sem as a writer to stabilize tsk_pinned's value. 
+ */ + lockdep_assert_held_read(&bp_cpuinfo_sem); + old_idx = task_bp_pinned(cpu, bp, type) - 1; new_idx = old_idx + weight; if (old_idx >= 0) - tsk_pinned[old_idx]--; + atomic_dec(&tsk_pinned[old_idx]); if (new_idx >= 0) - tsk_pinned[new_idx]++; + atomic_inc(&tsk_pinned[new_idx]); } /* @@ -241,6 +336,7 @@ toggle_bp_slot(struct perf_event *bp, bool enable, enum bp_type_idx type, /* Pinned counter cpu profiling */ if (!bp->hw.target) { + lockdep_assert_held_write(&bp_cpuinfo_sem); get_bp_info(bp->cpu, type)->cpu_pinned += weight; return 0; } @@ -249,6 +345,11 @@ toggle_bp_slot(struct perf_event *bp, bool enable, enum bp_type_idx type, for_each_cpu(cpu, cpumask) toggle_bp_task_slot(bp, cpu, type, weight); + /* + * Readers want a stable snapshot of the per-task breakpoint list. + */ + assert_bp_constraints_lock_held(bp); + if (enable) return rhltable_insert(&task_bps_ht, &bp->hw.bp_list, task_bps_ht_params); else @@ -354,14 +455,10 @@ static int __reserve_bp_slot(struct perf_event *bp, u64 bp_type) int reserve_bp_slot(struct perf_event *bp) { - int ret; - - mutex_lock(&nr_bp_mutex); - - ret = __reserve_bp_slot(bp, bp->attr.bp_type); - - mutex_unlock(&nr_bp_mutex); + struct mutex *mtx = bp_constraints_lock(bp); + int ret = __reserve_bp_slot(bp, bp->attr.bp_type); + bp_constraints_unlock(mtx); return ret; } @@ -379,12 +476,11 @@ static void __release_bp_slot(struct perf_event *bp, u64 bp_type) void release_bp_slot(struct perf_event *bp) { - mutex_lock(&nr_bp_mutex); + struct mutex *mtx = bp_constraints_lock(bp); arch_unregister_hw_breakpoint(bp); __release_bp_slot(bp, bp->attr.bp_type); - - mutex_unlock(&nr_bp_mutex); + bp_constraints_unlock(mtx); } static int __modify_bp_slot(struct perf_event *bp, u64 old_type, u64 new_type) @@ -411,11 +507,10 @@ static int __modify_bp_slot(struct perf_event *bp, u64 old_type, u64 new_type) static int modify_bp_slot(struct perf_event *bp, u64 old_type, u64 new_type) { - int ret; + struct mutex *mtx = bp_constraints_lock(bp); + int ret = __modify_bp_slot(bp, old_type, new_type); - mutex_lock(&nr_bp_mutex); - ret = __modify_bp_slot(bp, old_type, new_type); - mutex_unlock(&nr_bp_mutex); + bp_constraints_unlock(mtx); return ret; } @@ -426,18 +521,28 @@ static int modify_bp_slot(struct perf_event *bp, u64 old_type, u64 new_type) */ int dbg_reserve_bp_slot(struct perf_event *bp) { - if (mutex_is_locked(&nr_bp_mutex)) + int ret; + + if (bp_constraints_is_locked(bp)) return -1; - return __reserve_bp_slot(bp, bp->attr.bp_type); + /* Locks aren't held; disable lockdep assert checking. */ + lockdep_off(); + ret = __reserve_bp_slot(bp, bp->attr.bp_type); + lockdep_on(); + + return ret; } int dbg_release_bp_slot(struct perf_event *bp) { - if (mutex_is_locked(&nr_bp_mutex)) + if (bp_constraints_is_locked(bp)) return -1; + /* Locks aren't held; disable lockdep assert checking. 
*/ + lockdep_off(); __release_bp_slot(bp, bp->attr.bp_type); + lockdep_on(); return 0; } @@ -663,7 +768,7 @@ bool hw_breakpoint_is_used(void) return true; for (int slot = 0; slot < hw_breakpoint_slots_cached(type); ++slot) { - if (info->tsk_pinned[slot]) + if (atomic_read(&info->tsk_pinned[slot])) return true; } } From patchwork Mon Aug 29 12:47:17 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marco Elver X-Patchwork-Id: 12957762 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E73BCECAAD4 for ; Mon, 29 Aug 2022 12:57:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229895AbiH2M5g (ORCPT ); Mon, 29 Aug 2022 08:57:36 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54260 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230035AbiH2M4f (ORCPT ); Mon, 29 Aug 2022 08:56:35 -0400 Received: from mail-ed1-x549.google.com (mail-ed1-x549.google.com [IPv6:2a00:1450:4864:20::549]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BB8B42AE4 for ; Mon, 29 Aug 2022 05:48:32 -0700 (PDT) Received: by mail-ed1-x549.google.com with SMTP id ee47-20020a056402292f00b004486550972aso2033032edb.1 for ; Mon, 29 Aug 2022 05:48:32 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc; bh=ALPSUyHQQrXCFAhvO3GrH4UhqRd+C7lu7ougUmdRQTc=; b=q/g+BEJNCUTLPiz8396cNH9olRGtWS638cpfrFVFSi612qvfENTt46zL1qFRyM+pmh 77lwXRIbI2SG1T3iQuzznwog069UNsx7j79yKDYPfEIIE8KOoqaph+6cL7X0YRyM3h7s 6xH/A55Fw48B8qCpLQA64b/9gVxx62EPmCUmMzOvP63xQUx6KJhIaQb9PuP16K5N6KXh UylovVVaoQar0XH8pDPKFHzTrwqH2WLw2bH/7oRlSvsY03DaZYAnipwBLKwKFlYBGloT ktV9ihc4nRfEdgpS9AyVL6azxWpeM66rwWpDKYB+yd0l6fDSNiVQsh58ByWWoTPdIfq8 5uaA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc; bh=ALPSUyHQQrXCFAhvO3GrH4UhqRd+C7lu7ougUmdRQTc=; b=LAnQW7OhMUp/XmGZF5MpeujjOTl5rvweOe55M1Whczd/jpsDHxpQYTPr0tJDmi09n5 yiAhCUve2/hKpr6FLJoLySyY/y4IcGn8NiXYqmYP2Ce3bCUMHC6tUWfipEqwO5Toa2uJ 7qBhAaSRZaETtce5XYqqx6Bz9zL3e0S3HaHIl+gnLH90tFuPy2hN1GJ2QuzVm6ulLl3k Yd8OGa1Oqp7FMeWFFf4agg80kGgptdmZ0duLSyK9cQSEYnchG0OfRG+h+MYF+mxNOP+U satcNBVTySoamQiXY4ILRr9zPodOwMapkYjXyqvOBAk6/VgxMlrrKC99TLNNL+xF6kbt /fOA== X-Gm-Message-State: ACgBeo2DMLgz0wx/X1/hBnFJTjbqPx7LasjusoKHowKy4xK5P0G5xDzf CPGzfk5CT+OAW88n/EjGYEjNXDtbcQ== X-Google-Smtp-Source: AA6agR50o74ooBgV57arMCJWxWak0fy/La9f8bONjy2t3gGS6dKZKpdmoPpW1tWOfGyh6NMettd3KBAJhg== X-Received: from elver.muc.corp.google.com ([2a00:79e0:9c:201:196d:4fc7:fa9c:62e3]) (user=elver job=sendgmr) by 2002:a05:6402:e94:b0:443:e3fe:7c87 with SMTP id h20-20020a0564020e9400b00443e3fe7c87mr16853908eda.144.1661777311215; Mon, 29 Aug 2022 05:48:31 -0700 (PDT) Date: Mon, 29 Aug 2022 14:47:17 +0200 In-Reply-To: <20220829124719.675715-1-elver@google.com> Mime-Version: 1.0 References: <20220829124719.675715-1-elver@google.com> X-Mailer: git-send-email 2.37.2.672.g94769d06f0-goog Message-ID: <20220829124719.675715-13-elver@google.com> Subject: [PATCH v4 12/14] perf/hw_breakpoint: Introduce bp_slots_histogram From: Marco Elver To: elver@google.com, Peter Zijlstra , Frederic 
Weisbecker , Ingo Molnar Cc: Thomas Gleixner , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , Dmitry Vyukov , Michael Ellerman , linuxppc-dev@lists.ozlabs.org, linux-perf-users@vger.kernel.org, x86@kernel.org, linux-sh@vger.kernel.org, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org, Ian Rogers Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org Factor out the existing `atomic_t count[N]` into its own struct called 'bp_slots_histogram', to generalize and make its intent clearer in preparation of reusing elsewhere. The basic idea of bucketing "total uses of N slots" resembles a histogram, so calling it such seems most intuitive. No functional change. Signed-off-by: Marco Elver Reviewed-by: Dmitry Vyukov Acked-by: Ian Rogers --- v3: * Also warn in bp_slots_histogram_add() if count goes below 0. v2: * New patch. --- kernel/events/hw_breakpoint.c | 96 +++++++++++++++++++++++------------ 1 file changed, 63 insertions(+), 33 deletions(-) diff --git a/kernel/events/hw_breakpoint.c b/kernel/events/hw_breakpoint.c index 229c6f4fae75..03ebecf048c0 100644 --- a/kernel/events/hw_breakpoint.c +++ b/kernel/events/hw_breakpoint.c @@ -36,19 +36,27 @@ #include /* - * Constraints data + * Datastructure to track the total uses of N slots across tasks or CPUs; + * bp_slots_histogram::count[N] is the number of assigned N+1 breakpoint slots. */ -struct bp_cpuinfo { - /* Number of pinned cpu breakpoints in a cpu */ - unsigned int cpu_pinned; - /* tsk_pinned[n] is the number of tasks having n+1 breakpoints */ +struct bp_slots_histogram { #ifdef hw_breakpoint_slots - atomic_t tsk_pinned[hw_breakpoint_slots(0)]; + atomic_t count[hw_breakpoint_slots(0)]; #else - atomic_t *tsk_pinned; + atomic_t *count; #endif }; +/* + * Per-CPU constraints data. + */ +struct bp_cpuinfo { + /* Number of pinned CPU breakpoints in a CPU. */ + unsigned int cpu_pinned; + /* Histogram of pinned task breakpoints in a CPU. 
*/ + struct bp_slots_histogram tsk_pinned; +}; + static DEFINE_PER_CPU(struct bp_cpuinfo, bp_cpuinfo[TYPE_MAX]); static struct bp_cpuinfo *get_bp_info(int cpu, enum bp_type_idx type) @@ -159,6 +167,18 @@ static inline int hw_breakpoint_slots_cached(int type) return __nr_bp_slots[type]; } +static __init bool +bp_slots_histogram_alloc(struct bp_slots_histogram *hist, enum bp_type_idx type) +{ + hist->count = kcalloc(hw_breakpoint_slots_cached(type), sizeof(*hist->count), GFP_KERNEL); + return hist->count; +} + +static __init void bp_slots_histogram_free(struct bp_slots_histogram *hist) +{ + kfree(hist->count); +} + static __init int init_breakpoint_slots(void) { int i, cpu, err_cpu; @@ -170,8 +190,7 @@ static __init int init_breakpoint_slots(void) for (i = 0; i < TYPE_MAX; i++) { struct bp_cpuinfo *info = get_bp_info(cpu, i); - info->tsk_pinned = kcalloc(__nr_bp_slots[i], sizeof(atomic_t), GFP_KERNEL); - if (!info->tsk_pinned) + if (!bp_slots_histogram_alloc(&info->tsk_pinned, i)) goto err; } } @@ -180,7 +199,7 @@ static __init int init_breakpoint_slots(void) err: for_each_possible_cpu(err_cpu) { for (i = 0; i < TYPE_MAX; i++) - kfree(get_bp_info(err_cpu, i)->tsk_pinned); + bp_slots_histogram_free(&get_bp_info(err_cpu, i)->tsk_pinned); if (err_cpu == cpu) break; } @@ -189,6 +208,34 @@ static __init int init_breakpoint_slots(void) } #endif +static inline void +bp_slots_histogram_add(struct bp_slots_histogram *hist, int old, int val) +{ + const int old_idx = old - 1; + const int new_idx = old_idx + val; + + if (old_idx >= 0) + WARN_ON(atomic_dec_return_relaxed(&hist->count[old_idx]) < 0); + if (new_idx >= 0) + WARN_ON(atomic_inc_return_relaxed(&hist->count[new_idx]) < 0); +} + +static int +bp_slots_histogram_max(struct bp_slots_histogram *hist, enum bp_type_idx type) +{ + for (int i = hw_breakpoint_slots_cached(type) - 1; i >= 0; i--) { + const int count = atomic_read(&hist->count[i]); + + /* Catch unexpected writers; we want a stable snapshot. */ + ASSERT_EXCLUSIVE_WRITER(hist->count[i]); + if (count > 0) + return i + 1; + WARN(count < 0, "inconsistent breakpoint slots histogram"); + } + + return 0; +} + #ifndef hw_breakpoint_weight static inline int hw_breakpoint_weight(struct perf_event *bp) { @@ -205,13 +252,11 @@ static inline enum bp_type_idx find_slot_idx(u64 bp_type) } /* - * Report the maximum number of pinned breakpoints a task - * have in this cpu + * Return the maximum number of pinned breakpoints a task has in this CPU. */ static unsigned int max_task_bp_pinned(int cpu, enum bp_type_idx type) { - atomic_t *tsk_pinned = get_bp_info(cpu, type)->tsk_pinned; - int i; + struct bp_slots_histogram *tsk_pinned = &get_bp_info(cpu, type)->tsk_pinned; /* * At this point we want to have acquired the bp_cpuinfo_sem as a @@ -219,14 +264,7 @@ static unsigned int max_task_bp_pinned(int cpu, enum bp_type_idx type) * toggle_bp_task_slot() to tsk_pinned, and we get a stable snapshot. */ lockdep_assert_held_write(&bp_cpuinfo_sem); - - for (i = hw_breakpoint_slots_cached(type) - 1; i >= 0; i--) { - ASSERT_EXCLUSIVE_WRITER(tsk_pinned[i]); /* Catch unexpected writers. 
*/ - if (atomic_read(&tsk_pinned[i]) > 0) - return i + 1; - } - - return 0; + return bp_slots_histogram_max(tsk_pinned, type); } /* @@ -300,8 +338,7 @@ max_bp_pinned_slots(struct perf_event *bp, enum bp_type_idx type) static void toggle_bp_task_slot(struct perf_event *bp, int cpu, enum bp_type_idx type, int weight) { - atomic_t *tsk_pinned = get_bp_info(cpu, type)->tsk_pinned; - int old_idx, new_idx; + struct bp_slots_histogram *tsk_pinned = &get_bp_info(cpu, type)->tsk_pinned; /* * If bp->hw.target, tsk_pinned is only modified, but not used @@ -311,14 +348,7 @@ static void toggle_bp_task_slot(struct perf_event *bp, int cpu, * bp_cpuinfo_sem as a writer to stabilize tsk_pinned's value. */ lockdep_assert_held_read(&bp_cpuinfo_sem); - - old_idx = task_bp_pinned(cpu, bp, type) - 1; - new_idx = old_idx + weight; - - if (old_idx >= 0) - atomic_dec(&tsk_pinned[old_idx]); - if (new_idx >= 0) - atomic_inc(&tsk_pinned[new_idx]); + bp_slots_histogram_add(tsk_pinned, task_bp_pinned(cpu, bp, type), weight); } /* @@ -768,7 +798,7 @@ bool hw_breakpoint_is_used(void) return true; for (int slot = 0; slot < hw_breakpoint_slots_cached(type); ++slot) { - if (atomic_read(&info->tsk_pinned[slot])) + if (atomic_read(&info->tsk_pinned.count[slot])) return true; } } From patchwork Mon Aug 29 12:47:18 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marco Elver X-Patchwork-Id: 12957763 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6C623ECAAD5 for ; Mon, 29 Aug 2022 12:57:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230129AbiH2M5i (ORCPT ); Mon, 29 Aug 2022 08:57:38 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54666 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230055AbiH2M4g (ORCPT ); Mon, 29 Aug 2022 08:56:36 -0400 Received: from mail-wm1-x349.google.com (mail-wm1-x349.google.com [IPv6:2a00:1450:4864:20::349]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A3F556416 for ; Mon, 29 Aug 2022 05:48:35 -0700 (PDT) Received: by mail-wm1-x349.google.com with SMTP id ay27-20020a05600c1e1b00b003a5bff0df8dso5968920wmb.0 for ; Mon, 29 Aug 2022 05:48:35 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc; bh=tHogyOz5U0y7+H4DYPxiXG1ukMrOYP0VQiSgNpC/zOM=; b=rToQIqGvSV9gtHxU6/haFwVvhDbNuIXIAJ8mLeS71jKMZ8uIbmDa6I+Xx8uSHEYE2Q 8TUo8uYJtrXeNC0kju/+pAa3u6BcejatiMjxerNjN7jXaAgPisxjmZkH9iC3/4zTdN76 UfqrMKYs5pXmcItmuiIasOQ8UNASt288FZ/xBL2p/GC29D1+vcw2OBOIedIsdKH3yJgE Fud5R+reC/TBdAdfycV0lDis2b5V33c/1lglvQfxKDhti08R2zRYzOBWAqongTkbFUwB uUR7Kq3u7r7eXXozNcGskPFBrQXEyi/grPIc8n03qsJr2WVzx0H/ouS+FYb3au1Ss2xN Q0GQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc; bh=tHogyOz5U0y7+H4DYPxiXG1ukMrOYP0VQiSgNpC/zOM=; b=TXhHS/TzPRVD1s1iUZj56R5LsayeXs7yBXleSRR1IFxQnirXt/oXh0kkCVZ76X+vEF nQuSR03xYqKIDvjmW9qxfcQFfhBlm10/cfe04PcgC6zBz1BVkLm44m2LoZWu8lmesofa 9lQQLjg8vfHkT75ajSdPc9McArPpOVUlfu036hcw98ljODZVPKQ22ly3q8P5f+EjkZdA CnjpvPMKtMe+74SAvwYlgIV2105lfcm9B9FDm5UlOh5C1dDU9bPb65wmPDFaSQMH4ewc 
CmTehEFBRjfQRTvgu3Ygtgmzhb6x89ehXL5vEjr61G9qZDGYqiWmEfp93miBLvTmVBZf svSw== X-Gm-Message-State: ACgBeo2+W/I0WVADAk3dpIt3FebS6Wwan74eWx3YE+r9O1ONwJ1qpuqp Bvr6q0NA/qcVQSXD9TKX2azI+4BQyQ== X-Google-Smtp-Source: AA6agR6mjLHHe6v8lJspMAXsC84vRaEP6nUqGZxInQUwQrF8DKnZLiGJY7VnOBMh0yWsleS0xeEpvdvhgQ== X-Received: from elver.muc.corp.google.com ([2a00:79e0:9c:201:196d:4fc7:fa9c:62e3]) (user=elver job=sendgmr) by 2002:a5d:5048:0:b0:226:df3b:72f4 with SMTP id h8-20020a5d5048000000b00226df3b72f4mr1036162wrt.205.1661777314053; Mon, 29 Aug 2022 05:48:34 -0700 (PDT) Date: Mon, 29 Aug 2022 14:47:18 +0200 In-Reply-To: <20220829124719.675715-1-elver@google.com> Mime-Version: 1.0 References: <20220829124719.675715-1-elver@google.com> X-Mailer: git-send-email 2.37.2.672.g94769d06f0-goog Message-ID: <20220829124719.675715-14-elver@google.com> Subject: [PATCH v4 13/14] perf/hw_breakpoint: Optimize max_bp_pinned_slots() for CPU-independent task targets From: Marco Elver To: elver@google.com, Peter Zijlstra , Frederic Weisbecker , Ingo Molnar Cc: Thomas Gleixner , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , Dmitry Vyukov , Michael Ellerman , linuxppc-dev@lists.ozlabs.org, linux-perf-users@vger.kernel.org, x86@kernel.org, linux-sh@vger.kernel.org, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org, Ian Rogers Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org Running the perf benchmark with (note: more aggressive parameters vs. preceding changes, but same 256 CPUs host): | $> perf bench -r 100 breakpoint thread -b 4 -p 128 -t 512 | # Running 'breakpoint/thread' benchmark: | # Created/joined 100 threads with 4 breakpoints and 128 parallelism | Total time: 1.989 [sec] | | 38.854160 usecs/op | 4973.332500 usecs/op/cpu 20.43% [kernel] [k] queued_spin_lock_slowpath 18.75% [kernel] [k] osq_lock 16.98% [kernel] [k] rhashtable_jhash2 8.34% [kernel] [k] task_bp_pinned 4.23% [kernel] [k] smp_cfm_core_cond 3.65% [kernel] [k] bcmp 2.83% [kernel] [k] toggle_bp_slot 1.87% [kernel] [k] find_next_bit 1.49% [kernel] [k] __reserve_bp_slot We can see that a majority of the time is now spent hashing task pointers to index into task_bps_ht in task_bp_pinned(). Obtaining the max_bp_pinned_slots() for CPU-independent task targets currently is O(#cpus), and calls task_bp_pinned() for each CPU, even if the result of task_bp_pinned() is CPU-independent. The loop in max_bp_pinned_slots() wants to compute the maximum slots across all CPUs. If task_bp_pinned() is CPU-independent, we can do so by obtaining the max slots across all CPUs and adding task_bp_pinned(). To do so in O(1), use a bp_slots_histogram for CPU-pinned slots. After this optimization: | $> perf bench -r 100 breakpoint thread -b 4 -p 128 -t 512 | # Running 'breakpoint/thread' benchmark: | # Created/joined 100 threads with 4 breakpoints and 128 parallelism | Total time: 1.930 [sec] | | 37.697832 usecs/op | 4825.322500 usecs/op/cpu 19.13% [kernel] [k] queued_spin_lock_slowpath 18.21% [kernel] [k] rhashtable_jhash2 15.46% [kernel] [k] osq_lock 6.27% [kernel] [k] toggle_bp_slot 5.91% [kernel] [k] task_bp_pinned 5.05% [kernel] [k] smp_cfm_core_cond 1.78% [kernel] [k] update_sg_lb_stats 1.36% [kernel] [k] llist_reverse_order 1.34% [kernel] [k] find_next_bit 1.19% [kernel] [k] bcmp Suggesting that time spent in task_bp_pinned() has been reduced. However, we're still hashing too much, which will be addressed in the subsequent change. 
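As a concrete illustration of the fast path (the numbers are made up; the helpers referenced are those added in the diff below):

/*
 * Hypothetical state, 4 CPUs, one breakpoint type:
 *
 *   CPU-pinned breakpoints per CPU:  CPU0=1  CPU1=0  CPU2=3  CPU3=1
 *   global cpu_pinned histogram:     count[0]=2  count[1]=0  count[2]=1
 *
 *   bp_slots_histogram_max(&cpu_pinned[type], type) == 3      (O(1))
 *   task_bp_pinned(-1, bp, type)                     == 2      (CPU-independent)
 *   max_bp_pinned_slots(bp, type)                    == 3 + 2  == 5
 *
 * i.e. the maximum is obtained without iterating the CPU mask; the caller
 * then adds the new breakpoint's weight and checks the sum against the
 * number of available slots.
 */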
Signed-off-by: Marco Elver Reviewed-by: Dmitry Vyukov Acked-by: Ian Rogers --- v3: * Update hw_breakpoint_is_used() to include global cpu_pinned. v2: * New patch. --- kernel/events/hw_breakpoint.c | 57 ++++++++++++++++++++++++++++++++--- 1 file changed, 53 insertions(+), 4 deletions(-) diff --git a/kernel/events/hw_breakpoint.c b/kernel/events/hw_breakpoint.c index 03ebecf048c0..a489f31fe147 100644 --- a/kernel/events/hw_breakpoint.c +++ b/kernel/events/hw_breakpoint.c @@ -64,6 +64,9 @@ static struct bp_cpuinfo *get_bp_info(int cpu, enum bp_type_idx type) return per_cpu_ptr(bp_cpuinfo + type, cpu); } +/* Number of pinned CPU breakpoints globally. */ +static struct bp_slots_histogram cpu_pinned[TYPE_MAX]; + /* Keep track of the breakpoints attached to tasks */ static struct rhltable task_bps_ht; static const struct rhashtable_params task_bps_ht_params = { @@ -194,6 +197,10 @@ static __init int init_breakpoint_slots(void) goto err; } } + for (i = 0; i < TYPE_MAX; i++) { + if (!bp_slots_histogram_alloc(&cpu_pinned[i], i)) + goto err; + } return 0; err: @@ -203,6 +210,8 @@ static __init int init_breakpoint_slots(void) if (err_cpu == cpu) break; } + for (i = 0; i < TYPE_MAX; i++) + bp_slots_histogram_free(&cpu_pinned[i]); return -ENOMEM; } @@ -270,6 +279,9 @@ static unsigned int max_task_bp_pinned(int cpu, enum bp_type_idx type) /* * Count the number of breakpoints of the same type and same task. * The given event must be not on the list. + * + * If @cpu is -1, but the result of task_bp_pinned() is not CPU-independent, + * returns a negative value. */ static int task_bp_pinned(int cpu, struct perf_event *bp, enum bp_type_idx type) { @@ -288,9 +300,18 @@ static int task_bp_pinned(int cpu, struct perf_event *bp, enum bp_type_idx type) goto out; rhl_for_each_entry_rcu(iter, pos, head, hw.bp_list) { - if (find_slot_idx(iter->attr.bp_type) == type && - (iter->cpu < 0 || cpu == iter->cpu)) - count += hw_breakpoint_weight(iter); + if (find_slot_idx(iter->attr.bp_type) != type) + continue; + + if (iter->cpu >= 0) { + if (cpu == -1) { + count = -1; + goto out; + } else if (cpu != iter->cpu) + continue; + } + + count += hw_breakpoint_weight(iter); } out: @@ -316,6 +337,19 @@ max_bp_pinned_slots(struct perf_event *bp, enum bp_type_idx type) int pinned_slots = 0; int cpu; + if (bp->hw.target && bp->cpu < 0) { + int max_pinned = task_bp_pinned(-1, bp, type); + + if (max_pinned >= 0) { + /* + * Fast path: task_bp_pinned() is CPU-independent and + * returns the same value for any CPU. + */ + max_pinned += bp_slots_histogram_max(&cpu_pinned[type], type); + return max_pinned; + } + } + for_each_cpu(cpu, cpumask) { struct bp_cpuinfo *info = get_bp_info(cpu, type); int nr; @@ -366,8 +400,11 @@ toggle_bp_slot(struct perf_event *bp, bool enable, enum bp_type_idx type, /* Pinned counter cpu profiling */ if (!bp->hw.target) { + struct bp_cpuinfo *info = get_bp_info(bp->cpu, type); + lockdep_assert_held_write(&bp_cpuinfo_sem); - get_bp_info(bp->cpu, type)->cpu_pinned += weight; + bp_slots_histogram_add(&cpu_pinned[type], info->cpu_pinned, weight); + info->cpu_pinned += weight; return 0; } @@ -804,6 +841,18 @@ bool hw_breakpoint_is_used(void) } } + for (int type = 0; type < TYPE_MAX; ++type) { + for (int slot = 0; slot < hw_breakpoint_slots_cached(type); ++slot) { + /* + * Warn, because if there are CPU pinned counters, + * should never get here; bp_cpuinfo::cpu_pinned should + * be consistent with the global cpu_pinned histogram. 
+ */ + if (WARN_ON(atomic_read(&cpu_pinned[type].count[slot]))) + return true; + } + } + return false; } From patchwork Mon Aug 29 12:47:19 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marco Elver X-Patchwork-Id: 12957764 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3E491ECAAD2 for ; Mon, 29 Aug 2022 12:58:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230497AbiH2M57 (ORCPT ); Mon, 29 Aug 2022 08:57:59 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49434 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230106AbiH2M4i (ORCPT ); Mon, 29 Aug 2022 08:56:38 -0400 Received: from mail-ej1-x649.google.com (mail-ej1-x649.google.com [IPv6:2a00:1450:4864:20::649]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 88C401CFD1 for ; Mon, 29 Aug 2022 05:48:38 -0700 (PDT) Received: by mail-ej1-x649.google.com with SMTP id ho13-20020a1709070e8d00b00730a655e173so2254908ejc.8 for ; Mon, 29 Aug 2022 05:48:38 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc; bh=0cyxDQ8FzaGmiYK/SL7v1Jw9OhmpUGtWUZkd+KNQhJo=; b=tfLpVgevgyUWCpinr/bPJen1Y7mxv7T2OUNqzxFHqoPZGjRgPnwKFoI0kap5nsBt0w ooKdx1sI5YTaNEWYrjriGyKIwFqWmLD2O/kp+H8scFcFeAx+xnbE12Jo+2sfGcLnRhWB dfXPBFE/rxUI1E30CJkUnvBfXyDjuqYGEQEYH0jbBxk4H1UelVhPEnT5X/Bz6OSWIc+O Dec9M1ZG0OeRSScfC/UyIJ3a2b1s/LmOBlFgaj1L58qLZNWXeHjTSLec/eOLhZVot8La FUwzUQ+SSzhHNyUB8s1JjCNF8dHmNXlpCeZFxvTeseNlsWNGj2/y7SPaIhpBlhmIMaax UjCw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc; bh=0cyxDQ8FzaGmiYK/SL7v1Jw9OhmpUGtWUZkd+KNQhJo=; b=19/BFAKT53ztE1ghjuRJVUTNpTeFCP6bpwqmPBJT7fvKh1gZJ7A5UDfhf7tpARL9wL O0zCkJQja91gqaZnmbvXd++/LUw6pmDotBgO/XGoB6OWv0nNTYC7s+pkQgXISEnenIiI 0ZGfcxyJuWOp5juT9+O/D4nbnvnkBSRKUi5ehXfZQYNnjQreHF0tVbb+YILoUmVWfkFo 9875iKqrp46TS+BXbkn6/hR4IeEvvOdZ3mchqA3/I963ln387VjHKorz8dPRs660cX8N HGfAXGeIuY9xk3E231y37xkHKawIzEOZm4EBz02rZjMsJZbueVCZBjohMqINWVFZwSBb tthg== X-Gm-Message-State: ACgBeo0UDVzfcJgaf2ndVwNnk/GqqxXyDqBjBkcWv/UWzzi7Q5cYAaiU jb8+25QGVN/N4RL4FguPQrqmlhJ7aw== X-Google-Smtp-Source: AA6agR5c717UUDLHVSPscLptOF/303IwMuHB9YeDPbTB6DVXEAk7Xq1Cmp+NclMnkLhW/SWnsJ6oHZj/+Q== X-Received: from elver.muc.corp.google.com ([2a00:79e0:9c:201:196d:4fc7:fa9c:62e3]) (user=elver job=sendgmr) by 2002:a17:907:7b9f:b0:741:9ae3:89a6 with SMTP id ne31-20020a1709077b9f00b007419ae389a6mr2487790ejc.311.1661777316752; Mon, 29 Aug 2022 05:48:36 -0700 (PDT) Date: Mon, 29 Aug 2022 14:47:19 +0200 In-Reply-To: <20220829124719.675715-1-elver@google.com> Mime-Version: 1.0 References: <20220829124719.675715-1-elver@google.com> X-Mailer: git-send-email 2.37.2.672.g94769d06f0-goog Message-ID: <20220829124719.675715-15-elver@google.com> Subject: [PATCH v4 14/14] perf/hw_breakpoint: Optimize toggle_bp_slot() for CPU-independent task targets From: Marco Elver To: elver@google.com, Peter Zijlstra , Frederic Weisbecker , Ingo Molnar Cc: Thomas Gleixner , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , Dmitry Vyukov , Michael Ellerman , 
linuxppc-dev@lists.ozlabs.org, linux-perf-users@vger.kernel.org, x86@kernel.org, linux-sh@vger.kernel.org, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org, Ian Rogers Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org We can still see that a majority of the time is spent hashing task pointers: ... 16.98% [kernel] [k] rhashtable_jhash2 ... Doing the bookkeeping in toggle_bp_slot() is currently O(#cpus), calling task_bp_pinned() for each CPU, even if task_bp_pinned() is CPU-independent. The reason for this is to update the per-CPU 'tsk_pinned' histogram. To optimize the CPU-independent case to O(1), keep a separate CPU-independent 'tsk_pinned_all' histogram. The major source of complexity lies in the transitions between "all CPU-independent task breakpoints" and "mixed CPU-independent and CPU-dependent task breakpoints". The code comments list all cases that require handling. After this optimization: | $> perf bench -r 100 breakpoint thread -b 4 -p 128 -t 512 | # Running 'breakpoint/thread' benchmark: | # Created/joined 100 threads with 4 breakpoints and 128 parallelism | Total time: 1.758 [sec] | | 34.336621 usecs/op | 4395.087500 usecs/op/cpu 38.08% [kernel] [k] queued_spin_lock_slowpath 10.81% [kernel] [k] smp_cfm_core_cond 3.01% [kernel] [k] update_sg_lb_stats 2.58% [kernel] [k] osq_lock 2.57% [kernel] [k] llist_reverse_order 1.45% [kernel] [k] find_next_bit 1.21% [kernel] [k] flush_tlb_func_common 1.01% [kernel] [k] arch_install_hw_breakpoint This shows that the time spent hashing keys has become insignificant. With the given benchmark parameters, that's an improvement of 12% compared with the old O(#cpus) version. And finally, using the less aggressive parameters from the preceding changes, we now observe: | $> perf bench -r 30 breakpoint thread -b 4 -p 64 -t 64 | # Running 'breakpoint/thread' benchmark: | # Created/joined 30 threads with 4 breakpoints and 64 parallelism | Total time: 0.067 [sec] | | 35.292187 usecs/op | 2258.700000 usecs/op/cpu That is an improvement of 12% compared to the version without the histogram optimizations (baseline is 40 usecs/op). This is now on par with the theoretical ideal (constraints disabled), and only 12% slower than no breakpoints at all. Signed-off-by: Marco Elver Reviewed-by: Dmitry Vyukov Acked-by: Ian Rogers --- v3: * Fix typo "5 cases" -> "4 cases". * Update hw_breakpoint_is_used() to check tsk_pinned_all. v2: * New patch. --- kernel/events/hw_breakpoint.c | 155 +++++++++++++++++++++++++++------- 1 file changed, 124 insertions(+), 31 deletions(-) diff --git a/kernel/events/hw_breakpoint.c b/kernel/events/hw_breakpoint.c index a489f31fe147..7ef0e98d31e2 100644 --- a/kernel/events/hw_breakpoint.c +++ b/kernel/events/hw_breakpoint.c @@ -66,6 +66,8 @@ static struct bp_cpuinfo *get_bp_info(int cpu, enum bp_type_idx type) /* Number of pinned CPU breakpoints globally. */ static struct bp_slots_histogram cpu_pinned[TYPE_MAX]; +/* Number of pinned CPU-independent task breakpoints.
*/ +static struct bp_slots_histogram tsk_pinned_all[TYPE_MAX]; /* Keep track of the breakpoints attached to tasks */ static struct rhltable task_bps_ht; @@ -200,6 +202,8 @@ static __init int init_breakpoint_slots(void) for (i = 0; i < TYPE_MAX; i++) { if (!bp_slots_histogram_alloc(&cpu_pinned[i], i)) goto err; + if (!bp_slots_histogram_alloc(&tsk_pinned_all[i], i)) + goto err; } return 0; @@ -210,8 +214,10 @@ static __init int init_breakpoint_slots(void) if (err_cpu == cpu) break; } - for (i = 0; i < TYPE_MAX; i++) + for (i = 0; i < TYPE_MAX; i++) { bp_slots_histogram_free(&cpu_pinned[i]); + bp_slots_histogram_free(&tsk_pinned_all[i]); + } return -ENOMEM; } @@ -245,6 +251,26 @@ bp_slots_histogram_max(struct bp_slots_histogram *hist, enum bp_type_idx type) return 0; } +static int +bp_slots_histogram_max_merge(struct bp_slots_histogram *hist1, struct bp_slots_histogram *hist2, + enum bp_type_idx type) +{ + for (int i = hw_breakpoint_slots_cached(type) - 1; i >= 0; i--) { + const int count1 = atomic_read(&hist1->count[i]); + const int count2 = atomic_read(&hist2->count[i]); + + /* Catch unexpected writers; we want a stable snapshot. */ + ASSERT_EXCLUSIVE_WRITER(hist1->count[i]); + ASSERT_EXCLUSIVE_WRITER(hist2->count[i]); + if (count1 + count2 > 0) + return i + 1; + WARN(count1 < 0, "inconsistent breakpoint slots histogram"); + WARN(count2 < 0, "inconsistent breakpoint slots histogram"); + } + + return 0; +} + #ifndef hw_breakpoint_weight static inline int hw_breakpoint_weight(struct perf_event *bp) { @@ -273,7 +299,7 @@ static unsigned int max_task_bp_pinned(int cpu, enum bp_type_idx type) * toggle_bp_task_slot() to tsk_pinned, and we get a stable snapshot. */ lockdep_assert_held_write(&bp_cpuinfo_sem); - return bp_slots_histogram_max(tsk_pinned, type); + return bp_slots_histogram_max_merge(tsk_pinned, &tsk_pinned_all[type], type); } /* @@ -366,40 +392,22 @@ max_bp_pinned_slots(struct perf_event *bp, enum bp_type_idx type) return pinned_slots; } -/* - * Add a pinned breakpoint for the given task in our constraint table - */ -static void toggle_bp_task_slot(struct perf_event *bp, int cpu, - enum bp_type_idx type, int weight) -{ - struct bp_slots_histogram *tsk_pinned = &get_bp_info(cpu, type)->tsk_pinned; - - /* - * If bp->hw.target, tsk_pinned is only modified, but not used - * otherwise. We can permit concurrent updates as long as there are no - * other uses: having acquired bp_cpuinfo_sem as a reader allows - * concurrent updates here. Uses of tsk_pinned will require acquiring - * bp_cpuinfo_sem as a writer to stabilize tsk_pinned's value. - */ - lockdep_assert_held_read(&bp_cpuinfo_sem); - bp_slots_histogram_add(tsk_pinned, task_bp_pinned(cpu, bp, type), weight); -} - /* * Add/remove the given breakpoint in our constraint table */ static int -toggle_bp_slot(struct perf_event *bp, bool enable, enum bp_type_idx type, - int weight) +toggle_bp_slot(struct perf_event *bp, bool enable, enum bp_type_idx type, int weight) { - const struct cpumask *cpumask = cpumask_of_bp(bp); - int cpu; + int cpu, next_tsk_pinned; if (!enable) weight = -weight; - /* Pinned counter cpu profiling */ if (!bp->hw.target) { + /* + * Update the pinned CPU slots, in per-CPU bp_cpuinfo and in the + * global histogram. 
+ */ struct bp_cpuinfo *info = get_bp_info(bp->cpu, type); lockdep_assert_held_write(&bp_cpuinfo_sem); @@ -408,9 +416,91 @@ toggle_bp_slot(struct perf_event *bp, bool enable, enum bp_type_idx type, return 0; } - /* Pinned counter task profiling */ - for_each_cpu(cpu, cpumask) - toggle_bp_task_slot(bp, cpu, type, weight); + /* + * If bp->hw.target, tsk_pinned is only modified, but not used + * otherwise. We can permit concurrent updates as long as there are no + * other uses: having acquired bp_cpuinfo_sem as a reader allows + * concurrent updates here. Uses of tsk_pinned will require acquiring + * bp_cpuinfo_sem as a writer to stabilize tsk_pinned's value. + */ + lockdep_assert_held_read(&bp_cpuinfo_sem); + + /* + * Update the pinned task slots, in per-CPU bp_cpuinfo and in the global + * histogram. We need to take care of 4 cases: + * + * 1. This breakpoint targets all CPUs (cpu < 0), and there may only + * exist other task breakpoints targeting all CPUs. In this case we + * can simply update the global slots histogram. + * + * 2. This breakpoint targets a specific CPU (cpu >= 0), but there may + * only exist other task breakpoints targeting all CPUs. + * + * a. On enable: remove the existing breakpoints from the global + * slots histogram and use the per-CPU histogram. + * + * b. On disable: re-insert the existing breakpoints into the global + * slots histogram and remove from per-CPU histogram. + * + * 3. Some other existing task breakpoints target specific CPUs. Only + * update the per-CPU slots histogram. + */ + + if (!enable) { + /* + * Remove before updating histograms so we can determine if this + * was the last task breakpoint for a specific CPU. + */ + int ret = rhltable_remove(&task_bps_ht, &bp->hw.bp_list, task_bps_ht_params); + + if (ret) + return ret; + } + /* + * Note: If !enable, next_tsk_pinned will not count the to-be-removed breakpoint. + */ + next_tsk_pinned = task_bp_pinned(-1, bp, type); + + if (next_tsk_pinned >= 0) { + if (bp->cpu < 0) { /* Case 1: fast path */ + if (!enable) + next_tsk_pinned += hw_breakpoint_weight(bp); + bp_slots_histogram_add(&tsk_pinned_all[type], next_tsk_pinned, weight); + } else if (enable) { /* Case 2.a: slow path */ + /* Add existing to per-CPU histograms. */ + for_each_possible_cpu(cpu) { + bp_slots_histogram_add(&get_bp_info(cpu, type)->tsk_pinned, + 0, next_tsk_pinned); + } + /* Add this first CPU-pinned task breakpoint. */ + bp_slots_histogram_add(&get_bp_info(bp->cpu, type)->tsk_pinned, + next_tsk_pinned, weight); + /* Rebalance global task pinned histogram. */ + bp_slots_histogram_add(&tsk_pinned_all[type], next_tsk_pinned, + -next_tsk_pinned); + } else { /* Case 2.b: slow path */ + /* Remove this last CPU-pinned task breakpoint. */ + bp_slots_histogram_add(&get_bp_info(bp->cpu, type)->tsk_pinned, + next_tsk_pinned + hw_breakpoint_weight(bp), weight); + /* Remove all from per-CPU histograms. */ + for_each_possible_cpu(cpu) { + bp_slots_histogram_add(&get_bp_info(cpu, type)->tsk_pinned, + next_tsk_pinned, -next_tsk_pinned); + } + /* Rebalance global task pinned histogram. 
*/ + bp_slots_histogram_add(&tsk_pinned_all[type], 0, next_tsk_pinned); + } + } else { /* Case 3: slow path */ + const struct cpumask *cpumask = cpumask_of_bp(bp); + + for_each_cpu(cpu, cpumask) { + next_tsk_pinned = task_bp_pinned(cpu, bp, type); + if (!enable) + next_tsk_pinned += hw_breakpoint_weight(bp); + bp_slots_histogram_add(&get_bp_info(cpu, type)->tsk_pinned, + next_tsk_pinned, weight); + } + } /* * Readers want a stable snapshot of the per-task breakpoint list. @@ -419,8 +509,8 @@ toggle_bp_slot(struct perf_event *bp, bool enable, enum bp_type_idx type, if (enable) return rhltable_insert(&task_bps_ht, &bp->hw.bp_list, task_bps_ht_params); - else - return rhltable_remove(&task_bps_ht, &bp->hw.bp_list, task_bps_ht_params); + + return 0; } __weak int arch_reserve_bp_slot(struct perf_event *bp) @@ -850,6 +940,9 @@ bool hw_breakpoint_is_used(void) */ if (WARN_ON(atomic_read(&cpu_pinned[type].count[slot]))) return true; + + if (atomic_read(&tsk_pinned_all[type].count[slot])) + return true; } }
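As a rough illustration of the bookkeeping described above, here is a minimal user-space sketch (an editor's sketch, not kernel code): struct hist, hist_add() and hist_max_merge() are simplified, non-atomic stand-ins for bp_slots_histogram, bp_slots_histogram_add() and bp_slots_histogram_max_merge(), and MAX_SLOTS is an arbitrary placeholder for hw_breakpoint_slots_cached(type).

#include <assert.h>
#include <stdio.h>

#define MAX_SLOTS 4

struct hist {
	int count[MAX_SLOTS];	/* count[i]: number of tasks pinning exactly i+1 slots */
};

/* Move one task from "old" pinned slots to "old + val" pinned slots. */
static void hist_add(struct hist *h, int old, int val)
{
	const int old_idx = old - 1;
	const int new_idx = old_idx + val;

	if (old_idx >= 0)
		h->count[old_idx]--;
	if (new_idx >= 0)
		h->count[new_idx]++;
}

/* Largest pinned-slot count with a non-zero entry in either histogram. */
static int hist_max_merge(const struct hist *a, const struct hist *b)
{
	for (int i = MAX_SLOTS - 1; i >= 0; i--) {
		if (a->count[i] + b->count[i] > 0)
			return i + 1;
	}
	return 0;
}

int main(void)
{
	struct hist tsk_pinned = { { 0 } };	/* per-CPU histogram, one CPU shown */
	struct hist tsk_pinned_all = { { 0 } };	/* CPU-independent histogram */

	/* Task A installs two CPU-independent breakpoints: two O(1) updates. */
	hist_add(&tsk_pinned_all, 0, 1);
	hist_add(&tsk_pinned_all, 1, 1);

	/* Task B installs one breakpoint pinned to this CPU only. */
	hist_add(&tsk_pinned, 0, 1);

	/* The worst case on this CPU is task A with 2 pinned slots. */
	assert(hist_max_merge(&tsk_pinned, &tsk_pinned_all) == 2);
	printf("max task-pinned slots on this CPU: %d\n",
	       hist_max_merge(&tsk_pinned, &tsk_pinned_all));
	return 0;
}

Because a CPU-independent task breakpoint pins the same number of slots on every CPU, recording it once in the global histogram and merging it with the per-CPU histogram at read time yields the same maximum as updating every per-CPU 'tsk_pinned', while keeping the common update path O(1).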