From patchwork Wed Jun 7 01:43:35 2023
X-Patchwork-Submitter: Ian Rogers
X-Patchwork-Id: 13269896
Date: Tue, 6 Jun 2023 18:43:35 -0700
In-Reply-To: <20230607014353.3172466-1-irogers@google.com>
Message-Id: <20230607014353.3172466-3-irogers@google.com>
References: <20230607014353.3172466-1-irogers@google.com>
X-Mailer: git-send-email 2.41.0.rc0.172.g3f132b7071-goog
Subject: [PATCH v1 02/20] perf thread: Make threads rbtree non-invasive
From: Ian Rogers <irogers@google.com>
To: John Garry, Will Deacon, James Clark, Mike Leach, Leo Yan,
    Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Mark Rutland,
    Alexander Shishkin, Jiri Olsa, Namhyung Kim, Ian Rogers, Adrian Hunter,
    Suzuki K Poulose, "Naveen N. Rao", Kan Liang, German Gomez, Ali Saidi,
    Jing Zhang, Martin Liška, Athira Rajeev, Miguel Ojeda, ye xingchen,
    Liam Howlett, Dmitrii Dolgov <9erthalion6@gmail.com>, "Shawn M. Chapla",
    Yang Jihong, K Prateek Nayak, Changbin Du, Ravi Bangoria,
    Sean Christopherson, Raul Silvera, Andi Kleen, "Steinar H. Gunderson",
    Yuan Can, Brian Robbins, liuwenyu, Ivan Babrou, Fangrui Song,
    linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-perf-users@vger.kernel.org, coresight@lists.linaro.org

Separate the rbtree out of thread and into a new struct thread_rb_node.
The refcnt is in thread and the rbtree is responsible for a single count.
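For readers less familiar with the pattern, the ownership rule described above can be sketched outside of perf: the container node is a separate allocation that pins the refcounted object with exactly one reference, taken at insertion and dropped at removal, while the object itself no longer carries any tree linkage. The sketch below is a minimal, self-contained illustration with hypothetical names (struct obj, obj__get, obj__put, struct obj_node); it is not perf code and uses a bare wrapper instead of a real rbtree.

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical refcounted object standing in for struct thread. */
struct obj {
	int refcnt;
	int tid;
};

static struct obj *obj__get(struct obj *o)
{
	o->refcnt++;
	return o;
}

static void obj__put(struct obj *o)
{
	if (--o->refcnt == 0)
		free(o);
}

/*
 * Non-invasive container node, analogous to struct thread_rb_node:
 * the linkage/bookkeeping lives here, not inside the object itself.
 */
struct obj_node {
	struct obj *obj;	/* the node owns exactly one reference */
};

static struct obj_node *container_insert(struct obj *o)
{
	struct obj_node *nd = malloc(sizeof(*nd));

	if (!nd)
		return NULL;
	nd->obj = obj__get(o);	/* insertion takes the container's reference */
	return nd;
}

static void container_remove(struct obj_node *nd)
{
	obj__put(nd->obj);	/* removal drops the container's reference... */
	free(nd);		/* ...and frees only the node, not the object */
}

int main(void)
{
	struct obj *o = calloc(1, sizeof(*o));
	struct obj_node *nd;

	o->refcnt = 1;		/* caller's own reference */
	o->tid = 42;

	nd = container_insert(o);
	printf("refcnt after insert: %d\n", o->refcnt);	/* 2 */
	container_remove(nd);
	printf("refcnt after remove: %d\n", o->refcnt);	/* 1 */
	obj__put(o);		/* caller drops its reference; obj is freed */
	return 0;
}
```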
Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/perf/builtin-report.c |   2 +-
 tools/perf/builtin-trace.c  |   2 +-
 tools/perf/util/machine.c   | 101 +++++++++++++++++++++++-------------
 tools/perf/util/thread.c    |   3 --
 tools/perf/util/thread.h    |   6 ++-
 5 files changed, 73 insertions(+), 41 deletions(-)

diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
index 92c6797e7cba..c7d526283baf 100644
--- a/tools/perf/builtin-report.c
+++ b/tools/perf/builtin-report.c
@@ -911,7 +911,7 @@ static int tasks_print(struct report *rep, FILE *fp)
 		     nd = rb_next(nd)) {
 			task = tasks + itask++;
 
-			task->thread = rb_entry(nd, struct thread, rb_node);
+			task->thread = rb_entry(nd, struct thread_rb_node, rb_node)->thread;
 			INIT_LIST_HEAD(&task->children);
 			INIT_LIST_HEAD(&task->list);
 			thread__set_priv(task->thread, task);
diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c
index 62c7c99a0fe4..b0dd202d14eb 100644
--- a/tools/perf/builtin-trace.c
+++ b/tools/perf/builtin-trace.c
@@ -4348,7 +4348,7 @@ DEFINE_RESORT_RB(threads, (thread__nr_events(a->thread->priv) < thread__nr_event
 	struct thread *thread;
 )
 {
-	entry->thread = rb_entry(nd, struct thread, rb_node);
+	entry->thread = rb_entry(nd, struct thread_rb_node, rb_node)->thread;
 }
 
 static size_t trace__fprintf_thread_summary(struct trace *trace, FILE *fp)
diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
index a1954ac85f59..cbf092e32ee9 100644
--- a/tools/perf/util/machine.c
+++ b/tools/perf/util/machine.c
@@ -43,7 +43,8 @@
 #include
 #include
 
-static void __machine__remove_thread(struct machine *machine, struct thread *th, bool lock);
+static void __machine__remove_thread(struct machine *machine, struct thread_rb_node *nd,
+				     struct thread *th, bool lock);
 static int append_inlines(struct callchain_cursor *cursor, struct map_symbol *ms, u64 ip);
 
 static struct dso *machine__kernel_dso(struct machine *machine)
@@ -72,6 +73,21 @@ static void machine__threads_init(struct machine *machine)
 	}
 }
 
+static int thread_rb_node__cmp_tid(const void *key, const struct rb_node *nd)
+{
+	int to_find = (int) *((pid_t *)key);
+
+	return to_find - (int)rb_entry(nd, struct thread_rb_node, rb_node)->thread->tid;
+}
+
+static struct thread_rb_node *thread_rb_node__find(const struct thread *th,
+						   struct rb_root *tree)
+{
+	struct rb_node *nd = rb_find(&th->tid, tree, thread_rb_node__cmp_tid);
+
+	return rb_entry(nd, struct thread_rb_node, rb_node);
+}
+
 static int machine__set_mmap_name(struct machine *machine)
 {
 	if (machine__is_host(machine))
@@ -214,10 +230,10 @@ void machine__delete_threads(struct machine *machine)
 		down_write(&threads->lock);
 		nd = rb_first_cached(&threads->entries);
 		while (nd) {
-			struct thread *t = rb_entry(nd, struct thread, rb_node);
+			struct thread_rb_node *trb = rb_entry(nd, struct thread_rb_node, rb_node);
 
 			nd = rb_next(nd);
-			__machine__remove_thread(machine, t, false);
+			__machine__remove_thread(machine, trb, trb->thread, false);
 		}
 		up_write(&threads->lock);
 	}
@@ -605,6 +621,7 @@ static struct thread *____machine__findnew_thread(struct machine *machine,
 	struct rb_node **p = &threads->entries.rb_root.rb_node;
 	struct rb_node *parent = NULL;
 	struct thread *th;
+	struct thread_rb_node *nd;
 	bool leftmost = true;
 
 	th = threads__get_last_match(threads, machine, pid, tid);
@@ -613,7 +630,7 @@ static struct thread *____machine__findnew_thread(struct machine *machine,
 
 	while (*p != NULL) {
 		parent = *p;
-		th = rb_entry(parent, struct thread, rb_node);
+		th = rb_entry(parent, struct thread_rb_node, rb_node)->thread;
 
 		if (th->tid == tid) {
 			threads__set_last_match(threads, th);
@@ -633,30 +650,39 @@ static struct thread *____machine__findnew_thread(struct machine *machine,
 		return NULL;
 
 	th = thread__new(pid, tid);
-	if (th != NULL) {
-		rb_link_node(&th->rb_node, parent, p);
-		rb_insert_color_cached(&th->rb_node, &threads->entries, leftmost);
+	if (th == NULL)
+		return NULL;
 
-		/*
-		 * We have to initialize maps separately after rb tree is updated.
-		 *
-		 * The reason is that we call machine__findnew_thread
-		 * within thread__init_maps to find the thread
-		 * leader and that would screwed the rb tree.
-		 */
-		if (thread__init_maps(th, machine)) {
-			rb_erase_cached(&th->rb_node, &threads->entries);
-			RB_CLEAR_NODE(&th->rb_node);
-			thread__put(th);
-			return NULL;
-		}
-		/*
-		 * It is now in the rbtree, get a ref
-		 */
-		thread__get(th);
-		threads__set_last_match(threads, th);
-		++threads->nr;
+	nd = malloc(sizeof(*nd));
+	if (nd == NULL) {
+		thread__put(th);
+		return NULL;
+	}
+	nd->thread = th;
+
+	rb_link_node(&nd->rb_node, parent, p);
+	rb_insert_color_cached(&nd->rb_node, &threads->entries, leftmost);
+
+	/*
+	 * We have to initialize maps separately after rb tree is updated.
+	 *
+	 * The reason is that we call machine__findnew_thread within
+	 * thread__init_maps to find the thread leader and that would screwed
+	 * the rb tree.
+	 */
+	if (thread__init_maps(th, machine)) {
+		rb_erase_cached(&nd->rb_node, &threads->entries);
+		RB_CLEAR_NODE(&nd->rb_node);
+		free(nd);
+		thread__put(th);
+		return NULL;
 	}
+	/*
+	 * It is now in the rbtree, get a ref
+	 */
+	thread__get(th);
+	threads__set_last_match(threads, th);
+	++threads->nr;
 
 	return th;
 }
@@ -1109,7 +1135,7 @@ size_t machine__fprintf(struct machine *machine, FILE *fp)
 
 		for (nd = rb_first_cached(&threads->entries); nd;
 		     nd = rb_next(nd)) {
-			struct thread *pos = rb_entry(nd, struct thread, rb_node);
+			struct thread *pos = rb_entry(nd, struct thread_rb_node, rb_node)->thread;
 
 			ret += thread__fprintf(pos, fp);
 		}
@@ -2020,10 +2046,14 @@ int machine__process_mmap_event(struct machine *machine, union perf_event *event
 	return 0;
 }
 
-static void __machine__remove_thread(struct machine *machine, struct thread *th, bool lock)
+static void __machine__remove_thread(struct machine *machine, struct thread_rb_node *nd,
+				     struct thread *th, bool lock)
 {
 	struct threads *threads = machine__threads(machine, th->tid);
 
+	if (!nd)
+		nd = thread_rb_node__find(th, &threads->entries.rb_root);
+
 	if (threads->last_match == th)
 		threads__set_last_match(threads, NULL);
 
@@ -2032,11 +2062,12 @@ static void __machine__remove_thread(struct machine *machine, struct thread *th,
 
 	BUG_ON(refcount_read(&th->refcnt) == 0);
 
-	rb_erase_cached(&th->rb_node, &threads->entries);
-	RB_CLEAR_NODE(&th->rb_node);
+	thread__put(nd->thread);
+	rb_erase_cached(&nd->rb_node, &threads->entries);
+	RB_CLEAR_NODE(&nd->rb_node);
 	--threads->nr;
-	thread__put(th);
+	free(nd);
 
 	if (lock)
 		up_write(&threads->lock);
@@ -2044,7 +2075,7 @@ static void __machine__remove_thread(struct machine *machine, struct thread *th,
 
 void machine__remove_thread(struct machine *machine, struct thread *th)
 {
-	return __machine__remove_thread(machine, th, true);
+	return __machine__remove_thread(machine, NULL, th, true);
 }
 
 int machine__process_fork_event(struct machine *machine, union perf_event *event,
@@ -3167,7 +3198,6 @@ int machine__for_each_thread(struct machine *machine,
 {
 	struct threads *threads;
 	struct rb_node *nd;
-	struct thread *thread;
 	int rc = 0;
 	int i;
 
@@ -3175,8 +3205,9 @@ int machine__for_each_thread(struct machine *machine,
 		threads = &machine->threads[i];
 		for (nd = rb_first_cached(&threads->entries); nd;
 		     nd = rb_next(nd)) {
-			thread = rb_entry(nd, struct thread, rb_node);
-			rc = fn(thread, priv);
+			struct thread_rb_node *trb = rb_entry(nd, struct thread_rb_node, rb_node);
+
+			rc = fn(trb->thread, priv);
 			if (rc != 0)
 				return rc;
 		}
diff --git a/tools/perf/util/thread.c b/tools/perf/util/thread.c
index d949bffc0ed6..38d300e3e4d3 100644
--- a/tools/perf/util/thread.c
+++ b/tools/perf/util/thread.c
@@ -66,7 +66,6 @@ struct thread *thread__new(pid_t pid, pid_t tid)
 		list_add(&comm->list, &thread->comm_list);
 		refcount_set(&thread->refcnt, 1);
-		RB_CLEAR_NODE(&thread->rb_node);
 		/* Thread holds first ref to nsdata. */
 		thread->nsinfo = nsinfo__new(pid);
 		srccode_state_init(&thread->srccode_state);
@@ -84,8 +83,6 @@ void thread__delete(struct thread *thread)
 	struct namespaces *namespaces, *tmp_namespaces;
 	struct comm *comm, *tmp_comm;
 
-	BUG_ON(!RB_EMPTY_NODE(&thread->rb_node));
-
 	thread_stack__free(thread);
 
 	if (thread->maps) {
diff --git a/tools/perf/util/thread.h b/tools/perf/util/thread.h
index 86737812e06b..3b3f9fb5a916 100644
--- a/tools/perf/util/thread.h
+++ b/tools/perf/util/thread.h
@@ -29,8 +29,12 @@ struct lbr_stitch {
 	struct callchain_cursor_node	*prev_lbr_cursor;
};
 
+struct thread_rb_node {
+	struct rb_node rb_node;
+	struct thread *thread;
+};
+
 struct thread {
-	struct rb_node		rb_node;
 	struct maps		*maps;
 	pid_t			pid_; /* Not all tools update this */
 	pid_t			tid;
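As a summary of the mechanical part of the change, every existing walker of the threads rbtree is converted in the same way: the rb_node is no longer embedded in struct thread, so callers go through the new wrapper instead. The two forms below are taken directly from the hunks above (builtin-report.c, builtin-trace.c, machine.c):

```c
/* before: rb_node embedded in struct thread */
struct thread *pos = rb_entry(nd, struct thread, rb_node);

/* after: rb_node lives in struct thread_rb_node, which points at the thread */
struct thread *pos = rb_entry(nd, struct thread_rb_node, rb_node)->thread;
```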