From patchwork Fri Jun 2 10:21:33 2023
X-Patchwork-Submitter: Andrzej Hajda
X-Patchwork-Id: 13265029
X-Patchwork-Delegate: kuba@kernel.org
From: Andrzej Hajda
Date: Fri, 02 Jun 2023 12:21:33 +0200
Subject: [PATCH v9 1/4] lib/ref_tracker: add unlocked leak print helper
X-Mailing-List: netdev@vger.kernel.org
Message-Id: <20230224-track_gt-v9-1-5b47a33f55d1@intel.com>
References: <20230224-track_gt-v9-0-5b47a33f55d1@intel.com>
In-Reply-To: <20230224-track_gt-v9-0-5b47a33f55d1@intel.com>
To: Eric Dumazet, Jakub Kicinski, "David S. Miller"
Cc: Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin,
    linux-kernel@vger.kernel.org, intel-gfx@lists.freedesktop.org,
    dri-devel@lists.freedesktop.org, Chris Wilson, netdev@vger.kernel.org,
    Dmitry Vyukov, Andi Shyti, Andrzej Hajda
X-Mailer: b4 0.12.2

To have reliable detection of leaks, the caller must be able to check both the
tracked counter and the leaks under the same lock. dir.lock is a natural
candidate for that lock, and the unlocked print helper can be called with it
already taken. As a bonus, the helper can be reused in ref_tracker_dir_exit().
Signed-off-by: Andrzej Hajda
Reviewed-by: Andi Shyti
Reviewed-by: Eric Dumazet
---
 include/linux/ref_tracker.h |  8 ++++++
 lib/ref_tracker.c           | 66 ++++++++++++++++++++++++++-------------------
 2 files changed, 46 insertions(+), 28 deletions(-)

diff --git a/include/linux/ref_tracker.h b/include/linux/ref_tracker.h
index 9ca353ab712b5e..87a92f2bec1b88 100644
--- a/include/linux/ref_tracker.h
+++ b/include/linux/ref_tracker.h
@@ -36,6 +36,9 @@ static inline void ref_tracker_dir_init(struct ref_tracker_dir *dir,
 
 void ref_tracker_dir_exit(struct ref_tracker_dir *dir);
 
+void ref_tracker_dir_print_locked(struct ref_tracker_dir *dir,
+				  unsigned int display_limit);
+
 void ref_tracker_dir_print(struct ref_tracker_dir *dir,
 			   unsigned int display_limit);
 
@@ -56,6 +59,11 @@ static inline void ref_tracker_dir_exit(struct ref_tracker_dir *dir)
 {
 }
 
+static inline void ref_tracker_dir_print_locked(struct ref_tracker_dir *dir,
+						 unsigned int display_limit)
+{
+}
+
 static inline void ref_tracker_dir_print(struct ref_tracker_dir *dir,
 					 unsigned int display_limit)
 {
diff --git a/lib/ref_tracker.c b/lib/ref_tracker.c
index dc7b14aa3431e2..d4eb0929af8f96 100644
--- a/lib/ref_tracker.c
+++ b/lib/ref_tracker.c
@@ -14,6 +14,38 @@ struct ref_tracker {
 	depot_stack_handle_t	free_stack_handle;
 };
 
+void ref_tracker_dir_print_locked(struct ref_tracker_dir *dir,
+				  unsigned int display_limit)
+{
+	struct ref_tracker *tracker;
+	unsigned int i = 0;
+
+	lockdep_assert_held(&dir->lock);
+
+	list_for_each_entry(tracker, &dir->list, head) {
+		if (i < display_limit) {
+			pr_err("leaked reference.\n");
+			if (tracker->alloc_stack_handle)
+				stack_depot_print(tracker->alloc_stack_handle);
+			i++;
+		} else {
+			break;
+		}
+	}
+}
+EXPORT_SYMBOL(ref_tracker_dir_print_locked);
+
+void ref_tracker_dir_print(struct ref_tracker_dir *dir,
+			   unsigned int display_limit)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&dir->lock, flags);
+	ref_tracker_dir_print_locked(dir, display_limit);
+	spin_unlock_irqrestore(&dir->lock, flags);
+}
+EXPORT_SYMBOL(ref_tracker_dir_print);
+
 void ref_tracker_dir_exit(struct ref_tracker_dir *dir)
 {
 	struct ref_tracker *tracker, *n;
@@ -27,13 +59,13 @@ void ref_tracker_dir_exit(struct ref_tracker_dir *dir)
 		kfree(tracker);
 		dir->quarantine_avail++;
 	}
-	list_for_each_entry_safe(tracker, n, &dir->list, head) {
-		pr_err("leaked reference.\n");
-		if (tracker->alloc_stack_handle)
-			stack_depot_print(tracker->alloc_stack_handle);
+	if (!list_empty(&dir->list)) {
+		ref_tracker_dir_print_locked(dir, 16);
 		leak = true;
-		list_del(&tracker->head);
-		kfree(tracker);
+		list_for_each_entry_safe(tracker, n, &dir->list, head) {
+			list_del(&tracker->head);
+			kfree(tracker);
+		}
 	}
 	spin_unlock_irqrestore(&dir->lock, flags);
 	WARN_ON_ONCE(leak);
@@ -42,28 +74,6 @@ void ref_tracker_dir_exit(struct ref_tracker_dir *dir)
 }
 EXPORT_SYMBOL(ref_tracker_dir_exit);
 
-void ref_tracker_dir_print(struct ref_tracker_dir *dir,
-			   unsigned int display_limit)
-{
-	struct ref_tracker *tracker;
-	unsigned long flags;
-	unsigned int i = 0;
-
-	spin_lock_irqsave(&dir->lock, flags);
-	list_for_each_entry(tracker, &dir->list, head) {
-		if (i < display_limit) {
-			pr_err("leaked reference.\n");
-			if (tracker->alloc_stack_handle)
-				stack_depot_print(tracker->alloc_stack_handle);
-			i++;
-		} else {
-			break;
-		}
-	}
-	spin_unlock_irqrestore(&dir->lock, flags);
-}
-EXPORT_SYMBOL(ref_tracker_dir_print);
-
 int ref_tracker_alloc(struct ref_tracker_dir *dir,
 		      struct ref_tracker **trackerp,
 		      gfp_t gfp)
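A minimal usage sketch (not part of the series), assuming CONFIG_REF_TRACKER=y;
my_dev_check_leaks() and the expected-count check are hypothetical. It only
illustrates the point of the commit message above: the reference counter and the
leak list are inspected under the same dir.lock.

#include <linux/ref_tracker.h>
#include <linux/refcount.h>
#include <linux/spinlock.h>

/* Hypothetical helper: report leaks only if the counter says there are any. */
static void my_dev_check_leaks(struct ref_tracker_dir *dir, refcount_t *refs)
{
	unsigned long flags;

	spin_lock_irqsave(&dir->lock, flags);
	/* Counter and tracker list are now observed consistently. */
	if (refcount_read(refs) > 1)
		ref_tracker_dir_print_locked(dir, 16);
	spin_unlock_irqrestore(&dir->lock, flags);
}
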
charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrzej Hajda X-Patchwork-Id: 13265030 X-Patchwork-Delegate: kuba@kernel.org Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B5B5E182B9 for ; Fri, 2 Jun 2023 10:22:04 +0000 (UTC) Received: from mga09.intel.com (mga09.intel.com [134.134.136.24]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9B68FE50; Fri, 2 Jun 2023 03:22:02 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685701322; x=1717237322; h=from:date:subject:mime-version:content-transfer-encoding: message-id:references:in-reply-to:to:cc; bh=2j0W7UL/pqSZ0Qe9hVrIOqCXRmXLMvc0HdEs6KkKVwQ=; b=SpvEq4eTNsilpqEJx9mPgtgQSmf/Pa9yVSn/m3rF0PMPHybNgxHftv7Z nvMTqlx03aiPwtHR1MWTURDAtCfTNU9IMekdHq8kukWYZ8GAlUcNNX5C7 pAnsjS7zoXguIxucMBa8Fca3X0APFiyPJ2r64bihNPjwnK7PqZ/Di5Mk+ 53UbD2+hiW+uLgd+hytMHc8V9RWsm6UG7yojF6X6mrmUL+5cz8ccjXVeo nTvi83LeNWDYZqPqUpWiElajqESHYVIaqKSnfjqiR7kNawguNfKXeIZkF Rdl92rvhUVTUfvwfMmA9Erv87/SALLoJvgExIZWunzMjEqfHP0AhhtXyw w==; X-IronPort-AV: E=McAfee;i="6600,9927,10728"; a="358267622" X-IronPort-AV: E=Sophos;i="6.00,212,1681196400"; d="scan'208";a="358267622" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 02 Jun 2023 03:22:02 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10728"; a="707804972" X-IronPort-AV: E=Sophos;i="6.00,212,1681196400"; d="scan'208";a="707804972" Received: from lab-ah.igk.intel.com ([10.102.138.202]) by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 02 Jun 2023 03:21:59 -0700 From: Andrzej Hajda Date: Fri, 02 Jun 2023 12:21:34 +0200 Subject: [PATCH v9 2/4] lib/ref_tracker: improve printing stats Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-Id: <20230224-track_gt-v9-2-5b47a33f55d1@intel.com> References: <20230224-track_gt-v9-0-5b47a33f55d1@intel.com> In-Reply-To: <20230224-track_gt-v9-0-5b47a33f55d1@intel.com> To: Eric Dumazet , Jakub Kicinski , "David S. Miller" Cc: Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , Tvrtko Ursulin , linux-kernel@vger.kernel.org, intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, Chris Wilson , netdev@vger.kernel.org, Dmitry Vyukov , Andi Shyti , Andrzej Hajda X-Mailer: b4 0.12.2 X-Spam-Status: No, score=-4.6 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, RCVD_IN_MSPIKE_H3,RCVD_IN_MSPIKE_WL,SPF_HELO_NONE,SPF_NONE, T_SCC_BODY_TEXT_LINE,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net X-Patchwork-Delegate: kuba@kernel.org In case the library is tracking busy subsystem, simply printing stack for every active reference will spam log with long, hard to read, redundant stack traces. To improve readabilty following changes have been made: - reports are printed per stack_handle - log is more compact, - added display name for ref_tracker_dir - it will differentiate multiple subsystems, - stack trace is printed indented, in the same printk call, - info about dropped references is printed as well. 
Signed-off-by: Andrzej Hajda
Reviewed-by: Andi Shyti
Reviewed-by: Eric Dumazet
---
 include/linux/ref_tracker.h |  9 ++++-
 lib/ref_tracker.c           | 90 +++++++++++++++++++++++++++++++++++++++------
 lib/test_ref_tracker.c      |  2 +-
 net/core/dev.c              |  2 +-
 net/core/net_namespace.c    |  4 +-
 5 files changed, 90 insertions(+), 17 deletions(-)

diff --git a/include/linux/ref_tracker.h b/include/linux/ref_tracker.h
index 87a92f2bec1b88..19a69e7809d6c1 100644
--- a/include/linux/ref_tracker.h
+++ b/include/linux/ref_tracker.h
@@ -17,12 +17,15 @@ struct ref_tracker_dir {
 	bool			dead;
 	struct list_head	list; /* List of active trackers */
 	struct list_head	quarantine; /* List of dead trackers */
+	char			name[32];
 #endif
 };
 
 #ifdef CONFIG_REF_TRACKER
+
 static inline void ref_tracker_dir_init(struct ref_tracker_dir *dir,
-					unsigned int quarantine_count)
+					unsigned int quarantine_count,
+					const char *name)
 {
 	INIT_LIST_HEAD(&dir->list);
 	INIT_LIST_HEAD(&dir->quarantine);
@@ -31,6 +34,7 @@ static inline void ref_tracker_dir_init(struct ref_tracker_dir *dir,
 	dir->dead = false;
 	refcount_set(&dir->untracked, 1);
 	refcount_set(&dir->no_tracker, 1);
+	strscpy(dir->name, name, sizeof(dir->name));
 	stack_depot_init();
 }
 
@@ -51,7 +55,8 @@ int ref_tracker_free(struct ref_tracker_dir *dir,
 #else /* CONFIG_REF_TRACKER */
 
 static inline void ref_tracker_dir_init(struct ref_tracker_dir *dir,
-					unsigned int quarantine_count)
+					unsigned int quarantine_count,
+					const char *name)
 {
 }
 
diff --git a/lib/ref_tracker.c b/lib/ref_tracker.c
index d4eb0929af8f96..2ffe79c90c1771 100644
--- a/lib/ref_tracker.c
+++ b/lib/ref_tracker.c
@@ -1,11 +1,16 @@
 // SPDX-License-Identifier: GPL-2.0-or-later
+
+#define pr_fmt(fmt) "ref_tracker: " fmt
+
 #include
+#include
 #include
 #include
 #include
 #include
 
 #define REF_TRACKER_STACK_ENTRIES 16
+#define STACK_BUF_SIZE 1024
 
 struct ref_tracker {
 	struct list_head	head;   /* anchor into dir->list or dir->quarantine */
@@ -14,24 +19,87 @@ struct ref_tracker {
 	depot_stack_handle_t	free_stack_handle;
 };
 
-void ref_tracker_dir_print_locked(struct ref_tracker_dir *dir,
-				  unsigned int display_limit)
+struct ref_tracker_dir_stats {
+	int total;
+	int count;
+	struct {
+		depot_stack_handle_t stack_handle;
+		unsigned int count;
+	} stacks[];
+};
+
+static struct ref_tracker_dir_stats *
+ref_tracker_get_stats(struct ref_tracker_dir *dir, unsigned int limit)
 {
+	struct ref_tracker_dir_stats *stats;
 	struct ref_tracker *tracker;
-	unsigned int i = 0;
 
-	lockdep_assert_held(&dir->lock);
-
+	stats = kmalloc(struct_size(stats, stacks, limit),
+			GFP_NOWAIT | __GFP_NOWARN);
+	if (!stats)
+		return ERR_PTR(-ENOMEM);
+	stats->total = 0;
+	stats->count = 0;
 	list_for_each_entry(tracker, &dir->list, head) {
-		if (i < display_limit) {
-			pr_err("leaked reference.\n");
-			if (tracker->alloc_stack_handle)
-				stack_depot_print(tracker->alloc_stack_handle);
-			i++;
-		} else {
-			break;
+		depot_stack_handle_t stack = tracker->alloc_stack_handle;
+		int i;
+
+		++stats->total;
+		for (i = 0; i < stats->count; ++i)
+			if (stats->stacks[i].stack_handle == stack)
+				break;
+		if (i >= limit)
+			continue;
+		if (i >= stats->count) {
+			stats->stacks[i].stack_handle = stack;
+			stats->stacks[i].count = 0;
+			++stats->count;
 		}
+		++stats->stacks[i].count;
+	}
+
+	return stats;
+}
+
+void ref_tracker_dir_print_locked(struct ref_tracker_dir *dir,
+				  unsigned int display_limit)
+{
+	struct ref_tracker_dir_stats *stats;
+	unsigned int i = 0, skipped;
+	depot_stack_handle_t stack;
+	char *sbuf;
+
+	lockdep_assert_held(&dir->lock);
+
+	if (list_empty(&dir->list))
+		return;
+
+	stats = ref_tracker_get_stats(dir, display_limit);
+	if (IS_ERR(stats)) {
+		pr_err("%s@%pK: couldn't get stats, error %pe\n",
+		       dir->name, dir, stats);
+		return;
 	}
+
+	sbuf = kmalloc(STACK_BUF_SIZE, GFP_NOWAIT | __GFP_NOWARN);
+
+	for (i = 0, skipped = stats->total; i < stats->count; ++i) {
+		stack = stats->stacks[i].stack_handle;
+		if (sbuf && !stack_depot_snprint(stack, sbuf, STACK_BUF_SIZE, 4))
+			sbuf[0] = 0;
+		pr_err("%s@%pK has %d/%d users at\n%s\n", dir->name, dir,
+		       stats->stacks[i].count, stats->total, sbuf);
+		skipped -= stats->stacks[i].count;
+	}
+
+	if (skipped)
+		pr_err("%s@%pK skipped reports about %d/%d users.\n",
+		       dir->name, dir, skipped, stats->total);
+
+	kfree(sbuf);
+
+	kfree(stats);
 }
 EXPORT_SYMBOL(ref_tracker_dir_print_locked);
 
diff --git a/lib/test_ref_tracker.c b/lib/test_ref_tracker.c
index 19d7dec70cc62f..49970a7c96f3f4 100644
--- a/lib/test_ref_tracker.c
+++ b/lib/test_ref_tracker.c
@@ -64,7 +64,7 @@ static int __init test_ref_tracker_init(void)
 {
 	int i;
 
-	ref_tracker_dir_init(&ref_dir, 100);
+	ref_tracker_dir_init(&ref_dir, 100, "selftest");
 
 	timer_setup(&test_ref_tracker_timer, test_ref_tracker_timer_func, 0);
 	mod_timer(&test_ref_tracker_timer, jiffies + 1);
diff --git a/net/core/dev.c b/net/core/dev.c
index 99d99b247bc976..8870eeb5a2ae1f 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -10635,7 +10635,7 @@ struct net_device *alloc_netdev_mqs(int sizeof_priv, const char *name,
 	dev = PTR_ALIGN(p, NETDEV_ALIGN);
 	dev->padded = (char *)dev - (char *)p;
 
-	ref_tracker_dir_init(&dev->refcnt_tracker, 128);
+	ref_tracker_dir_init(&dev->refcnt_tracker, 128, name);
 #ifdef CONFIG_PCPU_DEV_REFCNT
 	dev->pcpu_refcnt = alloc_percpu(int);
 	if (!dev->pcpu_refcnt)
diff --git a/net/core/net_namespace.c b/net/core/net_namespace.c
index 3e3598cd49f23e..f4183c4c1ec82f 100644
--- a/net/core/net_namespace.c
+++ b/net/core/net_namespace.c
@@ -308,7 +308,7 @@ EXPORT_SYMBOL_GPL(get_net_ns_by_id);
 /* init code that must occur even if setup_net() is not called. */
 static __net_init void preinit_net(struct net *net)
 {
-	ref_tracker_dir_init(&net->notrefcnt_tracker, 128);
+	ref_tracker_dir_init(&net->notrefcnt_tracker, 128, "net notrefcnt");
 }
 
 /*
@@ -322,7 +322,7 @@ static __net_init int setup_net(struct net *net, struct user_namespace *user_ns)
 	LIST_HEAD(net_exit_list);
 
 	refcount_set(&net->ns.count, 1);
-	ref_tracker_dir_init(&net->refcnt_tracker, 128);
+	ref_tracker_dir_init(&net->refcnt_tracker, 128, "net refcnt");
 	refcount_set(&net->passive, 1);
 	get_random_bytes(&net->hash_mix, sizeof(u32));
 
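A sketch of how a subsystem might adopt the extended API (not part of the
series; struct foo_device and its helpers are hypothetical). The name passed to
ref_tracker_dir_init() is what now prefixes every report, so concurrent users of
the library can be told apart in the log.

#include <linux/ref_tracker.h>
#include <linux/slab.h>

struct foo_device {
	struct ref_tracker_dir refs;
};

static void foo_device_init(struct foo_device *foo)
{
	/* 16 quarantined entries; "foo" becomes the display name in reports */
	ref_tracker_dir_init(&foo->refs, 16, "foo");
}

static int foo_get(struct foo_device *foo, struct ref_tracker **trackerp)
{
	/* one tracker per taken reference, allocated with the caller's gfp */
	return ref_tracker_alloc(&foo->refs, trackerp, GFP_KERNEL);
}

static void foo_put(struct foo_device *foo, struct ref_tracker **trackerp)
{
	ref_tracker_free(&foo->refs, trackerp);
}

static void foo_device_fini(struct foo_device *foo)
{
	/* prints any remaining (leaked) references, grouped per stack */
	ref_tracker_dir_exit(&foo->refs);
}
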
Miller" Cc: Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , Tvrtko Ursulin , linux-kernel@vger.kernel.org, intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, Chris Wilson , netdev@vger.kernel.org, Dmitry Vyukov , Andi Shyti , Andrzej Hajda X-Mailer: b4 0.12.2 X-Spam-Status: No, score=-4.6 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, RCVD_IN_MSPIKE_H3,RCVD_IN_MSPIKE_WL,SPF_HELO_NONE,SPF_NONE, T_SCC_BODY_TEXT_LINE,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Similar to stack_(depot|trace)_snprint the patch adds helper to printing stats to memory buffer. It will be helpful in case of debugfs. Signed-off-by: Andrzej Hajda Reviewed-by: Andi Shyti Reviewed-by: Eric Dumazet --- include/linux/ref_tracker.h | 8 +++++++ lib/ref_tracker.c | 56 ++++++++++++++++++++++++++++++++++++++------- 2 files changed, 56 insertions(+), 8 deletions(-) diff --git a/include/linux/ref_tracker.h b/include/linux/ref_tracker.h index 19a69e7809d6c1..8eac4f3d52547c 100644 --- a/include/linux/ref_tracker.h +++ b/include/linux/ref_tracker.h @@ -46,6 +46,8 @@ void ref_tracker_dir_print_locked(struct ref_tracker_dir *dir, void ref_tracker_dir_print(struct ref_tracker_dir *dir, unsigned int display_limit); +int ref_tracker_dir_snprint(struct ref_tracker_dir *dir, char *buf, size_t size); + int ref_tracker_alloc(struct ref_tracker_dir *dir, struct ref_tracker **trackerp, gfp_t gfp); @@ -74,6 +76,12 @@ static inline void ref_tracker_dir_print(struct ref_tracker_dir *dir, { } +static inline int ref_tracker_dir_snprint(struct ref_tracker_dir *dir, + char *buf, size_t size) +{ + return 0; +} + static inline int ref_tracker_alloc(struct ref_tracker_dir *dir, struct ref_tracker **trackerp, gfp_t gfp) diff --git a/lib/ref_tracker.c b/lib/ref_tracker.c index 2ffe79c90c1771..cce4614b07940f 100644 --- a/lib/ref_tracker.c +++ b/lib/ref_tracker.c @@ -62,8 +62,27 @@ ref_tracker_get_stats(struct ref_tracker_dir *dir, unsigned int limit) return stats; } -void ref_tracker_dir_print_locked(struct ref_tracker_dir *dir, - unsigned int display_limit) +struct ostream { + char *buf; + int size, used; +}; + +#define pr_ostream(stream, fmt, args...) 
\ +({ \ + struct ostream *_s = (stream); \ +\ + if (!_s->buf) { \ + pr_err(fmt, ##args); \ + } else { \ + int ret, len = _s->size - _s->used; \ + ret = snprintf(_s->buf + _s->used, len, pr_fmt(fmt), ##args); \ + _s->used += min(ret, len); \ + } \ +}) + +static void +__ref_tracker_dir_pr_ostream(struct ref_tracker_dir *dir, + unsigned int display_limit, struct ostream *s) { struct ref_tracker_dir_stats *stats; unsigned int i = 0, skipped; @@ -77,8 +96,8 @@ void ref_tracker_dir_print_locked(struct ref_tracker_dir *dir, stats = ref_tracker_get_stats(dir, display_limit); if (IS_ERR(stats)) { - pr_err("%s@%pK: couldn't get stats, error %pe\n", - dir->name, dir, stats); + pr_ostream(s, "%s@%pK: couldn't get stats, error %pe\n", + dir->name, dir, stats); return; } @@ -88,19 +107,27 @@ void ref_tracker_dir_print_locked(struct ref_tracker_dir *dir, stack = stats->stacks[i].stack_handle; if (sbuf && !stack_depot_snprint(stack, sbuf, STACK_BUF_SIZE, 4)) sbuf[0] = 0; - pr_err("%s@%pK has %d/%d users at\n%s\n", dir->name, dir, - stats->stacks[i].count, stats->total, sbuf); + pr_ostream(s, "%s@%pK has %d/%d users at\n%s\n", dir->name, dir, + stats->stacks[i].count, stats->total, sbuf); skipped -= stats->stacks[i].count; } if (skipped) - pr_err("%s@%pK skipped reports about %d/%d users.\n", - dir->name, dir, skipped, stats->total); + pr_ostream(s, "%s@%pK skipped reports about %d/%d users.\n", + dir->name, dir, skipped, stats->total); kfree(sbuf); kfree(stats); } + +void ref_tracker_dir_print_locked(struct ref_tracker_dir *dir, + unsigned int display_limit) +{ + struct ostream os = {}; + + __ref_tracker_dir_pr_ostream(dir, display_limit, &os); +} EXPORT_SYMBOL(ref_tracker_dir_print_locked); void ref_tracker_dir_print(struct ref_tracker_dir *dir, @@ -114,6 +141,19 @@ void ref_tracker_dir_print(struct ref_tracker_dir *dir, } EXPORT_SYMBOL(ref_tracker_dir_print); +int ref_tracker_dir_snprint(struct ref_tracker_dir *dir, char *buf, size_t size) +{ + struct ostream os = { .buf = buf, .size = size }; + unsigned long flags; + + spin_lock_irqsave(&dir->lock, flags); + __ref_tracker_dir_pr_ostream(dir, 16, &os); + spin_unlock_irqrestore(&dir->lock, flags); + + return os.used; +} +EXPORT_SYMBOL(ref_tracker_dir_snprint); + void ref_tracker_dir_exit(struct ref_tracker_dir *dir) { struct ref_tracker *tracker, *n; From patchwork Fri Jun 2 10:21:36 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrzej Hajda X-Patchwork-Id: 13265032 X-Patchwork-Delegate: kuba@kernel.org Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A263D18C0F for ; Fri, 2 Jun 2023 10:22:10 +0000 (UTC) Received: from mga09.intel.com (mga09.intel.com [134.134.136.24]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7A309132; Fri, 2 Jun 2023 03:22:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685701329; x=1717237329; h=from:date:subject:mime-version:content-transfer-encoding: message-id:references:in-reply-to:to:cc; bh=aF+00fMmCxB5g2z9dZZXQyc7aZrPeManS84M0BOgQmc=; b=Ji+g8NPtFP5ReehOvBW/GcMc04TY4r2x3deAIMvzv4ckGHuvGHcLx45T hasD/slbVCPzClzwi9u6Es8Kk1aXdp3fNrbFWjmrWWWVqRCkYbXwhlJjp ZJwrkkYtOXLptgp7bMN/5WAs3T41h0Re+Z/XXpO3o7p0+YaqicBEdn51F nfWtUn8DBVBZF+U8JFSrmFfiRKhGRIAE4StFeYJpwA3TdJWj4AP5CzGdj 
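A sketch of the debugfs use case mentioned in the commit message (not part of
the series; the file name, foo_refs_read() and the fops are hypothetical).
ref_tracker_dir_snprint() fills a caller-supplied buffer and returns the number
of bytes used, which maps directly onto a debugfs read handler.

#include <linux/debugfs.h>
#include <linux/fs.h>
#include <linux/module.h>
#include <linux/ref_tracker.h>
#include <linux/slab.h>

static ssize_t foo_refs_read(struct file *file, char __user *ubuf,
			     size_t count, loff_t *ppos)
{
	struct ref_tracker_dir *dir = file->private_data;
	char *buf;
	int used;
	ssize_t ret;

	buf = kmalloc(PAGE_SIZE, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	/* snapshot the per-stack report into buf, then copy it to userspace */
	used = ref_tracker_dir_snprint(dir, buf, PAGE_SIZE);
	ret = simple_read_from_buffer(ubuf, count, ppos, buf, used);
	kfree(buf);
	return ret;
}

static const struct file_operations foo_refs_fops = {
	.owner = THIS_MODULE,
	.open = simple_open,
	.read = foo_refs_read,
};

/* e.g. debugfs_create_file("foo_refs", 0444, parent, &foo->refs, &foo_refs_fops); */
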
From patchwork Fri Jun 2 10:21:36 2023
X-Patchwork-Submitter: Andrzej Hajda
X-Patchwork-Id: 13265032
X-Patchwork-Delegate: kuba@kernel.org
From: Andrzej Hajda
Date: Fri, 02 Jun 2023 12:21:36 +0200
Subject: [PATCH v9 4/4] lib/ref_tracker: remove warnings in case of allocation failure
X-Mailing-List: netdev@vger.kernel.org
Message-Id: <20230224-track_gt-v9-4-5b47a33f55d1@intel.com>
References: <20230224-track_gt-v9-0-5b47a33f55d1@intel.com>
In-Reply-To: <20230224-track_gt-v9-0-5b47a33f55d1@intel.com>
To: Eric Dumazet, Jakub Kicinski, "David S. Miller"
Cc: Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin,
    linux-kernel@vger.kernel.org, intel-gfx@lists.freedesktop.org,
    dri-devel@lists.freedesktop.org, Chris Wilson, netdev@vger.kernel.org,
    Dmitry Vyukov, Andi Shyti, Andrzej Hajda
X-Mailer: b4 0.12.2

The library can handle allocation failures, so __GFP_NOWARN has been added to
every allocation to avoid allocation-failure warnings. Moreover, GFP_ATOMIC has
been replaced with GFP_NOWAIT for the stack allocation on the tracker free call.

Signed-off-by: Andrzej Hajda
Reviewed-by: Andi Shyti
Reviewed-by: Eric Dumazet
---
 lib/ref_tracker.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/lib/ref_tracker.c b/lib/ref_tracker.c
index cce4614b07940f..cf5609b1ca7936 100644
--- a/lib/ref_tracker.c
+++ b/lib/ref_tracker.c
@@ -189,7 +189,7 @@ int ref_tracker_alloc(struct ref_tracker_dir *dir,
 	unsigned long entries[REF_TRACKER_STACK_ENTRIES];
 	struct ref_tracker *tracker;
 	unsigned int nr_entries;
-	gfp_t gfp_mask = gfp;
+	gfp_t gfp_mask = gfp | __GFP_NOWARN;
 	unsigned long flags;
 
 	WARN_ON_ONCE(dir->dead);
@@ -237,7 +237,8 @@ int ref_tracker_free(struct ref_tracker_dir *dir,
 		return -EEXIST;
 	}
 	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 1);
-	stack_handle = stack_depot_save(entries, nr_entries, GFP_ATOMIC);
+	stack_handle = stack_depot_save(entries, nr_entries,
+					GFP_NOWAIT | __GFP_NOWARN);
 
 	spin_lock_irqsave(&dir->lock, flags);
 	if (tracker->dead) {
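For reference, a hedged sketch (hypothetical helper, not part of the series) of
the allocation pattern this patch standardizes on: non-sleeping and silent on
failure, with the caller expected to degrade gracefully when NULL is returned,
as the ref_tracker code does with its stack buffer.

#include <linux/slab.h>

static char *grab_scratch_buffer(size_t size)
{
	/*
	 * GFP_NOWAIT: never sleep, so this is safe under spinlocks and in
	 * atomic context; __GFP_NOWARN: no allocation-failure splat, because
	 * running without the buffer is an acceptable fallback for debug
	 * output.
	 */
	return kmalloc(size, GFP_NOWAIT | __GFP_NOWARN);
}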