[RFC] net: rtnetlink: remove local list in __linkwatch_run_queue()

Message ID: 20231204211952.01b2d4ff587d.I698b72219d9f6ce789bd209b8f6dffd0ca32a8f2@changeid (mailing list archive)
State: Superseded
Delegated to: Netdev Maintainers
Series: [RFC] net: rtnetlink: remove local list in __linkwatch_run_queue()

Checks

Context | Check | Description
netdev/series_format | warning | Single patches do not need cover letters; Target tree name not specified in the subject
netdev/tree_selection | success | Guessed tree name to be net-next
netdev/ynl | success | Generated files up to date; no warnings/errors; no diff in generated
netdev/fixes_present | success | Fixes tag not required for -next series
netdev/header_inline | success | No static functions without inline keyword in header files
netdev/build_32bit | success | Errors and warnings before: 1117 this patch: 1117
netdev/cc_maintainers | warning | 3 maintainers not CCed: kuba@kernel.org pabeni@redhat.com edumazet@google.com
netdev/build_clang | success | Errors and warnings before: 1143 this patch: 1143
netdev/verify_signedoff | success | Signed-off-by tag matches author and committer
netdev/deprecated_api | success | None detected
netdev/check_selftest | success | No net selftest shell script
netdev/verify_fixes | success | No Fixes tag
netdev/build_allmodconfig_warn | success | Errors and warnings before: 1144 this patch: 1144
netdev/checkpatch | success | total: 0 errors, 0 warnings, 0 checks, 32 lines checked
netdev/build_clang_rust | success | No Rust files in patch. Skipping build
netdev/kdoc | success | Errors and warnings before: 0 this patch: 0
netdev/source_inline | success | Was 0 now: 0

Commit Message

Johannes Berg Dec. 4, 2023, 8:19 p.m. UTC
From: Johannes Berg <johannes.berg@intel.com>

Due to linkwatch_forget_dev() (and perhaps others?) checking for
list_empty(&dev->link_watch_list), we must have all manipulations
of even the local on-stack list 'wrk' here under the spinlock, since
even that list can otherwise be reached via dev->link_watch_list.

This is already the case, but it makes the code a bit
counter-intuitive, since local lists are usually used precisely
to _not_ have to take locks for their local use.

Remove the local list as it doesn't seem to serve any purpose.
While at it, move a variable declaration into the loop using it.

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
---
 net/core/link_watch.c | 13 ++++---------
 1 file changed, 4 insertions(+), 9 deletions(-)
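
For context, a hedged sketch of the linkwatch_forget_dev() pattern the
commit message refers to, simplified from net/core/link_watch.c (see the
kernel tree for the exact code). The list_empty() check probes the
device's *embedded* node, so it fires regardless of which list -- the
global lweventlist or a local on-stack one -- the node currently sits on:

void linkwatch_forget_dev(struct net_device *dev)
{
	unsigned long flags;
	int clean = 0;

	spin_lock_irqsave(&lweventlist_lock, flags);
	/* The node is probed via the device, not via any list head,
	 * so it is found even while spliced onto a private list. */
	if (!list_empty(&dev->link_watch_list)) {
		list_del_init(&dev->link_watch_list);
		clean = 1;
	}
	spin_unlock_irqrestore(&lweventlist_lock, flags);
	if (clean)
		linkwatch_do_dev(dev);
}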

Comments

Jiri Pirko Dec. 5, 2023, 11:22 a.m. UTC | #1
Mon, Dec 04, 2023 at 09:19:53PM CET, johannes@sipsolutions.net wrote:
>From: Johannes Berg <johannes.berg@intel.com>

Why rfc?


>
>Due to linkwatch_forget_dev() (and perhaps others?) checking for
>list_empty(&dev->link_watch_list), we must have all manipulations
>of even the local on-stack list 'wrk' here under the spinlock, since
>even that list can otherwise be reached via dev->link_watch_list.
>
>This is already the case, but it makes the code a bit
>counter-intuitive, since local lists are usually used precisely
>to _not_ have to take locks for their local use.
>
>Remove the local list as it doesn't seem to serve any purpose.
>While at it, move a variable declaration into the loop using it.
>
>Signed-off-by: Johannes Berg <johannes.berg@intel.com>

Reviewed-by: Jiri Pirko <jiri@nvidia.com>
Johannes Berg Dec. 5, 2023, 11:24 a.m. UTC | #2
On Tue, 2023-12-05 at 12:22 +0100, Jiri Pirko wrote:
> Mon, Dec 04, 2023 at 09:19:53PM CET, johannes@sipsolutions.net wrote:
> > From: Johannes Berg <johannes.berg@intel.com>
> 
> Why rfc?

I thought maybe someone could come up with a reason it actually makes
sense? :)

johannes

Patch

diff --git a/net/core/link_watch.c b/net/core/link_watch.c
index c469d1c4db5d..ed3e5391fa79 100644
--- a/net/core/link_watch.c
+++ b/net/core/link_watch.c
@@ -192,8 +192,6 @@ static void __linkwatch_run_queue(int urgent_only)
 #define MAX_DO_DEV_PER_LOOP	100
 
 	int do_dev = MAX_DO_DEV_PER_LOOP;
-	struct net_device *dev;
-	LIST_HEAD(wrk);
 
 	/* Give urgent case more budget */
 	if (urgent_only)
@@ -215,11 +213,11 @@ static void __linkwatch_run_queue(int urgent_only)
 	clear_bit(LW_URGENT, &linkwatch_flags);
 
 	spin_lock_irq(&lweventlist_lock);
-	list_splice_init(&lweventlist, &wrk);
+	while (!list_empty(&lweventlist) && do_dev > 0) {
+		struct net_device *dev;
 
-	while (!list_empty(&wrk) && do_dev > 0) {
-
-		dev = list_first_entry(&wrk, struct net_device, link_watch_list);
+		dev = list_first_entry(&lweventlist, struct net_device,
+				       link_watch_list);
 		list_del_init(&dev->link_watch_list);
 
 		if (!netif_device_present(dev) ||
@@ -237,9 +235,6 @@ static void __linkwatch_run_queue(int urgent_only)
 		spin_lock_irq(&lweventlist_lock);
 	}
 
-	/* Add the remaining work back to lweventlist */
-	list_splice_init(&wrk, &lweventlist);
-
 	if (!list_empty(&lweventlist))
 		linkwatch_schedule_work(0);
 	spin_unlock_irq(&lweventlist_lock);
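
To see why the old on-stack list never made the nodes private, here is
a minimal, self-contained user-space sketch (not kernel code; the list
helpers below are hand-rolled stand-ins that only mimic
include/linux/list.h). After the splice, list_empty() on the device's
embedded node is still false, so a concurrent linkwatch_forget_dev()
holding lweventlist_lock could find and unlink it from 'wrk' -- which
is exactly why every touch of the old local list also had to hold the
lock:

#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

#define LIST_HEAD_INIT(name) { &(name), &(name) }

static int list_empty(const struct list_head *h)
{
	return h->next == h;
}

static void list_add_tail(struct list_head *n, struct list_head *h)
{
	n->prev = h->prev;
	n->next = h;
	h->prev->next = n;
	h->prev = n;
}

/* Move all entries from 'from' to the front of 'to', like the
 * kernel's list_splice_init(), leaving 'from' empty again. */
static void list_splice_init(struct list_head *from, struct list_head *to)
{
	if (list_empty(from))
		return;
	from->next->prev = to;
	from->prev->next = to->next;
	to->next->prev = from->prev;
	to->next = from->next;
	from->next = from->prev = from;
}

int main(void)
{
	struct list_head lweventlist = LIST_HEAD_INIT(lweventlist);
	struct list_head wrk = LIST_HEAD_INIT(wrk);
	/* Stands in for dev->link_watch_list. */
	struct list_head node = LIST_HEAD_INIT(node);

	list_add_tail(&node, &lweventlist);
	list_splice_init(&lweventlist, &wrk);

	/* Prints 0: the node now lives on the "private" wrk list, yet
	 * is still observably non-empty via its own embedded pointers,
	 * and thus still reachable through the device. */
	printf("list_empty(&node) after splice: %d\n", list_empty(&node));
	return 0;
}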