
[net-next] Revert "net: rtnetlink: remove local list in __linkwatch_run_queue()"

Message ID 20231208105214.42304677dc64.I9be9486d2fa97a396d0c73e455d5cab5f376b837@changeid (mailing list archive)
State Accepted
Commit 9a64d4c93eee6b2efb7a02ec98d9480946424509
Delegated to: Netdev Maintainers
Series [net-next] Revert "net: rtnetlink: remove local list in __linkwatch_run_queue()"

Checks

Context Check Description
netdev/series_format success Single patches do not need cover letters
netdev/tree_selection success Clearly marked for net-next
netdev/ynl success Generated files up to date; no warnings/errors; no diff in generated;
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 1117 this patch: 1117
netdev/cc_maintainers fail 2 blamed authors not CCed: kuba@kernel.org jiri@resnulli.us; 4 maintainers not CCed: kuba@kernel.org edumazet@google.com pabeni@redhat.com jiri@resnulli.us
netdev/build_clang success Errors and warnings before: 1143 this patch: 1143
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success Fixes tag looks correct
netdev/build_allmodconfig_warn success Errors and warnings before: 1144 this patch: 1144
netdev/checkpatch warning WARNING: line length of 81 exceeds 80 columns
netdev/build_clang_rust success No Rust files in patch. Skipping build
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0

Commit Message

Johannes Berg Dec. 8, 2023, 9:52 a.m. UTC
From: Johannes Berg <johannes.berg@intel.com>

This reverts commit b8dbbbc535a9 ("net: rtnetlink: remove local list
in __linkwatch_run_queue()"). It's evidently broken: when there's
non-urgent work that gets added back, the loop can never finish.

While reverting, add a note about that.

Reported-by: Marek Szyprowski <m.szyprowski@samsung.com>
Fixes: b8dbbbc535a9 ("net: rtnetlink: remove local list in __linkwatch_run_queue()")
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
---
Clear case of me being asleep at the wheel ... sorry about that!
---
 net/core/link_watch.c | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)

Comments

patchwork-bot+netdevbpf@kernel.org Dec. 11, 2023, 11 a.m. UTC | #1
Hello:

This patch was applied to netdev/net-next.git (main)
by David S. Miller <davem@davemloft.net>:

On Fri,  8 Dec 2023 10:52:15 +0100 you wrote:
> From: Johannes Berg <johannes.berg@intel.com>
> 
> This reverts commit b8dbbbc535a9 ("net: rtnetlink: remove local list
> in __linkwatch_run_queue()"). It's evidently broken: when there's
> non-urgent work that gets added back, the loop can never finish.
> 
> [...]

Here is the summary with links:
  - [net-next] Revert "net: rtnetlink: remove local list in __linkwatch_run_queue()"
    https://git.kernel.org/netdev/net-next/c/9a64d4c93eee

You are awesome, thank you!

Patch

diff --git a/net/core/link_watch.c b/net/core/link_watch.c
index 7be5b3ab32bd..429571c258da 100644
--- a/net/core/link_watch.c
+++ b/net/core/link_watch.c
@@ -192,6 +192,11 @@ static void __linkwatch_run_queue(int urgent_only)
 #define MAX_DO_DEV_PER_LOOP	100
 
 	int do_dev = MAX_DO_DEV_PER_LOOP;
+	/* Use a local list here since we add non-urgent
+	 * events back to the global one when called with
+	 * urgent_only=1.
+	 */
+	LIST_HEAD(wrk);
 
 	/* Give urgent case more budget */
 	if (urgent_only)
@@ -213,11 +218,12 @@ static void __linkwatch_run_queue(int urgent_only)
 	clear_bit(LW_URGENT, &linkwatch_flags);
 
 	spin_lock_irq(&lweventlist_lock);
-	while (!list_empty(&lweventlist) && do_dev > 0) {
+	list_splice_init(&lweventlist, &wrk);
+
+	while (!list_empty(&wrk) && do_dev > 0) {
 		struct net_device *dev;
 
-		dev = list_first_entry(&lweventlist, struct net_device,
-				       link_watch_list);
+		dev = list_first_entry(&wrk, struct net_device, link_watch_list);
 		list_del_init(&dev->link_watch_list);
 
 		if (!netif_device_present(dev) ||
@@ -235,6 +241,9 @@ static void __linkwatch_run_queue(int urgent_only)
 		spin_lock_irq(&lweventlist_lock);
 	}
 
+	/* Add the remaining work back to lweventlist */
+	list_splice_init(&wrk, &lweventlist);
+
 	if (!list_empty(&lweventlist))
 		linkwatch_schedule_work(0);
 	spin_unlock_irq(&lweventlist_lock);