diff mbox series

[net] net: fix races in netdev_tx_sent_queue()/dev_watchdog()

Message ID 20241015194118.3951657-1-edumazet@google.com (mailing list archive)
State Accepted
Commit 95ecba62e2fd201bcdcca636f5d774f1cd4f1458
Delegated to: Netdev Maintainers
Headers show
Series [net] net: fix races in netdev_tx_sent_queue()/dev_watchdog() | expand

Checks

Context Check Description
netdev/series_format success Single patches do not need cover letters
netdev/tree_selection success Clearly marked for net, async
netdev/ynl success Generated files up to date; no warnings/errors; no diff in generated;
netdev/fixes_present success Fixes tag present in non-next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 42 this patch: 42
netdev/build_tools success Errors and warnings before: 0 (+0) this patch: 0 (+0)
netdev/cc_maintainers warning 4 maintainers not CCed: andrew+netdev@lunn.ch jhs@mojatatu.com jiri@resnulli.us xiyou.wangcong@gmail.com
netdev/build_clang success Errors and warnings before: 80 this patch: 80
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success Fixes tag looks correct
netdev/build_allmodconfig_warn success Errors and warnings before: 4142 this patch: 4142
netdev/checkpatch warning WARNING: line length of 84 exceeds 80 columns
netdev/build_clang_rust success No Rust files in patch. Skipping build
netdev/kdoc success Errors and warnings before: 22 this patch: 22
netdev/source_inline success Was 0 now: 0
netdev/contest success net-next-2024-10-19--00-00 (tests: 777)

Commit Message

Eric Dumazet Oct. 15, 2024, 7:41 p.m. UTC
Some workloads hit the infamous dev_watchdog() message:

"NETDEV WATCHDOG: eth0 (xxxx): transmit queue XX timed out"

It seems possible to hit this even for perfectly normal,
BQL-enabled drivers:

1) Assume a TX queue was idle for more than dev->watchdog_timeo
   (5 seconds unless changed by the driver)

2) Assume a big packet is sent, exceeding current BQL limit.

3) Driver ndo_start_xmit() puts the packet in TX ring,
   and netdev_tx_sent_queue() is called.

4) QUEUE_STATE_STACK_XOFF could be set from netdev_tx_sent_queue()
   before txq->trans_start has been written.

5) txq->trans_start is written later, from netdev_start_xmit()

    if (rc == NETDEV_TX_OK)
          txq_trans_update(txq)

dev_watchdog() running on another CPU could read the old
txq->trans_start, and then see QUEUE_STATE_STACK_XOFF, because
step 5) has not happened yet.

To solve the issue, write txq->trans_start right before either XOFF bit
is set:

- __QUEUE_STATE_DRV_XOFF from netif_tx_stop_queue()
- __QUEUE_STATE_STACK_XOFF from netdev_tx_sent_queue()

From dev_watchdog(), we have to read txq->state before txq->trans_start.

Add memory barriers to enforce correct ordering.

In the future, we could avoid writing over txq->trans_start for normal
operations, and rename this field to txq->xoff_start_time.

Fixes: bec251bc8b6a ("net: no longer stop all TX queues in dev_watchdog()")
Signed-off-by: Eric Dumazet <edumazet@google.com>
---
 include/linux/netdevice.h | 12 ++++++++++++
 net/sched/sch_generic.c   |  8 +++++++-
 2 files changed, 19 insertions(+), 1 deletion(-)

Comments

Willem de Bruijn Oct. 16, 2024, 5:11 p.m. UTC | #1
Eric Dumazet wrote:
> Some workloads hit the infamous dev_watchdog() message:
> 
> "NETDEV WATCHDOG: eth0 (xxxx): transmit queue XX timed out"
> 
> [...]

Reviewed-by: Willem de Bruijn <willemb@google.com>
Toke Høiland-Jørgensen Oct. 17, 2024, 4:11 p.m. UTC | #2
Eric Dumazet <edumazet@google.com> writes:

> Some workloads hit the infamous dev_watchdog() message:
>
> "NETDEV WATCHDOG: eth0 (xxxx): transmit queue XX timed out"
>
> It seems possible to hit this even for perfectly normal
> BQL enabled drivers:
>
> 1) Assume a TX queue was idle for more than dev->watchdog_timeo
>    (5 seconds unless changed by the driver)
>
> 2) Assume a big packet is sent, exceeding current BQL limit.
>
> 3) Driver ndo_start_xmit() puts the packet in TX ring,
>    and netdev_tx_sent_queue() is called.
>
> 4) QUEUE_STATE_STACK_XOFF could be set from netdev_tx_sent_queue()
>    before txq->trans_start has been written.
>
> 5) txq->trans_start is written later, from netdev_start_xmit()
>
>     if (rc == NETDEV_TX_OK)
>           txq_trans_update(txq)
>
> dev_watchdog() running on another cpu could read the old
> txq->trans_start, and then see QUEUE_STATE_STACK_XOFF, because 5)
> did not happen yet.
>
> To solve the issue, write txq->trans_start right before one XOFF bit
> is set :
>
> - _QUEUE_STATE_DRV_XOFF from netif_tx_stop_queue()
> - __QUEUE_STATE_STACK_XOFF from netdev_tx_sent_queue()
>
> From dev_watchdog(), we have to read txq->state before txq->trans_start.
>
> Add memory barriers to enforce correct ordering.
>
> In the future, we could avoid writing over txq->trans_start for normal
> operations, and rename this field to txq->xoff_start_time.
>
> Fixes: bec251bc8b6a ("net: no longer stop all TX queues in dev_watchdog()")
> Signed-off-by: Eric Dumazet <edumazet@google.com>

Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
patchwork-bot+netdevbpf@kernel.org Oct. 21, 2024, 11:10 a.m. UTC | #3
Hello:

This patch was applied to netdev/net.git (main)
by Paolo Abeni <pabeni@redhat.com>:

On Tue, 15 Oct 2024 19:41:18 +0000 you wrote:
> Some workloads hit the infamous dev_watchdog() message:
> 
> "NETDEV WATCHDOG: eth0 (xxxx): transmit queue XX timed out"
> 
> It seems possible to hit this even for perfectly normal
> BQL enabled drivers:
> 
> [...]

Here is the summary with links:
  - [net] net: fix races in netdev_tx_sent_queue()/dev_watchdog()
    https://git.kernel.org/netdev/net/c/95ecba62e2fd

You are awesome, thank you!

Patch

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 4d20c776a4ff3d0e881b8d9b99901edb35f66da2..8896705ccd638bcb7d2ca8f3905351fc823f71b8 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -3325,6 +3325,12 @@  static inline void netif_tx_wake_all_queues(struct net_device *dev)
 
 static __always_inline void netif_tx_stop_queue(struct netdev_queue *dev_queue)
 {
+	/* Paired with READ_ONCE() from dev_watchdog() */
+	WRITE_ONCE(dev_queue->trans_start, jiffies);
+
+	/* This barrier is paired with smp_mb() from dev_watchdog() */
+	smp_mb__before_atomic();
+
 	/* Must be an atomic op see netif_txq_try_stop() */
 	set_bit(__QUEUE_STATE_DRV_XOFF, &dev_queue->state);
 }
@@ -3451,6 +3457,12 @@  static inline void netdev_tx_sent_queue(struct netdev_queue *dev_queue,
 	if (likely(dql_avail(&dev_queue->dql) >= 0))
 		return;
 
+	/* Paired with READ_ONCE() from dev_watchdog() */
+	WRITE_ONCE(dev_queue->trans_start, jiffies);
+
+	/* This barrier is paired with smp_mb() from dev_watchdog() */
+	smp_mb__before_atomic();
+
 	set_bit(__QUEUE_STATE_STACK_XOFF, &dev_queue->state);
 
 	/*
diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index 2af24547a82c49efc64528fd27087144c4f43b7c..38ec18f73de43aed565c653fffb838f54e7c824b 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -512,9 +512,15 @@  static void dev_watchdog(struct timer_list *t)
 				struct netdev_queue *txq;
 
 				txq = netdev_get_tx_queue(dev, i);
-				trans_start = READ_ONCE(txq->trans_start);
 				if (!netif_xmit_stopped(txq))
 					continue;
+
+				/* Paired with WRITE_ONCE() + smp_mb...() in
+				 * netdev_tx_sent_queue() and netif_tx_stop_queue().
+				 */
+				smp_mb();
+				trans_start = READ_ONCE(txq->trans_start);
+
 				if (time_after(jiffies, trans_start + dev->watchdog_timeo)) {
 					timedout_ms = jiffies_to_msecs(jiffies - trans_start);
 					atomic_long_inc(&txq->trans_timeout);