[v3,net] net: sched: add barrier to fix packet stuck problem for lockless qdisc

Message ID 20220527091143.120509-1-gjfang@linux.alibaba.com (mailing list archive)
State Superseded
Delegated to: Netdev Maintainers
Series [v3,net] net: sched: add barrier to fix packet stuck problem for lockless qdisc

Checks

Context Check Description
netdev/tree_selection success Clearly marked for net
netdev/fixes_present success Fixes tag present in non-next series
netdev/subject_prefix success
netdev/cover_letter success Single patches do not need cover letters
netdev/patch_count success
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 1450 this patch: 1450
netdev/cc_maintainers success CCed 6 of 6 maintainers
netdev/build_clang success Errors and warnings before: 169 this patch: 169
netdev/module_param success Was 0 now: 0
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success Fixes tag looks correct
netdev/build_allmodconfig_warn success Errors and warnings before: 1463 this patch: 1463
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 9 lines checked
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0

Commit Message

Guoju Fang May 27, 2022, 9:11 a.m. UTC
In qdisc_run_end(), the spin_unlock() only has store-release semantics,
which guarantee that all earlier memory accesses are visible before it.
But the subsequent test_bit() may be reordered ahead of the spin_unlock(),
which can cause a packet stuck problem.

The concurrent operations can be described as below:
         CPU 0                      |          CPU 1
   qdisc_run_end()                  |     qdisc_run_begin()
          .                         |           .
 ----> /* may be reordered here */  |           .
|         .                         |           .
|     spin_unlock()                 |         set_bit()
|         .                         |         smp_mb__after_atomic()
 ---- test_bit()                    |         spin_trylock()
          .                         |           .

Consider the following sequence of events:
    CPU 0 reorders test_bit() ahead and sees MISSED = 0
    CPU 1 calls set_bit()
    CPU 1 calls spin_trylock(), which fails
    CPU 0 executes spin_unlock()

At the end of the sequence, CPU 0 has called spin_unlock() but does not
reschedule the qdisc because its reordered test_bit() saw MISSED = 0.
The skb enqueued on CPU 1 is taken by no one until the next CPU pushing
to the qdisc (if ever ...) notices and dequeues it.

So an explicit barrier is needed between spin_unlock() and test_bit()
to ensure the correct ordering.
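
With the barrier in place, the two sides pair up as in the simplified
sketch below (condensed for illustration from include/net/sch_generic.h;
not the verbatim source):

	/* CPU 1: qdisc_run_begin() slow path (pre-existing code) */
	set_bit(__QDISC_STATE_MISSED, &qdisc->state);
	smp_mb__after_atomic();  /* order set_bit() before spin_trylock() */
	return spin_trylock(&qdisc->seqlock);

	/* CPU 0: qdisc_run_end() with this patch applied */
	spin_unlock(&qdisc->seqlock);
	smp_mb();  /* order the unlock store before the test_bit() load */
	if (unlikely(test_bit(__QDISC_STATE_MISSED, &qdisc->state)))
		__netif_schedule(qdisc);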

Fixes: 89837eb4b246 ("net: sched: add barrier to ensure correct ordering for lockless qdisc")
Signed-off-by: Guoju Fang <gjfang@linux.alibaba.com>
---
V2 -> V3: Do not split the Fixes tag across multiple lines
V1 -> V2: Rewrite comments
---
 include/net/sch_generic.h | 3 +++
 1 file changed, 3 insertions(+)

Comments

Yunsheng Lin May 28, 2022, 12:51 a.m. UTC | #1
On 2022/5/27 17:11, Guoju Fang wrote:
> In qdisc_run_end(), the spin_unlock() only has store-release semantics,
> which guarantee that all earlier memory accesses are visible before it.
> But the subsequent test_bit() may be reordered ahead of the spin_unlock(),
> which can cause a packet stuck problem.
> 
> The concurrent operations can be described as below:
>          CPU 0                      |          CPU 1
>    qdisc_run_end()                  |     qdisc_run_begin()
>           .                         |           .
>  ----> /* may be reordered here */  |           .
> |         .                         |           .
> |     spin_unlock()                 |         set_bit()
> |         .                         |         smp_mb__after_atomic()
>  ---- test_bit()                    |         spin_trylock()
>           .                         |           .
> 
> Consider the following sequence of events:
>     CPU 0 reorders test_bit() ahead and sees MISSED = 0
>     CPU 1 calls set_bit()
>     CPU 1 calls spin_trylock(), which fails
>     CPU 0 executes spin_unlock()
> 
> At the end of the sequence, CPU 0 has called spin_unlock() but does not
> reschedule the qdisc because its reordered test_bit() saw MISSED = 0.
> The skb enqueued on CPU 1 is taken by no one until the next CPU pushing
> to the qdisc (if ever ...) notices and dequeues it.
> 
> So an explicit barrier is needed between spin_unlock() and test_bit()
> to ensure the correct ordering.

It might be better to mention why smp_mb() is used instead of smp_rmb()
or smp_wmb():

The spin_unlock()/test_bit() ordering is a store-load ordering, which
requires a full memory barrier such as smp_mb().
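
For illustration only (a generic store-buffering sketch, not code from
this patch): each CPU stores to one flag and then loads the other, which
is exactly the pattern that needs a full barrier on both sides:

	/* CPU 0 */                     /* CPU 1 */
	WRITE_ONCE(x, 1);               WRITE_ONCE(y, 1);
	smp_mb();                       smp_mb();
	r0 = READ_ONCE(y);              r1 = READ_ONCE(x);

	/* With both smp_mb()s the outcome r0 == 0 && r1 == 0 is
	 * forbidden; with smp_rmb() or smp_wmb() it is still allowed,
	 * as neither orders a store against a later load.
	 */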

> 
> Fixes: 89837eb4b246 ("net: sched: add barrier to ensure correct ordering for lockless qdisc")

The Fixes tag should be:

Fixes: a90c57f2cedd ("net: sched: fix packet stuck problem for lockless qdisc")

Other than that, it looks good to me:
Reviewed-by: Yunsheng Lin <linyunsheng@huawei.com>

> Signed-off-by: Guoju Fang <gjfang@linux.alibaba.com>
> ---
> V2 -> V3: Do not split the Fixes tag across multiple lines
> V1 -> V2: Rewrite comments
> ---
>  include/net/sch_generic.h | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
> index 9bab396c1f3b..8a8738642ca0 100644
> --- a/include/net/sch_generic.h
> +++ b/include/net/sch_generic.h
> @@ -229,6 +229,9 @@ static inline void qdisc_run_end(struct Qdisc *qdisc)
>  	if (qdisc->flags & TCQ_F_NOLOCK) {
>  		spin_unlock(&qdisc->seqlock);
>  
> +		/* ensure ordering between spin_unlock() and test_bit() */
> +		smp_mb();
> +
>  		if (unlikely(test_bit(__QDISC_STATE_MISSED,
>  				      &qdisc->state)))
>  			__netif_schedule(qdisc);
>

Patch

diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
index 9bab396c1f3b..8a8738642ca0 100644
--- a/include/net/sch_generic.h
+++ b/include/net/sch_generic.h
@@ -229,6 +229,9 @@ static inline void qdisc_run_end(struct Qdisc *qdisc)
 	if (qdisc->flags & TCQ_F_NOLOCK) {
 		spin_unlock(&qdisc->seqlock);
 
+		/* ensure ordering between spin_unlock() and test_bit() */
+		smp_mb();
+
 		if (unlikely(test_bit(__QDISC_STATE_MISSED,
 				      &qdisc->state)))
 			__netif_schedule(qdisc);
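
For context, the two helpers bracket the dequeue loop roughly as in the
simplified sketch below (condensed for illustration from
include/net/pkt_sched.h; not the verbatim source):

	static inline void qdisc_run(struct Qdisc *q)
	{
		if (qdisc_run_begin(q)) {   /* may set MISSED and give up */
			__qdisc_run(q);     /* dequeue and transmit skbs */
			qdisc_run_end(q);   /* recheck MISSED after unlock */
		}
	}

Without the smp_mb(), the MISSED recheck in qdisc_run_end() can be
satisfied by a stale value loaded before the unlock became visible,
leaving the skb enqueued by the concurrent qdisc_run_begin() caller
stranded.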