[net-next,02/14] net_sched: cake: implement lockless cake_dump()

Message ID 20240415132054.3822230-3-edumazet@google.com (mailing list archive)
State Superseded
Delegated to: Netdev Maintainers
Series net_sched: first series for RTNL-less qdisc dumps

Checks

Context Check Description
netdev/series_format success Posting correctly formatted
netdev/tree_selection success Clearly marked for net-next
netdev/ynl success Generated files up to date; no warnings/errors; no diff in generated;
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 926 this patch: 926
netdev/build_tools success No tools touched, skip
netdev/cc_maintainers warning 2 maintainers not CCed: toke@toke.dk cake@lists.bufferbloat.net
netdev/build_clang success Errors and warnings before: 937 this patch: 937
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 937 this patch: 937
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 217 lines checked
netdev/build_clang_rust success No Rust files in patch. Skipping build
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0
netdev/contest success net-next-2024-04-18--00-00 (tests: 961)

Commit Message

Eric Dumazet April 15, 2024, 1:20 p.m. UTC
Instead of relying on RTNL, cake_dump() can use READ_ONCE()
annotations, paired with WRITE_ONCE() ones in cake_change().

Signed-off-by: Eric Dumazet <edumazet@google.com>
---
 net/sched/sch_cake.c | 98 +++++++++++++++++++++++++-------------------
 1 file changed, 55 insertions(+), 43 deletions(-)
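
For readers following along, the pattern this series applies is simple:
configuration writers (still serialized by RTNL) publish each field with
WRITE_ONCE(), and the now-lockless dump path reads it back with READ_ONCE(),
so neither side can tear, cache, or re-read the value. A minimal sketch of
the idea; the names below are illustrative, not taken from sch_cake.c:

#include <linux/compiler.h>	/* READ_ONCE() / WRITE_ONCE() */
#include <linux/types.h>

struct demo_cfg {
	u32 interval;
};

/* Writer side, e.g. a ->change() handler still running under RTNL:
 * the marked store cannot be torn into partial writes by the compiler.
 */
static void demo_change(struct demo_cfg *cfg, u32 val)
{
	WRITE_ONCE(cfg->interval, val);
}

/* Reader side, e.g. a ->dump() handler running without RTNL: the
 * marked load happens exactly once and observes either the old or
 * the new value, never a mix of both.
 */
static u32 demo_dump(const struct demo_cfg *cfg)
{
	return READ_ONCE(cfg->interval);
}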

Comments

Simon Horman April 17, 2024, 8:35 a.m. UTC | #1
+ Toke Høiland-Jørgensen <toke@toke.dk>
  cake@lists.bufferbloat.net

On Mon, Apr 15, 2024 at 01:20:42PM +0000, Eric Dumazet wrote:
> Instead of relying on RTNL, cake_dump() can use READ_ONCE()
> annotations, paired with WRITE_ONCE() ones in cake_change().
> 
> Signed-off-by: Eric Dumazet <edumazet@google.com>

...

> @@ -2774,68 +2783,71 @@ static int cake_dump(struct Qdisc *sch, struct sk_buff *skb)
>  {
>  	struct cake_sched_data *q = qdisc_priv(sch);
>  	struct nlattr *opts;
> +	u16 rate_flags;
>  
>  	opts = nla_nest_start_noflag(skb, TCA_OPTIONS);
>  	if (!opts)
>  		goto nla_put_failure;
>  
> -	if (nla_put_u64_64bit(skb, TCA_CAKE_BASE_RATE64, q->rate_bps,
> -			      TCA_CAKE_PAD))
> +	if (nla_put_u64_64bit(skb, TCA_CAKE_BASE_RATE64,
> +			      READ_ONCE(q->rate_bps), TCA_CAKE_PAD))
>  		goto nla_put_failure;
>  
>  	if (nla_put_u32(skb, TCA_CAKE_FLOW_MODE,
> -			q->flow_mode & CAKE_FLOW_MASK))
> +			READ_ONCE(q->flow_mode) & CAKE_FLOW_MASK))
>  		goto nla_put_failure;

Hi Eric,

q->flow_mode is read twice in this function. Once here...

>  
> -	if (nla_put_u32(skb, TCA_CAKE_RTT, q->interval))
> +	if (nla_put_u32(skb, TCA_CAKE_RTT, READ_ONCE(q->interval)))
>  		goto nla_put_failure;
>  
> -	if (nla_put_u32(skb, TCA_CAKE_TARGET, q->target))
> +	if (nla_put_u32(skb, TCA_CAKE_TARGET, READ_ONCE(q->target)))
>  		goto nla_put_failure;
>  
> -	if (nla_put_u32(skb, TCA_CAKE_MEMORY, q->buffer_config_limit))
> +	if (nla_put_u32(skb, TCA_CAKE_MEMORY,
> +			READ_ONCE(q->buffer_config_limit)))
>  		goto nla_put_failure;
>  
> +	rate_flags = READ_ONCE(q->rate_flags);
>  	if (nla_put_u32(skb, TCA_CAKE_AUTORATE,
> -			!!(q->rate_flags & CAKE_FLAG_AUTORATE_INGRESS)))
> +			!!(rate_flags & CAKE_FLAG_AUTORATE_INGRESS)))
>  		goto nla_put_failure;
>  
>  	if (nla_put_u32(skb, TCA_CAKE_INGRESS,
> -			!!(q->rate_flags & CAKE_FLAG_INGRESS)))
> +			!!(rate_flags & CAKE_FLAG_INGRESS)))
>  		goto nla_put_failure;
>  
> -	if (nla_put_u32(skb, TCA_CAKE_ACK_FILTER, q->ack_filter))
> +	if (nla_put_u32(skb, TCA_CAKE_ACK_FILTER, READ_ONCE(q->ack_filter)))
>  		goto nla_put_failure;
>  
>  	if (nla_put_u32(skb, TCA_CAKE_NAT,
> -			!!(q->flow_mode & CAKE_FLOW_NAT_FLAG)))
> +			!!(READ_ONCE(q->flow_mode) & CAKE_FLOW_NAT_FLAG)))
>  		goto nla_put_failure;

... and once here.

I am assuming that it isn't a big deal, but perhaps it is better to save
q->flow_mode into a local variable.

Also, more importantly, q->flow_mode does not seem to be handled
using WRITE_ONCE() in cake_change(). It's a non-trivial case,
which I guess is well served by a mechanism built around a local variable.

...
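
The "mechanism built around a local variable" that Simon suggests, and that
Eric squashes into v2 below, is a read-modify-write through a local: all flag
munging happens on a local copy, and the shared field receives exactly one
marked store. A hedged sketch (only q->flow_mode and CAKE_FLOW_NAT_FLAG are
from sch_cake.c; nat_requested stands in for the netlink attribute test):

	u8 flow_mode = q->flow_mode;	/* snapshot; writers are RTNL-serialized */

	if (nat_requested)		/* hypothetical stand-in for tb[TCA_CAKE_NAT] */
		flow_mode |= CAKE_FLOW_NAT_FLAG;
	else
		flow_mode &= ~CAKE_FLOW_NAT_FLAG;

	/* Publish once: lockless readers see the old or the new mode,
	 * never an intermediate state.
	 */
	WRITE_ONCE(q->flow_mode, flow_mode);
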
Eric Dumazet April 17, 2024, 8:54 a.m. UTC | #2
On Wed, Apr 17, 2024 at 10:35 AM Simon Horman <horms@kernel.org> wrote:
>
> + Toke Høiland-Jørgensen <toke@toke.dk>
>   cake@lists.bufferbloat.net
>
> On Mon, Apr 15, 2024 at 01:20:42PM +0000, Eric Dumazet wrote:
> > Instead of relying on RTNL, cake_dump() can use READ_ONCE()
> > annotations, paired with WRITE_ONCE() ones in cake_change().
> >
> > Signed-off-by: Eric Dumazet <edumazet@google.com>
>
> ...
>
> > ...
> >
> >       if (nla_put_u32(skb, TCA_CAKE_FLOW_MODE,
> > -                     q->flow_mode & CAKE_FLOW_MASK))
> > +                     READ_ONCE(q->flow_mode) & CAKE_FLOW_MASK))
> >               goto nla_put_failure;
>
> Hi Eric,
>
> q->flow_mode is read twice in this function. Once here...
>
> > ...
> >
> >       if (nla_put_u32(skb, TCA_CAKE_NAT,
> > -                     !!(q->flow_mode & CAKE_FLOW_NAT_FLAG)))
> > +                     !!(READ_ONCE(q->flow_mode) & CAKE_FLOW_NAT_FLAG)))
> >               goto nla_put_failure;
>
> ... and once here.
>
> I am assuming that it isn't a big deal, but perhaps it is better to save
> q->flow_mode into a local variable.
>
> Also, more importantly, q->flow_mode does not seem to be handled
> using WRITE_ONCE() in cake_change(). It's a non-trivial case,
> which I guess is well served by a mechanism built around a local variable.

Thanks !

I will squash in v2:

diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
index bb37a0dedcc1e4b3418f6681d87108aad7ea066f..9602dafe32e61d38dc00b0a35e1ee3f530989610 100644
--- a/net/sched/sch_cake.c
+++ b/net/sched/sch_cake.c
@@ -2573,6 +2573,7 @@ static int cake_change(struct Qdisc *sch, struct nlattr *opt,
        struct cake_sched_data *q = qdisc_priv(sch);
        struct nlattr *tb[TCA_CAKE_MAX + 1];
        u16 rate_flags;
+       u8 flow_mode;
        int err;

        err = nla_parse_nested_deprecated(tb, TCA_CAKE_MAX, opt, cake_policy,
@@ -2580,10 +2581,11 @@ static int cake_change(struct Qdisc *sch, struct nlattr *opt,
        if (err < 0)
                return err;

+       flow_mode = q->flow_mode;
        if (tb[TCA_CAKE_NAT]) {
 #if IS_ENABLED(CONFIG_NF_CONNTRACK)
-               q->flow_mode &= ~CAKE_FLOW_NAT_FLAG;
-               q->flow_mode |= CAKE_FLOW_NAT_FLAG *
+               flow_mode &= ~CAKE_FLOW_NAT_FLAG;
+               flow_mode |= CAKE_FLOW_NAT_FLAG *
                        !!nla_get_u32(tb[TCA_CAKE_NAT]);
 #else
                NL_SET_ERR_MSG_ATTR(extack, tb[TCA_CAKE_NAT],
@@ -2609,7 +2611,7 @@ static int cake_change(struct Qdisc *sch, struct nlattr *opt,
        }

        if (tb[TCA_CAKE_FLOW_MODE])
-               q->flow_mode = ((q->flow_mode & CAKE_FLOW_NAT_FLAG) |
+               flow_mode = ((flow_mode & CAKE_FLOW_NAT_FLAG) |
                                (nla_get_u32(tb[TCA_CAKE_FLOW_MODE]) &
                                        CAKE_FLOW_MASK));

@@ -2689,6 +2691,7 @@ static int cake_change(struct Qdisc *sch, struct nlattr *opt,
        }

        WRITE_ONCE(q->rate_flags, rate_flags);
+       WRITE_ONCE(q->flow_mode, flow_mode);
        if (q->tins) {
                sch_tree_lock(sch);
                cake_reconfigure(sch);
@@ -2784,6 +2787,7 @@ static int cake_dump(struct Qdisc *sch, struct sk_buff *skb)
        struct cake_sched_data *q = qdisc_priv(sch);
        struct nlattr *opts;
        u16 rate_flags;
+       u8 flow_mode;

        opts = nla_nest_start_noflag(skb, TCA_OPTIONS);
        if (!opts)
@@ -2793,8 +2797,8 @@ static int cake_dump(struct Qdisc *sch, struct sk_buff *skb)
                              READ_ONCE(q->rate_bps), TCA_CAKE_PAD))
                goto nla_put_failure;

-       if (nla_put_u32(skb, TCA_CAKE_FLOW_MODE,
-                       READ_ONCE(q->flow_mode) & CAKE_FLOW_MASK))
+       flow_mode = READ_ONCE(q->flow_mode);
+       if (nla_put_u32(skb, TCA_CAKE_FLOW_MODE, flow_mode & CAKE_FLOW_MASK))
                goto nla_put_failure;

        if (nla_put_u32(skb, TCA_CAKE_RTT, READ_ONCE(q->interval)))
@@ -2820,7 +2824,7 @@ static int cake_dump(struct Qdisc *sch, struct sk_buff *skb)
                goto nla_put_failure;

        if (nla_put_u32(skb, TCA_CAKE_NAT,
-                       !!(READ_ONCE(q->flow_mode) & CAKE_FLOW_NAT_FLAG)))
+                       !!(flow_mode & CAKE_FLOW_NAT_FLAG)))
                goto nla_put_failure;

        if (nla_put_u32(skb, TCA_CAKE_DIFFSERV_MODE, READ_ONCE(q->tin_mode)))
Simon Horman April 17, 2024, 9:24 a.m. UTC | #3
On Wed, Apr 17, 2024 at 10:54:30AM +0200, Eric Dumazet wrote:
> On Wed, Apr 17, 2024 at 10:35 AM Simon Horman <horms@kernel.org> wrote:
> > ...
> 
> Thanks !
> 
> I will squash in v2:

Likewise, thanks.
This looks good to me.

...
Toke Høiland-Jørgensen April 17, 2024, 12:25 p.m. UTC | #4
Simon Horman <horms@kernel.org> writes:

> + Toke Høiland-Jørgensen <toke@toke.dk>
>   cake@lists.bufferbloat.net

Thanks!

> On Mon, Apr 15, 2024 at 01:20:42PM +0000, Eric Dumazet wrote:
>> Instead of relying on RTNL, cake_dump() can use READ_ONCE()
>> annotations, paired with WRITE_ONCE() ones in cake_change().
>> 
>> Signed-off-by: Eric Dumazet <edumazet@google.com>

Just to be sure I understand this correctly: the idea is that with
READ/WRITE_ONCE annotations, we can dump the qdisc options without
taking the RTNL lock. This means that a dump may not be consistent across a
concurrent reconfig that changes multiple parameters, but each parameter
will be either the new or the old value. Right?

-Toke
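
To make those semantics concrete, consider two of the fields touched by this
patch (new_interval and new_target are hypothetical locals):

	/* cake_change(), serialized by RTNL */
	WRITE_ONCE(q->interval, new_interval);
	WRITE_ONCE(q->target, new_target);

	/* A concurrent lockless cake_dump() may observe any of the four
	 * old/new combinations of the two fields (there is no snapshot
	 * across fields), but never a torn value of either one.
	 */
	u32 interval = READ_ONCE(q->interval);
	u32 target = READ_ONCE(q->target);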

Patch

diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
index 2eabc4dc5b79a83ce423f73c9cccec86f14be7cf..bb37a0dedcc1e4b3418f6681d87108aad7ea066f 100644
--- a/net/sched/sch_cake.c
+++ b/net/sched/sch_cake.c
@@ -2572,6 +2572,7 @@ static int cake_change(struct Qdisc *sch, struct nlattr *opt,
 {
 	struct cake_sched_data *q = qdisc_priv(sch);
 	struct nlattr *tb[TCA_CAKE_MAX + 1];
+	u16 rate_flags;
 	int err;
 
 	err = nla_parse_nested_deprecated(tb, TCA_CAKE_MAX, opt, cake_policy,
@@ -2592,16 +2593,19 @@ static int cake_change(struct Qdisc *sch, struct nlattr *opt,
 	}
 
 	if (tb[TCA_CAKE_BASE_RATE64])
-		q->rate_bps = nla_get_u64(tb[TCA_CAKE_BASE_RATE64]);
+		WRITE_ONCE(q->rate_bps,
+			   nla_get_u64(tb[TCA_CAKE_BASE_RATE64]));
 
 	if (tb[TCA_CAKE_DIFFSERV_MODE])
-		q->tin_mode = nla_get_u32(tb[TCA_CAKE_DIFFSERV_MODE]);
+		WRITE_ONCE(q->tin_mode,
+			   nla_get_u32(tb[TCA_CAKE_DIFFSERV_MODE]));
 
+	rate_flags = q->rate_flags;
 	if (tb[TCA_CAKE_WASH]) {
 		if (!!nla_get_u32(tb[TCA_CAKE_WASH]))
-			q->rate_flags |= CAKE_FLAG_WASH;
+			rate_flags |= CAKE_FLAG_WASH;
 		else
-			q->rate_flags &= ~CAKE_FLAG_WASH;
+			rate_flags &= ~CAKE_FLAG_WASH;
 	}
 
 	if (tb[TCA_CAKE_FLOW_MODE])
@@ -2610,11 +2614,13 @@ static int cake_change(struct Qdisc *sch, struct nlattr *opt,
 					CAKE_FLOW_MASK));
 
 	if (tb[TCA_CAKE_ATM])
-		q->atm_mode = nla_get_u32(tb[TCA_CAKE_ATM]);
+		WRITE_ONCE(q->atm_mode,
+			   nla_get_u32(tb[TCA_CAKE_ATM]));
 
 	if (tb[TCA_CAKE_OVERHEAD]) {
-		q->rate_overhead = nla_get_s32(tb[TCA_CAKE_OVERHEAD]);
-		q->rate_flags |= CAKE_FLAG_OVERHEAD;
+		WRITE_ONCE(q->rate_overhead,
+			   nla_get_s32(tb[TCA_CAKE_OVERHEAD]));
+		rate_flags |= CAKE_FLAG_OVERHEAD;
 
 		q->max_netlen = 0;
 		q->max_adjlen = 0;
@@ -2623,7 +2629,7 @@ static int cake_change(struct Qdisc *sch, struct nlattr *opt,
 	}
 
 	if (tb[TCA_CAKE_RAW]) {
-		q->rate_flags &= ~CAKE_FLAG_OVERHEAD;
+		rate_flags &= ~CAKE_FLAG_OVERHEAD;
 
 		q->max_netlen = 0;
 		q->max_adjlen = 0;
@@ -2632,54 +2638,57 @@ static int cake_change(struct Qdisc *sch, struct nlattr *opt,
 	}
 
 	if (tb[TCA_CAKE_MPU])
-		q->rate_mpu = nla_get_u32(tb[TCA_CAKE_MPU]);
+		WRITE_ONCE(q->rate_mpu,
+			   nla_get_u32(tb[TCA_CAKE_MPU]));
 
 	if (tb[TCA_CAKE_RTT]) {
-		q->interval = nla_get_u32(tb[TCA_CAKE_RTT]);
+		u32 interval = nla_get_u32(tb[TCA_CAKE_RTT]);
 
-		if (!q->interval)
-			q->interval = 1;
+		WRITE_ONCE(q->interval, max(interval, 1U));
 	}
 
 	if (tb[TCA_CAKE_TARGET]) {
-		q->target = nla_get_u32(tb[TCA_CAKE_TARGET]);
+		u32 target = nla_get_u32(tb[TCA_CAKE_TARGET]);
 
-		if (!q->target)
-			q->target = 1;
+		WRITE_ONCE(q->target, max(target, 1U));
 	}
 
 	if (tb[TCA_CAKE_AUTORATE]) {
 		if (!!nla_get_u32(tb[TCA_CAKE_AUTORATE]))
-			q->rate_flags |= CAKE_FLAG_AUTORATE_INGRESS;
+			rate_flags |= CAKE_FLAG_AUTORATE_INGRESS;
 		else
-			q->rate_flags &= ~CAKE_FLAG_AUTORATE_INGRESS;
+			rate_flags &= ~CAKE_FLAG_AUTORATE_INGRESS;
 	}
 
 	if (tb[TCA_CAKE_INGRESS]) {
 		if (!!nla_get_u32(tb[TCA_CAKE_INGRESS]))
-			q->rate_flags |= CAKE_FLAG_INGRESS;
+			rate_flags |= CAKE_FLAG_INGRESS;
 		else
-			q->rate_flags &= ~CAKE_FLAG_INGRESS;
+			rate_flags &= ~CAKE_FLAG_INGRESS;
 	}
 
 	if (tb[TCA_CAKE_ACK_FILTER])
-		q->ack_filter = nla_get_u32(tb[TCA_CAKE_ACK_FILTER]);
+		WRITE_ONCE(q->ack_filter,
+			   nla_get_u32(tb[TCA_CAKE_ACK_FILTER]));
 
 	if (tb[TCA_CAKE_MEMORY])
-		q->buffer_config_limit = nla_get_u32(tb[TCA_CAKE_MEMORY]);
+		WRITE_ONCE(q->buffer_config_limit,
+			   nla_get_u32(tb[TCA_CAKE_MEMORY]));
 
 	if (tb[TCA_CAKE_SPLIT_GSO]) {
 		if (!!nla_get_u32(tb[TCA_CAKE_SPLIT_GSO]))
-			q->rate_flags |= CAKE_FLAG_SPLIT_GSO;
+			rate_flags |= CAKE_FLAG_SPLIT_GSO;
 		else
-			q->rate_flags &= ~CAKE_FLAG_SPLIT_GSO;
+			rate_flags &= ~CAKE_FLAG_SPLIT_GSO;
 	}
 
 	if (tb[TCA_CAKE_FWMARK]) {
-		q->fwmark_mask = nla_get_u32(tb[TCA_CAKE_FWMARK]);
-		q->fwmark_shft = q->fwmark_mask ? __ffs(q->fwmark_mask) : 0;
+		WRITE_ONCE(q->fwmark_mask, nla_get_u32(tb[TCA_CAKE_FWMARK]));
+		WRITE_ONCE(q->fwmark_shft,
+			   q->fwmark_mask ? __ffs(q->fwmark_mask) : 0);
 	}
 
+	WRITE_ONCE(q->rate_flags, rate_flags);
 	if (q->tins) {
 		sch_tree_lock(sch);
 		cake_reconfigure(sch);
@@ -2774,68 +2783,71 @@ static int cake_dump(struct Qdisc *sch, struct sk_buff *skb)
 {
 	struct cake_sched_data *q = qdisc_priv(sch);
 	struct nlattr *opts;
+	u16 rate_flags;
 
 	opts = nla_nest_start_noflag(skb, TCA_OPTIONS);
 	if (!opts)
 		goto nla_put_failure;
 
-	if (nla_put_u64_64bit(skb, TCA_CAKE_BASE_RATE64, q->rate_bps,
-			      TCA_CAKE_PAD))
+	if (nla_put_u64_64bit(skb, TCA_CAKE_BASE_RATE64,
+			      READ_ONCE(q->rate_bps), TCA_CAKE_PAD))
 		goto nla_put_failure;
 
 	if (nla_put_u32(skb, TCA_CAKE_FLOW_MODE,
-			q->flow_mode & CAKE_FLOW_MASK))
+			READ_ONCE(q->flow_mode) & CAKE_FLOW_MASK))
 		goto nla_put_failure;
 
-	if (nla_put_u32(skb, TCA_CAKE_RTT, q->interval))
+	if (nla_put_u32(skb, TCA_CAKE_RTT, READ_ONCE(q->interval)))
 		goto nla_put_failure;
 
-	if (nla_put_u32(skb, TCA_CAKE_TARGET, q->target))
+	if (nla_put_u32(skb, TCA_CAKE_TARGET, READ_ONCE(q->target)))
 		goto nla_put_failure;
 
-	if (nla_put_u32(skb, TCA_CAKE_MEMORY, q->buffer_config_limit))
+	if (nla_put_u32(skb, TCA_CAKE_MEMORY,
+			READ_ONCE(q->buffer_config_limit)))
 		goto nla_put_failure;
 
+	rate_flags = READ_ONCE(q->rate_flags);
 	if (nla_put_u32(skb, TCA_CAKE_AUTORATE,
-			!!(q->rate_flags & CAKE_FLAG_AUTORATE_INGRESS)))
+			!!(rate_flags & CAKE_FLAG_AUTORATE_INGRESS)))
 		goto nla_put_failure;
 
 	if (nla_put_u32(skb, TCA_CAKE_INGRESS,
-			!!(q->rate_flags & CAKE_FLAG_INGRESS)))
+			!!(rate_flags & CAKE_FLAG_INGRESS)))
 		goto nla_put_failure;
 
-	if (nla_put_u32(skb, TCA_CAKE_ACK_FILTER, q->ack_filter))
+	if (nla_put_u32(skb, TCA_CAKE_ACK_FILTER, READ_ONCE(q->ack_filter)))
 		goto nla_put_failure;
 
 	if (nla_put_u32(skb, TCA_CAKE_NAT,
-			!!(q->flow_mode & CAKE_FLOW_NAT_FLAG)))
+			!!(READ_ONCE(q->flow_mode) & CAKE_FLOW_NAT_FLAG)))
 		goto nla_put_failure;
 
-	if (nla_put_u32(skb, TCA_CAKE_DIFFSERV_MODE, q->tin_mode))
+	if (nla_put_u32(skb, TCA_CAKE_DIFFSERV_MODE, READ_ONCE(q->tin_mode)))
 		goto nla_put_failure;
 
 	if (nla_put_u32(skb, TCA_CAKE_WASH,
-			!!(q->rate_flags & CAKE_FLAG_WASH)))
+			!!(rate_flags & CAKE_FLAG_WASH)))
 		goto nla_put_failure;
 
-	if (nla_put_u32(skb, TCA_CAKE_OVERHEAD, q->rate_overhead))
+	if (nla_put_u32(skb, TCA_CAKE_OVERHEAD, READ_ONCE(q->rate_overhead)))
 		goto nla_put_failure;
 
-	if (!(q->rate_flags & CAKE_FLAG_OVERHEAD))
+	if (!(rate_flags & CAKE_FLAG_OVERHEAD))
 		if (nla_put_u32(skb, TCA_CAKE_RAW, 0))
 			goto nla_put_failure;
 
-	if (nla_put_u32(skb, TCA_CAKE_ATM, q->atm_mode))
+	if (nla_put_u32(skb, TCA_CAKE_ATM, READ_ONCE(q->atm_mode)))
 		goto nla_put_failure;
 
-	if (nla_put_u32(skb, TCA_CAKE_MPU, q->rate_mpu))
+	if (nla_put_u32(skb, TCA_CAKE_MPU, READ_ONCE(q->rate_mpu)))
 		goto nla_put_failure;
 
 	if (nla_put_u32(skb, TCA_CAKE_SPLIT_GSO,
-			!!(q->rate_flags & CAKE_FLAG_SPLIT_GSO)))
+			!!(rate_flags & CAKE_FLAG_SPLIT_GSO)))
 		goto nla_put_failure;
 
-	if (nla_put_u32(skb, TCA_CAKE_FWMARK, q->fwmark_mask))
+	if (nla_put_u32(skb, TCA_CAKE_FWMARK, READ_ONCE(q->fwmark_mask)))
 		goto nla_put_failure;
 
 	return nla_nest_end(skb, opts);