
[net,1/4] mqprio: Correct stats in mqprio_dump_class_stats().

Message ID 20211007175000.2334713-2-bigeasy@linutronix.de (mailing list archive)
State Accepted
Delegated to: Netdev Maintainers
Series mqprio fixup and simplify code.

Checks

Context Check Description
netdev/cover_letter success Series has a cover letter
netdev/fixes_present success Fixes tag not required for -next series
netdev/patch_count success Link
netdev/tree_selection success Guessed tree name to be net-next
netdev/subject_prefix warning Target tree name not specified in the subject
netdev/cc_maintainers success CCed 7 of 7 maintainers
netdev/source_inline success Was 0 now: 0
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/module_param success Was 0 now: 0
netdev/build_32bit success Errors and warnings before: 1 this patch: 1
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/verify_fixes success Fixes tag looks correct
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 40 lines checked
netdev/build_allmodconfig_warn success Errors and warnings before: 1 this patch: 1
netdev/header_inline success No static functions without inline keyword in header files

Commit Message

Sebastian Andrzej Siewior Oct. 7, 2021, 5:49 p.m. UTC
It looks like the statistics broke with the introduction of subqueues.
Before the change, the on-stack `bstats' and `qstats' were filled in and
later copied over to struct gnet_dump.

After the change, `bstats' and `qstats' are only set to 0 and never
updated, and that is what is fed to gnet_dump. Additionally, the values
accumulated in sch->bstats and sch->qstats are destroyed in the
global-stats case: for per-CPU stats both __gnet_stats_copy_basic() and
__gnet_stats_copy_queue() add the values, but for global stats they
assign them, so the previous value is lost and only the last value from
the loop ends up in sch->[bq]stats.

Use the on-stack [bq]stats variables again and add the stats manually in
the global case.

Fixes: ce679e8df7ed2 ("net: sched: add support for TCQ_F_NOLOCK subqueues to sch_mqprio")
Cc: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 net/sched/sch_mqprio.c | 32 +++++++++++++++++++-------------
 1 file changed, 19 insertions(+), 13 deletions(-)
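
To make the accumulation bug easier to see, here is a minimal userspace
sketch (not kernel code; the struct and function names are made up for
illustration). It contrasts an accumulator that is assigned on every loop
iteration, which is effectively what happened for global stats, with one
that is added to, which is what the per-class dump needs:

#include <stdio.h>

struct basic_stats {
	unsigned long long bytes;
	unsigned long long packets;
};

/* Mimics the global-stats path of the old code: plain assignment clobbers
 * whatever the previous loop iteration accumulated. */
static void copy_assign(struct basic_stats *dst, const struct basic_stats *src)
{
	dst->bytes = src->bytes;
	dst->packets = src->packets;
}

/* What the per-queue loop actually needs: accumulate into the target. */
static void copy_add(struct basic_stats *dst, const struct basic_stats *src)
{
	dst->bytes += src->bytes;
	dst->packets += src->packets;
}

int main(void)
{
	struct basic_stats queues[3] = {
		{ .bytes = 100, .packets = 1 },
		{ .bytes = 200, .packets = 2 },
		{ .bytes = 300, .packets = 3 },
	};
	struct basic_stats broken = { 0, 0 };
	struct basic_stats fixed = { 0, 0 };
	int i;

	for (i = 0; i < 3; i++) {
		copy_assign(&broken, &queues[i]);	/* only the last queue survives */
		copy_add(&fixed, &queues[i]);		/* proper per-class sum */
	}

	printf("assign: bytes=%llu packets=%llu\n", broken.bytes, broken.packets);
	printf("add:    bytes=%llu packets=%llu\n", fixed.bytes, fixed.packets);
	return 0;
}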

Comments

Jakub Kicinski Oct. 8, 2021, 11:33 p.m. UTC | #1
On Thu,  7 Oct 2021 19:49:57 +0200 Sebastian Andrzej Siewior wrote:
> It looks like the statistics broke with the introduction of subqueues.
> Before the change, the on-stack `bstats' and `qstats' were filled in and
> later copied over to struct gnet_dump.
> 
> After the change, `bstats' and `qstats' are only set to 0 and never
> updated, and that is what is fed to gnet_dump. Additionally, the values
> accumulated in sch->bstats and sch->qstats are destroyed in the
> global-stats case: for per-CPU stats both __gnet_stats_copy_basic() and
> __gnet_stats_copy_queue() add the values, but for global stats they
> assign them, so the previous value is lost and only the last value from
> the loop ends up in sch->[bq]stats.
> 
> Use the on-stack [bq]stats variables again and add the stats manually in
> the global case.
> 
> Fixes: ce679e8df7ed2 ("net: sched: add support for TCQ_F_NOLOCK subqueues to sch_mqprio")
> Cc: John Fastabend <john.fastabend@gmail.com>
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

Applied after significant massaging of the commit message.

Please repost the cleanup in a week (once net gets merged 
into net-next).

Thanks!

Patch

diff --git a/net/sched/sch_mqprio.c b/net/sched/sch_mqprio.c
index 8766ab5b87880..5eb3b1b7ae5e7 100644
--- a/net/sched/sch_mqprio.c
+++ b/net/sched/sch_mqprio.c
@@ -529,22 +529,28 @@  static int mqprio_dump_class_stats(struct Qdisc *sch, unsigned long cl,
 		for (i = tc.offset; i < tc.offset + tc.count; i++) {
 			struct netdev_queue *q = netdev_get_tx_queue(dev, i);
 			struct Qdisc *qdisc = rtnl_dereference(q->qdisc);
-			struct gnet_stats_basic_cpu __percpu *cpu_bstats = NULL;
-			struct gnet_stats_queue __percpu *cpu_qstats = NULL;
 
 			spin_lock_bh(qdisc_lock(qdisc));
-			if (qdisc_is_percpu_stats(qdisc)) {
-				cpu_bstats = qdisc->cpu_bstats;
-				cpu_qstats = qdisc->cpu_qstats;
-			}
 
-			qlen = qdisc_qlen_sum(qdisc);
-			__gnet_stats_copy_basic(NULL, &sch->bstats,
-						cpu_bstats, &qdisc->bstats);
-			__gnet_stats_copy_queue(&sch->qstats,
-						cpu_qstats,
-						&qdisc->qstats,
-						qlen);
+			if (qdisc_is_percpu_stats(qdisc)) {
+				qlen = qdisc_qlen_sum(qdisc);
+
+				__gnet_stats_copy_basic(NULL, &bstats,
+							qdisc->cpu_bstats,
+							&qdisc->bstats);
+				__gnet_stats_copy_queue(&qstats,
+							qdisc->cpu_qstats,
+							&qdisc->qstats,
+							qlen);
+			} else {
+				qlen		+= qdisc->q.qlen;
+				bstats.bytes	+= qdisc->bstats.bytes;
+				bstats.packets	+= qdisc->bstats.packets;
+				qstats.backlog	+= qdisc->qstats.backlog;
+				qstats.drops	+= qdisc->qstats.drops;
+				qstats.requeues	+= qdisc->qstats.requeues;
+				qstats.overlimits += qdisc->qstats.overlimits;
+			}
 			spin_unlock_bh(qdisc_lock(qdisc));
 		}
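
For readers unfamiliar with the stats helpers, the shape of the fixed loop
can be summarised by the following compilable userspace sketch. All types
and helpers in it are invented stand-ins (they are not the kernel's Qdisc
or __gnet_stats_copy_*() APIs); it only shows the two branches: per-CPU
counters are summed up by a helper, while global counters are plain fields
that the loop has to add up by hand:

#include <stdio.h>

#define NCPU 2

struct stats {
	unsigned long long bytes, packets;
};

struct fake_qdisc {
	int has_percpu;			/* stands in for qdisc_is_percpu_stats() */
	struct stats global;		/* stands in for qdisc->bstats */
	struct stats percpu[NCPU];	/* stands in for qdisc->cpu_bstats */
};

/* Stand-in for the per-CPU helpers: they add on top of *acc. */
static void sum_percpu(struct stats *acc, const struct stats *cpu, int ncpu)
{
	int i;

	for (i = 0; i < ncpu; i++) {
		acc->bytes += cpu[i].bytes;
		acc->packets += cpu[i].packets;
	}
}

int main(void)
{
	struct fake_qdisc queues[2] = {
		{ .has_percpu = 1, .percpu = { { 10, 1 }, { 20, 2 } } },
		{ .has_percpu = 0, .global = { 40, 4 } },
	};
	struct stats acc = { 0, 0 };	/* plays the role of the on-stack bstats */
	int i;

	for (i = 0; i < 2; i++) {
		if (queues[i].has_percpu) {
			sum_percpu(&acc, queues[i].percpu, NCPU);
		} else {
			/* global counters must be accumulated manually */
			acc.bytes += queues[i].global.bytes;
			acc.packets += queues[i].global.packets;
		}
	}

	printf("class total: bytes=%llu packets=%llu\n", acc.bytes, acc.packets);
	return 0;
}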