
[v2] fq: fix fq_tin tx bytes overflow

Message ID 1552446505-15444-1-git-send-email-yiboz@codeaurora.org (mailing list archive)
State New, archived

Commit Message

Yibo Zhao March 13, 2019, 3:08 a.m. UTC
Currently, we are using u32 for tx_bytes in fq_tin.
If the throughput stays above 1.2 Gbps, the tx_bytes
statistics overflow in about 1 min.

In order to allow us to trace the tx_bytes statistics
for a longer time at high throughput, change its type
from u32 to u64.

Signed-off-by: Yibo Zhao <yiboz@codeaurora.org>
---
 include/net/fq.h              | 2 +-
 net/mac80211/debugfs_netdev.c | 2 +-
 net/mac80211/debugfs_sta.c    | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

Comments

Johannes Berg March 15, 2019, 9:38 a.m. UTC | #1
On Wed, 2019-03-13 at 11:08 +0800, Yibo Zhao wrote:
> Currently, we are using u32 for tx_bytes in fq_tin.
> If the throughput stays above 1.2 Gbps, the tx_bytes
> statistics overflow in about 1 min.
> 
> In order to allow us to trace the tx_bytes statistics
> for a longer time at high throughput, change its type
> from u32 to u64.

Hmm. 64-bit values are kinda expensive on 32-bit architectures. How
badly do you need this? I mean ... worst case you just have to capture
every 30 seconds or so if you're doing really high throughput with HE or
something?

johannes
Yibo Zhao March 18, 2019, 4:59 a.m. UTC | #2
On 2019-03-15 17:38, Johannes Berg wrote:
> On Wed, 2019-03-13 at 11:08 +0800, Yibo Zhao wrote:
>> Currently, we are using u32 for tx_bytes in fq_tin.
>> If the throughput stays above 1.2 Gbps, the tx_bytes
>> statistics overflow in about 1 min.
>> 
>> In order to allow us to trace the tx_bytes statistics
>> for a longer time at high throughput, change its type
>> from u32 to u64.
> 
> Hmm. 64-bit values are kinda expensive on 32-bit architectures. How
> badly do you need this? I mean ... worst case you just have to capture
> every 30 seconds or so if you're doing really high throughput with HE or
> something?
> 
> johannes

Hi Johannes,

I understand your concern. Yes, I am using a high-end AP for throughput 
testing. I'd say 1.2 Gbps is not the worst case, since we can achieve a 
max of 1.4 Gbps in our tests. AFAIK, for most throughput cases, 1 min is 
the minimum requirement. And I think, with more and more high-end 
products (with even higher throughput) on the way to the market, it is 
highly possible that 30 s is not a safe time before overflow.
Johannes Berg April 8, 2019, 8:01 p.m. UTC | #3
On Mon, 2019-03-18 at 12:59 +0800, Yibo Zhao wrote:
> 
> I understand your concern. Yes, I am using a high-end AP for throughput 
> testing. I'd say 1.2 Gbps is not the worst case, since we can achieve a 
> max of 1.4 Gbps in our tests. AFAIK, for most throughput cases, 1 min is 
> the minimum requirement. And I think, with more and more high-end 
> products (with even higher throughput) on the way to the market, it is 
> highly possible that 30 s is not a safe time before overflow.

Well, 2 Gbps (goodput) would make it overflow every 16 seconds, so I'm
not sure where you take the 1 minute from :-) Maybe from 1.2Gbps PHY
rate.

But still, the only place we expose this is debugfs, so I'm not really
sure we want to spend that.

Note that I'm generally pushing back on statistics right now - I really
want to put some trace points or eBPF hooks in places that people can
use to keep their favourite statistics, instead of trying to cover each
and every use case in the stack itself.

johannes
Yibo Zhao April 12, 2019, 10:09 a.m. UTC | #4
On 2019-04-09 04:01, Johannes Berg wrote:
> On Mon, 2019-03-18 at 12:59 +0800, Yibo Zhao wrote:
>> 
>> I understand your concern. Yes, I am using a high-end AP for throughput
>> testing. I'd say 1.2 Gbps is not the worst case, since we can achieve a
>> max of 1.4 Gbps in our tests. AFAIK, for most throughput cases, 1 min is
>> the minimum requirement. And I think, with more and more high-end
>> products (with even higher throughput) on the way to the market, it is
>> highly possible that 30 s is not a safe time before overflow.
> 
> Well, 2 Gbps (goodput) would make it overflow every 16 seconds, so I'm
> not sure where you take the 1 minute from :-) Maybe from 1.2Gbps PHY
> rate.
> 
> But still, the only place we expose this is debugfs, so I'm not really
> sure we want to spend that.
> 
> Note that I'm generally pushing back on statistics right now - I really
> want to put some trace points or eBPF hooks in places that people can
> use to keep their favourite statistics, instead of trying to cover each
> and every use case in the stack itself.

Cool, great! Looking forward to that change. :-)
> 
> johannes

Patch

diff --git a/include/net/fq.h b/include/net/fq.h
index ac944a6..70f8b12 100644
--- a/include/net/fq.h
+++ b/include/net/fq.h
@@ -53,7 +53,7 @@  struct fq_tin {
 	u32 overlimit;
 	u32 collisions;
 	u32 flows;
-	u32 tx_bytes;
+	u64 tx_bytes;
 	u32 tx_packets;
 };
 
diff --git a/net/mac80211/debugfs_netdev.c b/net/mac80211/debugfs_netdev.c
index cff0fb3..8d66f41 100644
--- a/net/mac80211/debugfs_netdev.c
+++ b/net/mac80211/debugfs_netdev.c
@@ -499,7 +499,7 @@  static ssize_t ieee80211_if_fmt_aqm(
 	len = scnprintf(buf,
 			buflen,
 			"ac backlog-bytes backlog-packets new-flows drops marks overlimit collisions tx-bytes tx-packets\n"
-			"%u %u %u %u %u %u %u %u %u %u\n",
+			"%u %u %u %u %u %u %u %u %llu %u\n",
 			txqi->txq.ac,
 			txqi->tin.backlog_bytes,
 			txqi->tin.backlog_packets,
diff --git a/net/mac80211/debugfs_sta.c b/net/mac80211/debugfs_sta.c
index 3aa618d..e54a6d6 100644
--- a/net/mac80211/debugfs_sta.c
+++ b/net/mac80211/debugfs_sta.c
@@ -168,7 +168,7 @@  static ssize_t sta_aqm_read(struct file *file, char __user *userbuf,
 			continue;
 		txqi = to_txq_info(sta->sta.txq[i]);
 		p += scnprintf(p, bufsz+buf-p,
-			       "%d %d %u %u %u %u %u %u %u %u %u 0x%lx(%s%s%s)\n",
+			       "%d %d %u %u %u %u %u %u %u %llu %u 0x%lx(%s%s%s)\n",
 			       txqi->txq.tid,
 			       txqi->txq.ac,
 			       txqi->tin.backlog_bytes,