
[v8,0/2] Implement Airtime-based Queue Limit (AQL)

Message ID 20191115014846.126007-1-kyan@google.com

Message

Kan Yan Nov. 15, 2019, 1:48 a.m. UTC
This patch series ports the Airtime Queue Limits concept from the out-of-tree
ath10k implementation[0] to mac80211. This version takes my patch to do the
throttling in mac80211, and replaces the driver API with the mechanism from
Toke's series, which instead calculates the expected airtime at dequeue time
inside mac80211, storing it in the SKB cb field.

This version has been tested on the QCA9984 platform.

[0] https://chromium-review.googlesource.com/c/chromiumos/third_party/kernel/+/1703105/7
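In rough outline, the accounting works like this (a compilable user-space
sketch of the idea only -- not the actual patches; the names, the single
24000us budget and the placeholder airtime value are all illustrative):

  #include <stdatomic.h>
  #include <stdbool.h>
  #include <stdint.h>

  #define AQL_LIMIT_US 24000           /* assumed per-station airtime budget */

  struct sta {
          atomic_uint aql_tx_pending;  /* estimated airtime in flight, in us */
  };

  struct pkt {
          uint16_t tx_time_est;        /* lives in the skb cb: 4us units */
  };

  /* In the real series this comes from the rate tables imported from mt76. */
  static uint32_t calc_expected_tx_airtime_us(const struct pkt *p)
  {
          (void)p;
          return 1500;                 /* placeholder: ~1.5 ms per frame */
  }

  /* Dequeue path: stop releasing frames once the budget is used up. */
  static bool aql_may_dequeue(struct sta *sta)
  {
          return atomic_load(&sta->aql_tx_pending) < AQL_LIMIT_US;
  }

  static void aql_on_dequeue(struct sta *sta, struct pkt *p)
  {
          uint32_t us = calc_expected_tx_airtime_us(p);

          p->tx_time_est = us / 4;     /* 4us units so it fits in 10 bits */
          atomic_fetch_add(&sta->aql_tx_pending, us);
  }

  /* TX-status path: refund the estimate once the frame has left the hardware. */
  static void aql_on_tx_status(struct sta *sta, struct pkt *p)
  {
          atomic_fetch_sub(&sta->aql_tx_pending, p->tx_time_est * 4);
  }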

Changelog:

v8:
  - Includes Toke's v7 version of "mac80211: Import airtime calculation code from mt76"
  - Don't clobber sta's customized queue limit when configuring the default via debugfs
  - Fix a race condition when resetting aql_tx_pending.

v7:
  - Fix aql_total_pending_airtime underflow due to insufficient locking.

v6:
  - Fix sta lookup in ieee80211_report_used_skb().
  - Move call to ieee80211_sta_update_pending_airtime() to a bit later in
    __ieee80211_tx_status()
v5:
  - Add missing export of ieee80211_calc_rx_airtime() and make
    ieee80211_calc_tx_airtime_rate() static (kbuildbot).
  - Use skb_get_queue_mapping() to get the AC from the skb.
  - Take basic rate configuration for the BSS into account when calculating
    multicast rate.
v4:
  - Fix calculation that clamps the maximum airtime to fit into 10 bits
  - Incorporate Rich Brown's nits for the commit message in Kan's patch
  - Add fewer local variables to ieee80211_tx_dequeue()
v3:
  - Move the tx_time_est field so it's shared with ack_frame_id, and use units
    of 4us for the value stored in it.
  - Move the addition of the Ethernet header size into ieee80211_calc_expected_tx_airtime()
v2:
  - Integrate Kan's approach to airtime throttling.
  - Hopefully fix the cb struct alignment on big-endian architectures.



Kan Yan (1):
  mac80211: Implement Airtime-based Queue Limit (AQL)

Toke Høiland-Jørgensen (1):
  mac80211: Import airtime calculation code from mt76

 include/net/cfg80211.h     |   7 +
 include/net/mac80211.h     |  41 +++
 net/mac80211/Makefile      |   3 +-
 net/mac80211/airtime.c     | 597 +++++++++++++++++++++++++++++++++++++
 net/mac80211/debugfs.c     |  85 ++++++
 net/mac80211/debugfs_sta.c |  43 ++-
 net/mac80211/ieee80211_i.h |   8 +
 net/mac80211/main.c        |  10 +-
 net/mac80211/sta_info.c    |  38 +++
 net/mac80211/sta_info.h    |   8 +
 net/mac80211/tx.c          |  47 ++-
 11 files changed, 872 insertions(+), 15 deletions(-)
 create mode 100644 net/mac80211/airtime.c

Comments

Kan Yan Nov. 15, 2019, 2:04 a.m. UTC | #1
I have tested it with Toke's patch "[PATCH v6 4/4] mac80211: Use
Airtime-based Queue Limits (AQL) on packet dequeue", but didn't
include it here, as it is self-contained and Toke plans to update
it.

The platform (QCA9984) used in my test doesn't support 802.11ax, so I
was not able to test the HE mode support added in the v7 update of
"Import airtime calculation code from mt76" from Toke.

Dave Taht Nov. 15, 2019, 2:07 a.m. UTC | #2
On Thu, Nov 14, 2019 at 6:04 PM Kan Yan <kyan@google.com> wrote:
>
> I have tested it with Toke's patch "[PATCH v6 4/4] mac80211: Use
> Airtime-based Queue Limits (AQL) on packet dequeue", but didn't
> include it here, as it is self contained and Toke has plan to update
> it.
>
> The platform (QCA9984) used in my test

I do keep hoping for pretty pictures. Got any? :-P

>  doesn't support 802.11ax, so I
> was not able to test the HE mode support added in v7 update of "Import
> airtime calculation code from mt76" from Toke.

Is there an ax QCAXXXX platform, m.2 card, or mini-pci card worth
testing at this point?

How are they handling mu-mimo?

I have a round of tests scheduled for intel's ax200 chips, soon. Not sure
what, if any, of this new work might apply.

Dave Taht Nov. 18, 2019, 9:08 p.m. UTC | #3
On Fri, Nov 15, 2019 at 4:10 PM Kan Yan <kyan@google.com> wrote:
>
> > I do keep hoping for pretty pictures. Got any? :-P
>
> Certainly! I do have some :). Here is the link:
> https://drive.google.com/corp/drive/folders/14OIuQEHOUiIoNrVnKprj6rBYFNZ0Coif

Those were lovely, thanks!!!! Big win. Since you are on patch v10
now.... Any chance you could turn ecn on and off and give it a go
again in your next test run?

Also:

--step-size=.04 --socket-stats # the first is helpful to gain more
detail, the second shows the behavior of the tcp stack. You might need
to run as root for the latter (it's only useful on the tcp_nup test,
and you need the right ss utility)
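e.g. a full invocation might look something like this (server name and
test length being placeholders):

  flent tcp_nup -H netperf-server -l 300 --step-size=.04 --socket-stats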

Secondly - and AFTER this patchset stabilizes, I'd like us to look into
returning the codel default to 10ms or less
from its current 20ms or worse setting. Tis another easy test

And y'all know how much I love the rrul_be and rrul tests.....


> >
> > Is there an ax QCAXXXX platform, m.2 card, or mini-pci card worth
> > testing at this point?
>
> It will be great if someone with an 802.11ax platform can help give it a try.
>
> > How are they handling mu-mimo?
>
> I think it should still work. The queue length in airtime for each individual queue is unchanged, even when multiple queues are allowed to transmit concurrently with MU-MIMO.
>
>> I have a round of tests scheduled for intel's ax200 chips, soon. Not sure
>> what, if any, of this new work might apply.
>
> It will be very interesting to know how it performs on 802.11ax platforms. Supposedly 802.11ax already fixed the latency problem, so the benefit of this patch should be less significant.
Kan Yan Nov. 20, 2019, 12:40 a.m. UTC | #4
> Those were lovely, thanks!!!! Big win. Since you are on patch v10
> now.... Any chance you could turn ecn on and off and give it a go
> again in your next test run?
>
>
> Also:
>
> --step-size=.04 --socket-stats # the first is helpful to gain more
> detail, the second as to the behavior of the tcp stack.

Thanks for the feedback! I will do more tests in a few days.


> Secondly - and AFTER this patchset stablizes, I'd like us to look into
> returning the codel default to 10ms or less
> from it's currently 20ms or worse setting. Tis another easy test

Smaller CoDel "target" doesn't work well with wireless because the
dequeue behavior in wireless drivers is very bursty. They quite often
dequeue dozens of packets in one burst after one large aggregation is
completed, so a smaller CoDel "target" can cause unnecessary packet
drops.


Toke Høiland-Jørgensen Nov. 20, 2019, 10:14 a.m. UTC | #5
Kan Yan <kyan@google.com> writes:

>> Secondly - and AFTER this patchset stablizes, I'd like us to look into
>> returning the codel default to 10ms or less
>> from it's currently 20ms or worse setting. Tis another easy test
>
> Smaller CoDel "target" doesn't work well with wireless because the
> dequeue behavior in wireless driver is very bursty. It is quite often
> dequeues dozens of packets in one burst after one large aggregation is
> completed, so smaller CoDel "target" can cause unnecessary packet
> drop.

It would be interesting to get some samples of the actual sojourn time
as seen by CoDel in mac80211. Might be doable with bpftrace...

-Toke
Kan Yan Nov. 21, 2019, 2:05 a.m. UTC | #6
> It would be interesting to get some samples of the actual sojourn time
> as seen by CoDel in mac80211. Might be doable with bpftrace...

I will try to add some trace event to get the sojourn time for the
next round of tests.


Toke Høiland-Jørgensen Nov. 21, 2019, 10:05 a.m. UTC | #7
Kan Yan <kyan@google.com> writes:

>> It would be interesting to get some samples of the actual sojourn time
>> as seen by CoDel in mac80211. Might be doable with bpftrace...
>
> I will try to add some trace event to get the sojourn time for the
> next round of tests.

In theory, this ought to produce a histogram of sojourn times (in
microseconds):

bpftrace -e 'kretprobe:codel_skb_time_func { @sojourn = lhist((nsecs - (retval << 10))/1000, 0, 100000, 1000); }'

(codel_skb_time_func returns the enqueue timestamp in CoDel's ns >> 10
time units, hence the << 10 to convert back to nanoseconds before the
subtraction.)


Can't get the CoDel drop mechanism to trigger on my system at all,
though (a laptop running on iwl). I guess because there's queue
backpressure to userspace first?

It would be interesting to see if it works for you, assuming you can get
bpftrace to work on your test system :)

-Toke
Toke Høiland-Jørgensen Nov. 22, 2019, 10:45 a.m. UTC | #8
Kan Yan <kyan@google.com> writes:

>> In theory, this ought to produce a histogram of sojourn times (in
>> microseconds):
>> bpftrace -e 'kretprobe:codel_skb_time_func { @sojourn = lhist((nsecs - (retval << 10))/1000, 0, 100000, 1000); }'
>
> Thanks for the tips!
>
>> Can't get the CoDel drop mechanism to trigger on my system at all,
>> though (a laptop running on iwl). I guess because there's queue
>> backpressure to userspace first?
>
> What's the tcp_congestion_control in your system? Maybe it is BBR that
> prevents bufferbloat.

It's not BBR, just plain old CUBIC. I've seen this issue before: it's
almost impossible to build a queue in the mac80211 layer when the TCP
session originates on the local machine, though...

>> It would be interesting to see if it works for you, assuming you can get
>> bpftrace to work on your test system :)
>
> I can enable required kernel configuration easily, but cross-compile
> bpftrace for an ARM64 platform may take some time and effort.

Yeah, bpftrace can be a bit of a pain to get running; but it may be
worth the investment longer term as well. It really is quite useful! :)

Some links:

Install guide:
https://github.com/iovisor/bpftrace/blob/master/INSTALL.md

Tutorial by one-liners:
https://github.com/iovisor/bpftrace/blob/master/docs/tutorial_one_liners.md

Reference guide:
https://github.com/iovisor/bpftrace/blob/master/docs/reference_guide.md#5-tracepoint-static-tracing-kernel-level

-Toke
Kan Yan Nov. 26, 2019, 5:04 a.m. UTC | #9
> Yeah, bpftrace can be a bit of a pain to get running; but it may be
> worth the investment longer term as well. It really is quite useful! :)

My attempt to build bpftrace didn't work out, so I just got the
sojourn time using an old-fashioned trace event.
The raw trace, parsed data in CSV format, and plots can be found here:
https://drive.google.com/open?id=1Mg_wHu7elYAdkXz4u--42qGCVE1nrILV

All tests are done with 2 TCP download sessions that oversubscribed
the link bandwidth.
With AQL on, the mean sojourn time is about 20000 us, which matches the
default codel "target".
With AQL off, the mean sojourn time is less than 4 us even though the
latency is off the charts, just as we expected: fq_codel with mac80211
alone is not effective for drivers with deep firmware/hardware queues.

> Any chance you could turn ecn on and off and give it a go
> again in your next test run?

ECN on shows very similar results to ECN off. The "aqm" stats from
debugfs show it is doing ECN marking instead of dropping packets, as
expected. The flent test data is also in the same link.



Toke Høiland-Jørgensen Nov. 26, 2019, 9:19 a.m. UTC | #10
Kan Yan <kyan@google.com> writes:

>> Yeah, bpftrace can be a bit of a pain to get running; but it may be
>> worth the investment longer term as well. It really is quite useful! :)
>
> My attempt to build bpftrace didn't work out, so I just got the
> sojourn time using an old-fashioned trace event.
> The raw trace, parsed data in CSV format, and plots can be found here:
> https://drive.google.com/open?id=1Mg_wHu7elYAdkXz4u--42qGCVE1nrILV
>
> All tests are done with 2 TCP download sessions that oversubscribed
> the link bandwidth.
> With AQL on, the mean sojourn time is about 20000 us, which matches
> the default codel "target".

Yeah, since CoDel is trying to control the latency to 20ms, it makes
sense that the value is clustered around that. That means that the
algorithm is working as it's supposed to :)

While you're running tests, could you do one with the target changed to
10ms, just to see what it looks like? Both sojourn time values and
throughput would be interesting here, of course.

> With AQL off, the mean sojourn time is less than 4 us even though the
> latency is off the charts, just as we expected: fq_codel with mac80211
> alone is not effective for drivers with deep firmware/hardware queues.

Yup, also kinda expected; but another good way to visualise the impact.
Nice!

-Toke
Dave Taht Nov. 27, 2019, 2:13 a.m. UTC | #11
Toke Høiland-Jørgensen <toke@redhat.com> writes:

> Kan Yan <kyan@google.com> writes:
>
>>> Yeah, bpftrace can be a bit of a pain to get running; but it may be
>>> worth the investment longer term as well. It really is quite useful! :)
>>
>> My attempt to build bpftrace didn't work out, so I just got the
>> sojourn time using an old-fashioned trace event.
>> The raw trace, parsed data in CSV format, and plots can be found here:
>> https://drive.google.com/open?id=1Mg_wHu7elYAdkXz4u--42qGCVE1nrILV
>>
>> All tests are done with 2 TCP download sessions that oversubscribed
>> the link bandwidth.
>> With AQL on, the mean sojourn time is about 20000 us, which matches
>> the default codel "target".
>
> Yeah, since CoDel is trying to control the latency to 20ms, it makes
> sense that the value is clustered around that. That means that the
> algorithm is working as they're supposed to :)
>
> While you're running tests, could you do one with the target changed to
> 10ms, just to see what it looks like? Both sojourn time values and
> throughput would be interesting here, of course.
>
>> With AQL off, the mean sojourn time is less than 4 us even though the
>> latency is off the charts, just as we expected: fq_codel with
>> mac80211 alone is not effective for drivers with deep firmware/hardware queues

I hope to take a close look at the iwl ax200 chips soon. Unless
someone beats me to it. Can we get these sorts of stats out of it?
Has anyone looked at the marvell chips of late?

Kan Yan Dec. 3, 2019, 7:02 p.m. UTC | #12
Dave Taht <dave@taht.net> writes:

> I hope to take a close look at the iwl ax200 chips soon. Unless
> someone beats me to it. Can we get these sort of stats out of it?

Here is a patch for the trace event I used to get the sojourn time:
https://drive.google.com/open?id=1Mq8BO_kcneXBqf3m5Rz5xhEMj9jNbcJv

Toke Høiland-Jørgensen <toke@redhat.com> writes:

> While you're running tests, could you do one with the target changed to
> 10ms, just to see what it looks like? Both sojourn time values and
> throughput would be interesting here, of course.

Apologies for the late reply. Here are the test results with the target set to 10ms.
The trace for the sojourn time:
https://drive.google.com/open?id=1MEy_wbKKdl22yF17hZaGzpv3uOz6orTi

Flent test for 20 ms target time vs 10 ms target time:
https://drive.google.com/open?id=1leIWe0-L0XE78eFvlmRJlNmYgbpoH8xZ

The sojourn time measured during the throughput test with a relatively
good 5 GHz connection has a mean value around 11 ms, pretty close to
the 10 ms target.

A smaller CoDel "target" time could help reduce latency, but it may
drop packets too aggressively for stations with low data rates and
hurt throughput, as shown in one of the tests with a 2.4 GHz client.

Overall, I think AQL and fq_codel work well, at least with ath10k.
The current target value of 20 ms is a reasonable default.  It is
relatively conservative, which helps stations with weak signal
maintain stable throughput. Although a debugfs entry that allows
runtime adjustment of the target value could be useful.
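Something along these lines would probably do (an untested sketch
against mac80211's existing local->cparams; the "aqm_target_ms" file
name and its placement are made up):

  #include <linux/debugfs.h>
  #include <net/codel.h>

  /* Expose the CoDel target via debugfs, in milliseconds. */
  static int aqm_target_get(void *data, u64 *val)
  {
          struct ieee80211_local *local = data;

          *val = codel_time_to_us(local->cparams.target) / USEC_PER_MSEC;
          return 0;
  }

  static int aqm_target_set(void *data, u64 val)
  {
          struct ieee80211_local *local = data;

          local->cparams.target = MS2TIME(val);
          return 0;
  }

  DEFINE_DEBUGFS_ATTRIBUTE(aqm_target_fops, aqm_target_get,
                           aqm_target_set, "%llu\n");

  /* in debugfs_hw_add(): */
  debugfs_create_file("aqm_target_ms", 0600, phyd, local,
                      &aqm_target_fops);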

Kalle Valo Dec. 4, 2019, 4:47 a.m. UTC | #13
Kan Yan <kyan@google.com> writes:

> Overall, I think AQL and fq_codel work well, at least with ath10k.
> The current target value of 20 ms is a reasonable default.  It is
> relatively conservative, which helps stations with weak signal
> maintain stable throughput. Although a debugfs entry that allows
> runtime adjustment of the target value could be useful.

Why not make it configurable via nl80211? We should use debugfs only for
testing and debugging, not in production builds, and to me the use case
for this value sounds like more than just testing.
Johannes Berg Dec. 4, 2019, 8:07 a.m. UTC | #14
On Wed, 2019-12-04 at 04:47 +0000, Kalle Valo wrote:
> 
> > Overall, I think AQL and fq_codel work well, at least with ath10k.
> > The current target value of 20 ms is a reasonable default.  It is
> > relatively conservative, which helps stations with weak signal
> > maintain stable throughput. Although a debugfs entry that allows
> > runtime adjustment of the target value could be useful.
> 
> Why not make it configurable via nl80211? We should use debugfs only for
> testing and debugging, not in production builds, and to me the use case
> for this value sounds like more than just testing.

On the other hand, what application/tool or even user would be able to
set this correctly?

johannes
Toke Høiland-Jørgensen Dec. 4, 2019, 2:34 p.m. UTC | #15
Johannes Berg <johannes@sipsolutions.net> writes:

> On Wed, 2019-12-04 at 04:47 +0000, Kalle Valo wrote:
>> 
>> > Overall, I think AQL and fq_codel work well, at least with ath10k.
>> > The current target value of 20 ms is a reasonable default.  It is
>> > relatively conservative, which helps stations with weak signal
>> > maintain stable throughput. Although a debugfs entry that allows
>> > runtime adjustment of the target value could be useful.
>> 
>> Why not make it configurable via nl80211? We should use debugfs only for
>> testing and debugging, not in production builds, and to me the use case
>> for this value sounds like more than just testing.
>
> On the other hand, what application/tool or even user would be able to
> set this correctly?

Well, it's not inconceivable that someone might write a tool to
dynamically tune this; we do allow it to be changed at the qdisc layer
after all.
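(At the qdisc layer that's just, e.g.,

  tc qdisc replace dev eth0 root fq_codel target 10ms interval 100ms

so a hypothetical tuning daemon could do the same through nl80211 if
the knob existed.)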

But until such a time as someone does that, I agree that it's not
terribly likely such a knob is going to see much use. As Kan's results
show, for inter-flow latency (like what the separate ping in his test is
showing), the FQ part takes care of the latency, and what's left there
is the AQL buffering. So I'm a little bit "meh" about this; wouldn't
object to making it a knob, but don't think I'm going to spend the time
writing that patch myself :)

-Toke
Dave Taht Dec. 6, 2019, 7:53 p.m. UTC | #16
Johannes Berg <johannes@sipsolutions.net> writes:

> On Wed, 2019-12-04 at 04:47 +0000, Kalle Valo wrote:
>> 
>> > Overall, I think AQL and fq_codel work well, at least with ath10k.
>> > The current target value of 20 ms is a reasonable default.

>> > It is
>> > relatively conservative, which helps stations with weak signal
>> > maintain stable throughput.

This statement is overbroad and largely incorrect.

>>> Although, a debugfs entry that allows
>> > runtime adjustment of target value could be useful.
>> 
>> Why not make it configurable via nl80211? We should use debugfs only for
>> testing and debugging, not in production builds, and to me the use case
>> for this value sounds like more than just testing.

I certainly lean towards making it configurable AND autotuning it
better.

> On the other hand, what application/tool or even user would be able to
> set this correctly?

The guideline from the theory ("Power") is that the target should be
5-10% of the interval, and the interval fairly close to the most
commonly observed max RTT. I should try to stress (based on some
statements made here) that you have to *consistently* exceed the target
for a whole interval in order for codel to have any effect at all.
Please try to internalize that - the smoothing comes from the
interval... 100ms is quite a large interval....
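To put numbers on that rule of thumb (stock values, back of the envelope):

  interval ~ typical max RTT         = 100 ms
  target   ~ 5-10% of the interval   = 5-10 ms
  1500 byte MTU at the 6Mbit 5ghz baseline rate = 2 ms on the wire

so even a 5ms target leaves room for a couple of full-size frames at
the worst-case rate.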

Judging from Kan's (rather noisy) data set, 10ms is a good default on
5ghz. There is zero difference in throughput as near as I can tell.

It would be interesting to try 3ms (as there's up to 8ms of
buffering in the driver) to add to this dataset; it would also be
helpful to measure the actual tcp rtt in addition to the fq behavior.

I see what looks like channel scan behavior in the data. (on the
client?) Running tests for 5 minutes will show the impact and frequency
of channel scans better.

The 20ms figure we used initially was due to a variety of factors:

* This was the first ever attempt at applying an AQM technology to wifi!!!
** FIXED: http://blog.cerowrt.org/post/real_results/
* We were debugging the FQ component, primarily.
** FIXED: http://blog.cerowrt.org/post/crypto_fq_bug/
* We were working on backports and on integrating a zillion other pieces
  all in motion.
** sorta FIXED. I know dang full well how many darn variables there
   are, as well as how much the network stack has changed since the initial work.
*  We were working on 2.4ghz which has a baseline rate of 1Mbit (13ms target)
   Our rule of thumb is that the min target needs to be MTU*1.5. There was
   also a fudge factor to account for half duplex operation and the minimum
   size of a txop. 
** FIXED: 5ghz has a baseline rate of 6mbits.
* We didn't have tools to look at tcp rtts at the time
** FIXED: flent --socket-stats tcp_nup
* We had issues with power save
** Everybody has issues with powersave...
** These are still extant on many platforms, notably ones that wake up
   and dump all their accumulated mcast data into the link. Not our problem.
* channel scans: http://blog.cerowrt.org/post/disabling_channel_scans/
**  Non-background channel scans are very damaging. I am unsure from this
    data if that's what we are seeing from the client? Or the ath10k?
    The ability to do these in the background or not might be a factor in
    autotuning things better.
* We had MAJOR issues with TSQ
** FIXED: https://lwn.net/Articles/757643/

Honestly the TSQ interaction was the biggest barrier to figuring out
what was going wrong at the time we upstreamed this, and a tcp_nup test,
now, with TSQ closer to "right", AQL in place and the reduced target,
should be interesting. I think the data we have on TSQ vs wifi on
this chip is now totally obsolete.

* We had issues with mcast
** I think we still have many issues with multicast but improving that
   is a separate problem entirely.
* We ran out of time and money, and had hit it so far out of the park
  ( https://lwn.net/Articles/705884/ ) 
  that it seemed like sleeping more and tweaking things less was a win.

Judging from the results we now get on 5ghz and on ac, it seems good to
reduce the target to 10ms (or less!) on 5ghz, especially on ac,
which will result in less path inflation and no loss in throughput.

I have been running with a 6ms target for several years now on my
802.11n 5ghz devices. (I advertise a 3ms rather than the default txop
size also.) These are, admittedly, mostly used as backhaul
links (so I didn't have tsq, aql, rate changes, etc), but seeing a path
inflation of no more than 30ms under full bidirectional load is
nice. (and still 22ms worse than it could be in a more perfect world)

Another thing I keep trying to stress: TCP's ability to grab more
bandwidth is quadratic relative to the delay.

>
> johannes
Kan Yan Dec. 6, 2019, 10:04 p.m. UTC | #17
Dave Taht <dave@taht.net> writes:

> Judging from kan's (rather noisy) data set 10ms is a good default on
> 5ghz. There is zero difference in throughput as near as I can tell.
> It would be interesting to try 3ms (as there's up to 8ms of
> buffering in the driver) to add to this dataset, helpful also
> to be measuring the actual tcp rtt rather in addition to the fq behavior.

One large aggregation in 11ac can last 4-5 ms; with bursting,
firmware/hardware can complete as much as 8-10 ms worth of frames in
one shot and then try to dequeue more frames, so the jitter of the
sojourn time can be as high as 8-10 ms. Setting the default target to
something less than 10ms can cause unnecessary packet drops on some
occasions.
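To put a number on it (assuming, say, ~300 Mbit/s of effective VHT
throughput, which is plausible for QCA9984 but not measured here):

  300 Mbit/s * 5 ms / 8 bits per byte ~= 190 KB ~= 125 full-size frames

so everything sitting behind one such burst sees its sojourn time jump
by several milliseconds at once, and that is what the target has to
tolerate.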

