
[RFC,1/1] dql: add dql_set_min_limit()

Message ID 20210309152354.95309-2-mailhol.vincent@wanadoo.fr
State RFC
Series Modify dql.min_limit value inside the driver

Checks

netdev/tree_selection: success (Not a local patch)

Commit Message

Vincent Mailhol March 9, 2021, 3:23 p.m. UTC
Add a function to set the dynamic queue limit minimum value.

This function is to be used by network drivers that are able to
prove, at least through empirical tests, that they achieve better
performance with a specific predefined dql.min_limit value.

Signed-off-by: Vincent Mailhol <mailhol.vincent@wanadoo.fr>
---
 include/linux/dynamic_queue_limits.h | 3 +++
 lib/dynamic_queue_limits.c           | 8 ++++++++
 2 files changed, 11 insertions(+)

Comments

Vincent Mailhol March 9, 2021, 6 p.m. UTC | #1
On Wed. 10 Mar 2021 at 00:23, Vincent Mailhol
<mailhol.vincent@wanadoo.fr> wrote:
>
> Add a function to set the dynamic queue limit minimum value.
>
> This function is to be used by network drivers that are able to
> prove, at least through empirical tests, that they achieve better
> performance with a specific predefined dql.min_limit value.
>
> Signed-off-by: Vincent Mailhol <mailhol.vincent@wanadoo.fr>
> ---
>  include/linux/dynamic_queue_limits.h | 3 +++
>  lib/dynamic_queue_limits.c           | 8 ++++++++
>  2 files changed, 11 insertions(+)
>
> diff --git a/include/linux/dynamic_queue_limits.h b/include/linux/dynamic_queue_limits.h
> index 407c2f281b64..32437f168a35 100644
> --- a/include/linux/dynamic_queue_limits.h
> +++ b/include/linux/dynamic_queue_limits.h
> @@ -103,6 +103,9 @@ void dql_reset(struct dql *dql);
>  /* Initialize dql state */
>  void dql_init(struct dql *dql, unsigned int hold_time);
>
> +/* Set the dql minimum limit */
> +void dql_set_min_limit(struct dql *dql, unsigned int min_limit);
> +
>  #endif /* _KERNEL_ */
>
>  #endif /* _LINUX_DQL_H */
> diff --git a/lib/dynamic_queue_limits.c b/lib/dynamic_queue_limits.c
> index fde0aa244148..8b6ad1e0a2e3 100644
> --- a/lib/dynamic_queue_limits.c
> +++ b/lib/dynamic_queue_limits.c
> @@ -136,3 +136,11 @@ void dql_init(struct dql *dql, unsigned int hold_time)
>         dql_reset(dql);
>  }
>  EXPORT_SYMBOL(dql_init);
> +
> +void dql_set_min_limit(struct dql *dql, unsigned int min_limit)
> +{
> +#ifdef CONFIG_BQL
> +       dql->min_limit = min_limit;
> +#endif

Marc pointed out some issues with the #ifdef in a separate thread:
https://lore.kernel.org/linux-can/20210309153547.q7zspf46k6terxqv@pengutronix.de/

I will come back with a v2 tomorrow.

> +}
> +EXPORT_SYMBOL(dql_set_min_limit);
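
What I currently have in mind for v2 is only a sketch at this point
(an assumption on my part, not the actual repost): keep the generic
dql helper unconditional, since struct dql itself does not depend on
CONFIG_BQL, and move the CONFIG_BQL decision into a networking-level
wrapper, because the dql member of struct netdev_queue only exists
when CONFIG_BQL is set. The wrapper name below is hypothetical:

/* include/linux/dynamic_queue_limits.h: generic helper, always built */
static inline void dql_set_min_limit(struct dql *dql, unsigned int min_limit)
{
	dql->min_limit = min_limit;
}

/* include/linux/netdevice.h: hypothetical wrapper carrying the
 * CONFIG_BQL decision; netdev_queue.dql only exists when BQL is on,
 * so the no-op fallback is explicit here instead of hidden in lib/.
 */
static inline void
netdev_queue_set_dql_min_limit(struct netdev_queue *dev_queue,
			       unsigned int min_limit)
{
#ifdef CONFIG_BQL
	dql_set_min_limit(&dev_queue->dql, min_limit);
#endif
}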
Dave Taht March 9, 2021, 7:44 p.m. UTC | #2
I note that "proof" here is very much a matter of the developer's
opinion and a limited testing base.

Actual operational experience, as in a real deployment with other
applications, heavy context switching, or virtualization, might
yield different results.

There are lots of defaults in the Linux kernel that are just swags;
the default NAPI weight and rx/tx ring buffer sizes are two where
devs just copy/paste stuff that either doesn't scale up or doesn't
scale down.

This does not mean I oppose your patch! However, I have two points
I'd like to make regarding bql and dql in general that I have long
longed to see explored.

0) Being an advocate of low latency in general, I have no problem
with, and even prefer, starving the device rather than always
keeping it busy.

/me hides

1) BQL is MIAD - multiplicative increase, additive decrease. While
in practice so far this does not seem to matter much (and measuring
things down to the microsecond is really hard), a stabler algorithm
is AIMD. BQL often absorbs a large TSO burst - usually a minimum of
128k is observed on gbit, whereas a stabler state (without GSO)
seemed to be around 40k on many of the chipsets I worked with, back
when I was working in this area.

(cake's gso-splitting also gets lower bql values in general, if you
have enough cpu to run cake)

2) BQL + hardware mq is increasingly an issue in my mind: if, say,
you are hitting 64 hw queues, each with 128k stored in there, the
buffering is additive, whereas servicing interrupts properly and
keeping the media busy might only require 128k total, spread across
the active queues and flows. I have often thought that making BQL
scale better to multiple hw queues by globally sharing the buffering
state(s) would lead to lower latency, but also that sharing that
state would probably be too high an overhead.

Having not worked out a solution to 2), preferring to start with 1),
and not having a whole lot of support for item 0) in the world, I
just thought I'd mention these points, in the hope that someone
might give them a go.
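
For concreteness, here is a toy sketch of the two update disciplines
from point 1); this is illustrative code only, with arbitrary
constants, not the actual logic in lib/dynamic_queue_limits.c:

/* Toy illustration: how a limit evolves under MIAD (what I am
 * describing BQL as doing) versus the stabler AIMD.
 */
static unsigned int miad_update(unsigned int limit, bool starved)
{
	if (starved)
		return limit * 2;		/* multiplicative increase */
	return limit > 64 ? limit - 64 : 0;	/* additive decrease */
}

static unsigned int aimd_update(unsigned int limit, bool starved)
{
	if (starved)
		return limit + 64;		/* additive increase */
	return limit / 2;			/* multiplicative decrease */
}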
Vincent Mailhol March 10, 2021, 3:56 p.m. UTC | #3
Hi Dave,

Thanks for the comprehensive comments!

On Wed. 10 Mar 2021 at 04:44, Dave Taht <dave.taht@gmail.com> wrote:
>
> I note that "proof" here is very much a matter of the developer's
> opinion and a limited testing base.
>
> Actual operational experience, as in a real deployment with other
> applications, heavy context switching, or virtualization, might
> yield different results.

Agreed. I was not thorough in my description, but what you pointed
out here is actually what I had in mind (and what I did for my
driver). Let me borrow your examples and include them in the v2 of
the patch.

> There are lots of defaults in the Linux kernel that are just swags;
> the default NAPI weight and rx/tx ring buffer sizes are two where
> devs just copy/paste stuff that either doesn't scale up or doesn't
> scale down.
>
> This does not mean I oppose your patch! However, I have two points
> I'd like to make regarding bql and dql in general that I have long
> longed to see explored.
>
> 0) Being an advocate of low latency in general, I have no problem
> with, and even prefer, starving the device rather than always
> keeping it busy.
>
> /me hides

Fully agree. The intent of this patch is for specific use cases
where setting a default dql.min_limit has minimal latency impact
for a noticeable throughput increase.

My use case is a CAN driver for a USB interface module. The maximum
PDU of the CAN protocol is roughly 16 bytes, while the USB maximum
packet size is 512 bytes. If I force dql.min_limit to be around
240 bytes (i.e. roughly 15 CAN frames), all 15 frames easily fit in
a single USB packet. Preparing a packet of 240 bytes is relatively
fast (a small latency cost) but the gain of not having to send 15
separate USB packets is huge (a big throughput increase).
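
As a sketch of that call site (hypothetical driver and function
names; it assumes the dql_set_min_limit() helper from this RFC and a
single TX queue):

#include <linux/netdevice.h>
#include <linux/dynamic_queue_limits.h>

#define CAN_USB_DQL_MIN_LIMIT	240	/* ~15 CAN frames of ~16 bytes */

static void can_usb_tune_bql(struct net_device *netdev)
{
#ifdef CONFIG_BQL	/* netdev_queue.dql only exists with BQL */
	struct netdev_queue *txq = netdev_get_tx_queue(netdev, 0);

	dql_set_min_limit(&txq->dql, CAN_USB_DQL_MIN_LIMIT);
#endif
}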

My patch was really written for this specific context. However, I
am not knowledgeable enough about other network protocols to give
other examples where this new function could be applied (my blind
guess is that most protocols should *not* use it).

> 1) BQL is MIAD - multiplicative increase, additive decrease. While
> in practice so far this does not seem to matter much (and measuring
> things down to the microsecond is really hard), a stabler algorithm
> is AIMD. BQL often absorbs a large TSO burst - usually a minimum of
> 128k is observed on gbit, whereas a stabler state (without GSO)
> seemed to be around 40k on many of the chipsets I worked with, back
> when I was working in this area.
>
> (cake's gso-splitting also gets lower bql values in general, if you
> have enough cpu to run cake)
>
> 2) BQL + hardware mq is increasingly an issue in my mind: if, say,
> you are hitting 64 hw queues, each with 128k stored in there, the
> buffering is additive, whereas servicing interrupts properly and
> keeping the media busy might only require 128k total, spread across
> the active queues and flows. I have often thought that making BQL
> scale better to multiple hw queues by globally sharing the buffering
> state(s) would lead to lower latency, but also that sharing that
> state would probably be too high an overhead.
>
> Having not worked out a solution to 2), preferring to start with 1),
> and not having a whole lot of support for item 0) in the world, I
> just thought I'd mention these points, in the hope that someone
> might give them a go.

Thank you for the comments; however, I will be of little help here.
As mentioned above, my use cases are in bytes, not in kilobytes, so
I lack experience there.

My experience is that, by default, BQL is not well adapted to
protocols with small PDUs, nor to interfaces with high latency
(e.g. USB). But modifying dql.min_limit solves that.

So, I will let other people continue the discussion on points 1)
and 2).


Yours sincerely,
Vincent

Patch

diff --git a/include/linux/dynamic_queue_limits.h b/include/linux/dynamic_queue_limits.h
index 407c2f281b64..32437f168a35 100644
--- a/include/linux/dynamic_queue_limits.h
+++ b/include/linux/dynamic_queue_limits.h
@@ -103,6 +103,9 @@ void dql_reset(struct dql *dql);
 /* Initialize dql state */
 void dql_init(struct dql *dql, unsigned int hold_time);
 
+/* Set the dql minimum limit */
+void dql_set_min_limit(struct dql *dql, unsigned int min_limit);
+
 #endif /* _KERNEL_ */
 
 #endif /* _LINUX_DQL_H */
diff --git a/lib/dynamic_queue_limits.c b/lib/dynamic_queue_limits.c
index fde0aa244148..8b6ad1e0a2e3 100644
--- a/lib/dynamic_queue_limits.c
+++ b/lib/dynamic_queue_limits.c
@@ -136,3 +136,11 @@ void dql_init(struct dql *dql, unsigned int hold_time)
 	dql_reset(dql);
 }
 EXPORT_SYMBOL(dql_init);
+
+void dql_set_min_limit(struct dql *dql, unsigned int min_limit)
+{
+#ifdef CONFIG_BQL
+	dql->min_limit = min_limit;
+#endif
+}
+EXPORT_SYMBOL(dql_set_min_limit);