
[net-next] netpoll: Optimize skb refilling on critical path

Message ID 20250304-netpoll_refill_v2-v1-1-06e2916a4642@debian.org (mailing list archive)
State Accepted
Commit 248f6571fd4c51531f7f8f07f186f7ae98a50afc
Delegated to: Netdev Maintainers
Series [net-next] netpoll: Optimize skb refilling on critical path

Checks

Context Check Description
netdev/series_format success Single patches do not need cover letters
netdev/tree_selection success Clearly marked for net-next
netdev/ynl success Generated files up to date; no warnings/errors; no diff in generated;
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 0 this patch: 0
netdev/build_tools success Errors and warnings before: 26 (+0) this patch: 26 (+0)
netdev/cc_maintainers success CCed 6 of 6 maintainers
netdev/build_clang success Errors and warnings before: 0 this patch: 0
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 21 this patch: 21
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 50 lines checked
netdev/build_clang_rust success No Rust files in patch. Skipping build
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0
netdev/contest success net-next-2025-03-05--06-00 (tests: 894)

Commit Message

Breno Leitao March 4, 2025, 3:50 p.m. UTC
netpoll tries to refill the skb queue on every packet send, regardless
of whether packets are being consumed from the pool or not. This was
particularly problematic when called from printk(), where the
operation would be done while holding the console lock.

Introduce a more intelligent approach to skb queue management. Instead
of constantly attempting to refill the queue, the system now defers
refilling to a work queue and only triggers the workqueue when a buffer
is actually dequeued. This change significantly reduces operations with
the lock held.

Add a work_struct to the netpoll structure for asynchronous refilling,
updating find_skb() to schedule refill work only when necessary (skb is
dequeued).

These changes have demonstrated a 15% reduction in time spent during
netpoll_send_msg operations, especially when no SKBs are consumed
from the pool.

When SKBs are being dequeued, the improvement is even better, around
70%, mainly because refilling the SKB pool now happens outside of
the critical path (with the console_owner lock held).

Signed-off-by: Breno Leitao <leitao@debian.org>
---
The above results were obtained using the `function_graph` ftrace
tracer, with filtering enabled for the netpoll_send_udp() function. The
test was executed by running the netcons_basic.sh selftest hundreds of
times.
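
The gist of reproducing such a measurement with the stock tracefs
interface is sketched below. Only the tracer (function_graph) and the
filter (netpoll_send_udp) are stated above; the exact commands used
for the numbers are an assumption.

  cd /sys/kernel/tracing                      # tracefs mount point
  echo netpoll_send_udp > set_graph_function  # graph only this function
  echo function_graph > current_tracer        # records per-call duration
  echo 1 > tracing_on
  # ... run the netcons_basic.sh selftest in a loop ...
  echo 0 > tracing_on
  cat trace   # per-call times appear in the DURATION column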
---
 include/linux/netpoll.h |  1 +
 net/core/netpoll.c      | 15 +++++++++++++--
 2 files changed, 14 insertions(+), 2 deletions(-)


---
base-commit: 5b62996184ca5bb86660bcd11d6c4560ce127df9
change-id: 20250304-netpoll_refill_v2-a5e48c402fd3

Best regards,

Comments

Simon Horman March 6, 2025, 11:48 a.m. UTC | #1
On Tue, Mar 04, 2025 at 07:50:41AM -0800, Breno Leitao wrote:
> netpoll tries to refill the skb queue on every packet send, regardless
> of whether packets are being consumed from the pool or not. This was
> particularly problematic when called from printk(), where the
> operation would be done while holding the console lock.
> 
> Introduce a more intelligent approach to skb queue management. Instead
> of constantly attempting to refill the queue, the system now defers
> refilling to a work queue and only triggers the workqueue when a buffer
> is actually dequeued. This change significantly reduces operations with
> the lock held.
> 
> Add a work_struct to the netpoll structure for asynchronous refilling,
> updating find_skb() to schedule refill work only when necessary (skb is
> dequeued).
> 
> These changes have demonstrated a 15% reduction in time spent during
> netpoll_send_msg operations, especially when no SKBs are consumed
> from the pool.
> 
> When SKBs are being dequeued, the improvement is even better, around
> 70%, mainly because refilling the SKB pool now happens outside of
> the critical path (with the console_owner lock held).
> 
> Signed-off-by: Breno Leitao <leitao@debian.org>
> ---
> The above results were obtained using the `function_graph` ftrace
> tracer, with filtering enabled for the netpoll_send_udp() function. The
> test was executed by running the netcons_basic.sh selftest hundreds of
> times.

Reviewed-by: Simon Horman <horms@kernel.org>
patchwork-bot+netdevbpf@kernel.org March 8, 2025, 4 a.m. UTC | #2
Hello:

This patch was applied to netdev/net-next.git (main)
by Jakub Kicinski <kuba@kernel.org>:

On Tue, 04 Mar 2025 07:50:41 -0800 you wrote:
> netpoll tries to refill the skb queue on every packet send, regardless
> of whether packets are being consumed from the pool or not. This was
> particularly problematic when called from printk(), where the
> operation would be done while holding the console lock.
> 
> Introduce a more intelligent approach to skb queue management. Instead
> of constantly attempting to refill the queue, the system now defers
> refilling to a work queue and only triggers the workqueue when a buffer
> is actually dequeued. This change significantly reduces operations with
> the lock held.
> 
> [...]

Here is the summary with links:
  - [net-next] netpoll: Optimize skb refilling on critical path
    https://git.kernel.org/netdev/net-next/c/248f6571fd4c

You are awesome, thank you!

Patch

diff --git a/include/linux/netpoll.h b/include/linux/netpoll.h
index f91e50a76efd4..f6e8abe0b1f19 100644
--- a/include/linux/netpoll.h
+++ b/include/linux/netpoll.h
@@ -33,6 +33,7 @@  struct netpoll {
 	u16 local_port, remote_port;
 	u8 remote_mac[ETH_ALEN];
 	struct sk_buff_head skb_pool;
+	struct work_struct refill_wq;
 };
 
 struct netpoll_info {
diff --git a/net/core/netpoll.c b/net/core/netpoll.c
index 62b4041aae1ae..8a0df2b274a88 100644
--- a/net/core/netpoll.c
+++ b/net/core/netpoll.c
@@ -284,12 +284,13 @@  static struct sk_buff *find_skb(struct netpoll *np, int len, int reserve)
 	struct sk_buff *skb;
 
 	zap_completion_queue();
-	refill_skbs(np);
 repeat:
 
 	skb = alloc_skb(len, GFP_ATOMIC);
-	if (!skb)
+	if (!skb) {
 		skb = skb_dequeue(&np->skb_pool);
+		schedule_work(&np->refill_wq);
+	}
 
 	if (!skb) {
 		if (++count < 10) {
@@ -535,6 +536,7 @@  static void skb_pool_flush(struct netpoll *np)
 {
 	struct sk_buff_head *skb_pool;
 
+	cancel_work_sync(&np->refill_wq);
 	skb_pool = &np->skb_pool;
 	skb_queue_purge_reason(skb_pool, SKB_CONSUMED);
 }
@@ -621,6 +623,14 @@  int netpoll_parse_options(struct netpoll *np, char *opt)
 }
 EXPORT_SYMBOL(netpoll_parse_options);
 
+static void refill_skbs_work_handler(struct work_struct *work)
+{
+	struct netpoll *np =
+		container_of(work, struct netpoll, refill_wq);
+
+	refill_skbs(np);
+}
+
 int __netpoll_setup(struct netpoll *np, struct net_device *ndev)
 {
 	struct netpoll_info *npinfo;
@@ -666,6 +676,7 @@  int __netpoll_setup(struct netpoll *np, struct net_device *ndev)
 
 	/* fill up the skb queue */
 	refill_skbs(np);
+	INIT_WORK(&np->refill_wq, refill_skbs_work_handler);
 
 	/* last thing to do is link it to the net device structure */
 	rcu_assign_pointer(ndev->npinfo, npinfo);