Message ID | 20250304-netpoll_refill_v2-v1-1-06e2916a4642@debian.org (mailing list archive) |
---|---|
State | Accepted |
Commit | 248f6571fd4c51531f7f8f07f186f7ae98a50afc |
Delegated to: | Netdev Maintainers |
Series | [net-next] netpoll: Optimize skb refilling on critical path |
On Tue, Mar 04, 2025 at 07:50:41AM -0800, Breno Leitao wrote:
> netpoll tries to refill the skb queue on every packet send, independently
> of whether packets are being consumed from the pool or not. This was
> particularly problematic while being called from printk(), where the
> operation would be done while holding the console lock.
>
> Introduce a more intelligent approach to skb queue management. Instead
> of constantly attempting to refill the queue, the system now defers
> refilling to a work queue and only triggers the workqueue when a buffer
> is actually dequeued. This change significantly reduces operations with
> the lock held.
>
> Add a work_struct to the netpoll structure for asynchronous refilling,
> updating find_skb() to schedule refill work only when necessary (an skb
> is dequeued).
>
> These changes have demonstrated a 15% reduction in time spent during
> netpoll_send_msg operations, especially when no SKBs are consumed from
> the pool. When SKBs are being dequeued, the improvement is even better,
> around 70%, mainly because refilling the SKB pool now happens outside
> of the critical path (with the console_owner lock held).
>
> Signed-off-by: Breno Leitao <leitao@debian.org>
> ---
> The above results were obtained using the `function_graph` ftrace
> tracer, with filtering enabled for the netpoll_send_udp() function. The
> test was executed by running the netcons_basic.sh selftest hundreds of
> times.

Reviewed-by: Simon Horman <horms@kernel.org>
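For context on the problem the quoted message describes, here is the pre-patch fast path of find_skb() — a condensed reconstruction from the lines the diff below removes, with logic untouched by this patch elided in comments:

```c
/* Pre-patch fast path of find_skb(), condensed from the diff below:
 * the pool refill ran unconditionally on every send, in the caller's
 * context -- including printk() with the console_owner lock held.
 */
static struct sk_buff *find_skb(struct netpoll *np, int len, int reserve)
{
	int count = 0;
	struct sk_buff *skb;

	zap_completion_queue();
	refill_skbs(np);	/* refill attempt on every send */
repeat:
	skb = alloc_skb(len, GFP_ATOMIC);
	if (!skb)
		skb = skb_dequeue(&np->skb_pool);
	/* ... retry/reserve logic unchanged by this patch ... */
}
```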
Hello:

This patch was applied to netdev/net-next.git (main)
by Jakub Kicinski <kuba@kernel.org>:

On Tue, 04 Mar 2025 07:50:41 -0800 you wrote:
> netpoll tries to refill the skb queue on every packet send, independently
> of whether packets are being consumed from the pool or not. This was
> particularly problematic while being called from printk(), where the
> operation would be done while holding the console lock.
>
> Introduce a more intelligent approach to skb queue management. Instead
> of constantly attempting to refill the queue, the system now defers
> refilling to a work queue and only triggers the workqueue when a buffer
> is actually dequeued. This change significantly reduces operations with
> the lock held.
>
> [...]

Here is the summary with links:
  - [net-next] netpoll: Optimize skb refilling on critical path
    https://git.kernel.org/netdev/net-next/c/248f6571fd4c

You are awesome, thank you!
```diff
diff --git a/include/linux/netpoll.h b/include/linux/netpoll.h
index f91e50a76efd4..f6e8abe0b1f19 100644
--- a/include/linux/netpoll.h
+++ b/include/linux/netpoll.h
@@ -33,6 +33,7 @@ struct netpoll {
 	u16 local_port, remote_port;
 	u8 remote_mac[ETH_ALEN];
 	struct sk_buff_head skb_pool;
+	struct work_struct refill_wq;
 };
 
 struct netpoll_info {
diff --git a/net/core/netpoll.c b/net/core/netpoll.c
index 62b4041aae1ae..8a0df2b274a88 100644
--- a/net/core/netpoll.c
+++ b/net/core/netpoll.c
@@ -284,12 +284,13 @@ static struct sk_buff *find_skb(struct netpoll *np, int len, int reserve)
 	struct sk_buff *skb;
 
 	zap_completion_queue();
-	refill_skbs(np);
 repeat:
 
 	skb = alloc_skb(len, GFP_ATOMIC);
-	if (!skb)
+	if (!skb) {
 		skb = skb_dequeue(&np->skb_pool);
+		schedule_work(&np->refill_wq);
+	}
 
 	if (!skb) {
 		if (++count < 10) {
@@ -535,6 +536,7 @@ static void skb_pool_flush(struct netpoll *np)
 {
 	struct sk_buff_head *skb_pool;
 
+	cancel_work_sync(&np->refill_wq);
 	skb_pool = &np->skb_pool;
 	skb_queue_purge_reason(skb_pool, SKB_CONSUMED);
 }
@@ -621,6 +623,14 @@ int netpoll_parse_options(struct netpoll *np, char *opt)
 }
 EXPORT_SYMBOL(netpoll_parse_options);
 
+static void refill_skbs_work_handler(struct work_struct *work)
+{
+	struct netpoll *np =
+		container_of(work, struct netpoll, refill_wq);
+
+	refill_skbs(np);
+}
+
 int __netpoll_setup(struct netpoll *np, struct net_device *ndev)
 {
 	struct netpoll_info *npinfo;
@@ -666,6 +676,7 @@ int __netpoll_setup(struct netpoll *np, struct net_device *ndev)
 
 	/* fill up the skb queue */
 	refill_skbs(np);
+	INIT_WORK(&np->refill_wq, refill_skbs_work_handler);
 
 	/* last thing to do is link it to the net device structure */
 	rcu_assign_pointer(ndev->npinfo, npinfo);
```
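Read as a whole rather than as hunks, the post-patch fast path looks like this — a condensed reconstruction from the diff above, with unchanged retry/reserve logic elided:

```c
/* Post-patch fast path, condensed from the hunks above. Refilling now
 * happens in process context via refill_wq; the send path only pays
 * for a schedule_work() when it actually dips into the pool.
 */
static struct sk_buff *find_skb(struct netpoll *np, int len, int reserve)
{
	int count = 0;
	struct sk_buff *skb;

	zap_completion_queue();
repeat:
	skb = alloc_skb(len, GFP_ATOMIC);
	if (!skb) {
		/* Allocation failed: fall back to the pre-filled pool and
		 * ask the workqueue to top it back up, outside the caller's
		 * (possibly console_owner-locked) context. schedule_work()
		 * is safe to call from atomic context.
		 */
		skb = skb_dequeue(&np->skb_pool);
		schedule_work(&np->refill_wq);
	}
	/* ... retry/reserve logic unchanged by this patch ... */
}

/* Runs in process context, away from the console lock. */
static void refill_skbs_work_handler(struct work_struct *work)
{
	struct netpoll *np = container_of(work, struct netpoll, refill_wq);

	refill_skbs(np);
}
```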
netpoll tries to refill the skb queue on every packet send, independently
of whether packets are being consumed from the pool or not. This was
particularly problematic while being called from printk(), where the
operation would be done while holding the console lock.

Introduce a more intelligent approach to skb queue management. Instead
of constantly attempting to refill the queue, the system now defers
refilling to a work queue and only triggers the workqueue when a buffer
is actually dequeued. This change significantly reduces operations with
the lock held.

Add a work_struct to the netpoll structure for asynchronous refilling,
updating find_skb() to schedule refill work only when necessary (an skb
is dequeued).

These changes have demonstrated a 15% reduction in time spent during
netpoll_send_msg operations, especially when no SKBs are consumed from
the pool. When SKBs are being dequeued, the improvement is even better,
around 70%, mainly because refilling the SKB pool now happens outside
of the critical path (with the console_owner lock held).

Signed-off-by: Breno Leitao <leitao@debian.org>
---
The above results were obtained using the `function_graph` ftrace
tracer, with filtering enabled for the netpoll_send_udp() function. The
test was executed by running the netcons_basic.sh selftest hundreds of
times.
---
 include/linux/netpoll.h |  1 +
 net/core/netpoll.c      | 15 +++++++++++++--
 2 files changed, 14 insertions(+), 2 deletions(-)
---
base-commit: 5b62996184ca5bb86660bcd11d6c4560ce127df9
change-id: 20250304-netpoll_refill_v2-a5e48c402fd3

Best regards,
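One detail worth noting from the diff: the setup and teardown ordering keeps the deferred work from racing with pool teardown. Condensed from the patch (fragments of __netpoll_setup() and skb_pool_flush(), not the full functions):

```c
/* Setup, from __netpoll_setup(): prime the pool synchronously once,
 * then arm the work item that handles all later, deferred refills.
 */
refill_skbs(np);
INIT_WORK(&np->refill_wq, refill_skbs_work_handler);

/* Teardown, from skb_pool_flush(): wait for any in-flight refill to
 * finish before purging, so the handler cannot repopulate a pool that
 * is being torn down.
 */
cancel_work_sync(&np->refill_wq);
skb_queue_purge_reason(&np->skb_pool, SKB_CONSUMED);
```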