| Message ID | 167421646327.1321776.7390743166998776914.stgit@firesoul (mailing list archive) |
|---|---|
| State | Accepted |
| Commit | 3176eb82681ec9c8af31c6588ddedcc6cfb9e445 |
| Delegated to: | Netdev Maintainers |
| Series | [net-next,V2] net: avoid irqsave in skb_defer_free_flush |
Hello:

This patch was applied to netdev/net-next.git (master) by Jakub Kicinski <kuba@kernel.org>:

On Fri, 20 Jan 2023 13:07:43 +0100 you wrote:
> The spin_lock irqsave/restore API variant in skb_defer_free_flush can
> be replaced with the faster spin_lock irq variant, which doesn't need
> to read and restore the CPU flags.
>
> Using the unconditional irq "disable/enable" API variant is safe,
> because the skb_defer_free_flush() function is only called during
> NAPI-RX processing in net_rx_action(), where it is known the IRQs
> are enabled.
>
> [...]

Here is the summary with links:
  - [net-next,V2] net: avoid irqsave in skb_defer_free_flush
    https://git.kernel.org/netdev/net-next/c/3176eb82681e

You are awesome, thank you!
```diff
diff --git a/net/core/dev.c b/net/core/dev.c
index cf78f35bc0b9..9c60190fe352 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -6616,17 +6616,16 @@ static int napi_threaded_poll(void *data)
 static void skb_defer_free_flush(struct softnet_data *sd)
 {
 	struct sk_buff *skb, *next;
-	unsigned long flags;
 
 	/* Paired with WRITE_ONCE() in skb_attempt_defer_free() */
 	if (!READ_ONCE(sd->defer_list))
 		return;
 
-	spin_lock_irqsave(&sd->defer_lock, flags);
+	spin_lock_irq(&sd->defer_lock);
 	skb = sd->defer_list;
 	sd->defer_list = NULL;
 	sd->defer_count = 0;
-	spin_unlock_irqrestore(&sd->defer_lock, flags);
+	spin_unlock_irq(&sd->defer_lock);
 
 	while (skb != NULL) {
 		next = skb->next;
```