
[net-next,5/5] skbuff: refill skb_cache early from deferred-to-consume entries

Message ID 20210111182801.12609-5-alobakin@pm.me (mailing list archive)
State Changes Requested
Delegated to: Netdev Maintainers
Series skbuff: introduce skbuff_heads bulking and reusing

Checks

Context                          Check    Description
netdev/cover_letter              success  Link
netdev/fixes_present             success  Link
netdev/patch_count               success  Link
netdev/tree_selection            success  Clearly marked for net-next
netdev/subject_prefix            success  Link
netdev/cc_maintainers            success  CCed 11 of 11 maintainers
netdev/source_inline             success  Was 0 now: 0
netdev/verify_signedoff          success  Link
netdev/module_param              success  Was 0 now: 0
netdev/build_32bit               success  Errors and warnings before: 1 this patch: 1
netdev/kdoc                      success  Errors and warnings before: 0 this patch: 0
netdev/verify_fixes              success  Link
netdev/checkpatch                success  total: 0 errors, 0 warnings, 0 checks, 11 lines checked
netdev/build_allmodconfig_warn   success  Errors and warnings before: 1 this patch: 1
netdev/header_inline             success  Link
netdev/stable                    success  Stable not CCed

Commit Message

Alexander Lobakin Jan. 11, 2021, 6:29 p.m. UTC
Instead of unconditionally queueing ready-to-consume skbuff_heads
onto flush_skb_cache, feed them into skb_cache as long as it is not
already full.
This greatly reduces the frequency of kmem_cache_alloc_bulk() calls.

Signed-off-by: Alexander Lobakin <alobakin@pm.me>
---
 net/core/skbuff.c | 5 +++++
 1 file changed, 5 insertions(+)
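
For context, the per-CPU cache that the hunk below touches presumably looks
roughly like the sketch here. The field names are taken from the diff and the
earlier patches in this series; the exact layout, the types and the
NAPI_SKB_CACHE_SIZE value are assumptions made purely for illustration (the
real structure also carries the page-frag allocator state):

/* Hypothetical model of the per-CPU NAPI cache; not the actual definition. */
#define NAPI_SKB_CACHE_SIZE	64	/* assumed; matches the mainline value at the time */

struct napi_alloc_cache_model {
	unsigned int	skb_count;				/* heads ready for reuse */
	void		*skb_cache[NAPI_SKB_CACHE_SIZE];	/* reusable skbuff_heads */
	unsigned int	flush_skb_count;			/* heads queued for bulk freeing */
	void		*flush_skb_cache[NAPI_SKB_CACHE_SIZE];	/* deferred-to-flush skbuff_heads */
};

With this patch, _kfree_skb_defer() tries to park a freed head in skb_cache
first and only falls back to the flush_skb_cache queue once the reuse cache
is full.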

Patch

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 57a7307689f3..ba0d5611635e 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -904,6 +904,11 @@ static inline void _kfree_skb_defer(struct sk_buff *skb)
 	/* drop skb->head and call any destructors for packet */
 	skb_release_all(skb);
 
+	if (nc->skb_count < NAPI_SKB_CACHE_SIZE) {
+		nc->skb_cache[nc->skb_count++] = skb;
+		return;
+	}
+
 	/* record skb to CPU local list */
 	nc->flush_skb_cache[nc->flush_skb_count++] = skb;
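
To see why refilling skb_cache from freed heads cuts down on
kmem_cache_alloc_bulk() calls, here is a small self-contained user-space model
of the mechanism. Every name, size and function below is made up for
illustration; it is not kernel code, only a sketch of the alloc/free interplay
the patch relies on:

#include <stdio.h>
#include <stdlib.h>

#define CACHE_SIZE	64	/* models NAPI_SKB_CACHE_SIZE */
#define BULK		16	/* models the bulk-allocation batch size */

static void *skb_cache[CACHE_SIZE];
static unsigned int skb_count;
static unsigned long bulk_alloc_calls;

/* Models kmem_cache_alloc_bulk(): refill the reuse cache in one shot. */
static void bulk_refill(void)
{
	bulk_alloc_calls++;
	while (skb_count < BULK)
		skb_cache[skb_count++] = malloc(64);
}

/* Allocation path: take a cached head, bulk-refill only when empty. */
static void *head_alloc(void)
{
	if (!skb_count)
		bulk_refill();
	return skb_cache[--skb_count];
}

/* Free path with this patch applied: refill the reuse cache first and
 * fall back to an immediate free (standing in for the flush queue)
 * only once the cache is full.
 */
static void head_free_deferred(void *head)
{
	if (skb_count < CACHE_SIZE) {
		skb_cache[skb_count++] = head;	/* early refill */
		return;
	}
	free(head);
}

int main(void)
{
	int i;

	/* Alternate alloc/free the way a NAPI poll loop roughly does.
	 * With the early refill the bulk allocator is hit exactly once;
	 * without it, every BULK allocations would trigger another call.
	 */
	for (i = 0; i < 1024; i++)
		head_free_deferred(head_alloc());

	printf("bulk alloc calls: %lu\n", bulk_alloc_calls);

	while (skb_count)
		free(skb_cache[--skb_count]);

	return 0;
}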