Message ID | 1632291439-823-1-git-send-email-lirongqing@baidu.com |
---|---|
State | Accepted |
Commit | a5df6333f1a08380c3b94a02105482263711ed3a |
Delegated to: | Netdev Maintainers |
Series | [net-next] skbuff: pass the result of data ksize to __build_skb_around |
Context | Check | Description |
---|---|---|
netdev/cover_letter | success |
netdev/fixes_present | success |
netdev/patch_count | success |
netdev/tree_selection | success | Clearly marked for net-next |
netdev/subject_prefix | success |
netdev/cc_maintainers | fail | 9 maintainers not CCed: pabeni@redhat.com willemb@google.com gnault@redhat.com davem@davemloft.net alobakin@pm.me cong.wang@bytedance.com kuba@kernel.org vvs@virtuozzo.com jonathan.lemon@gmail.com |
netdev/source_inline | success | Was 0 now: 0 |
netdev/verify_signedoff | success |
netdev/module_param | success | Was 0 now: 0 |
netdev/build_32bit | success | Errors and warnings before: 1 this patch: 1 |
netdev/kdoc | success | Errors and warnings before: 0 this patch: 0 |
netdev/verify_fixes | success |
netdev/checkpatch | success | total: 0 errors, 0 warnings, 0 checks, 27 lines checked |
netdev/build_allmodconfig_warn | success | Errors and warnings before: 1 this patch: 1 |
netdev/header_inline | success |
Hello:

This patch was applied to netdev/net-next.git (refs/heads/master):

On Wed, 22 Sep 2021 14:17:19 +0800 you wrote:
> Avoid to call ksize again in __build_skb_around by passing
> the result of data ksize to __build_skb_around
>
> nginx stress test shows this change can reduce ksize cpu usage,
> and give a little performance boost
>
> Signed-off-by: Li RongQing <lirongqing@baidu.com>
>
> [...]

Here is the summary with links:
  - [net-next] skbuff: pass the result of data ksize to __build_skb_around
    https://git.kernel.org/netdev/net-next/c/a5df6333f1a0

You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 9240af2..1e0a930 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -397,8 +397,9 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
 {
         struct kmem_cache *cache;
         struct sk_buff *skb;
-        u8 *data;
+        unsigned int osize;
         bool pfmemalloc;
+        u8 *data;
 
         cache = (flags & SKB_ALLOC_FCLONE)
                 ? skbuff_fclone_cache : skbuff_head_cache;
@@ -430,7 +431,8 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
          * Put skb_shared_info exactly at the end of allocated zone,
          * to allow max possible filling before reallocation.
          */
-        size = SKB_WITH_OVERHEAD(ksize(data));
+        osize = ksize(data);
+        size = SKB_WITH_OVERHEAD(osize);
         prefetchw(data + size);
 
         /*
@@ -439,7 +441,7 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
          * the tail pointer in struct sk_buff!
          */
         memset(skb, 0, offsetof(struct sk_buff, tail));
-        __build_skb_around(skb, data, 0);
+        __build_skb_around(skb, data, osize);
         skb->pfmemalloc = pfmemalloc;
 
         if (flags & SKB_ALLOC_FCLONE) {
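Why passing osize helps: __build_skb_around() treats a frag_size of zero
as "query the slab allocator for the usable size yourself", so the old
__build_skb_around(skb, data, 0) call repeated a ksize() lookup that
__alloc_skb() had already performed. A simplified sketch of the callee's
size handling (not the exact kernel source) looks like this:

/* Simplified sketch of __build_skb_around()'s size handling; the real
 * function also initialises skb->head, skb->data, skb->tail/end and the
 * skb_shared_info area. A frag_size of 0 forces a second ksize() call,
 * while a non-zero value is trusted as the usable allocation size.
 */
static void __build_skb_around(struct sk_buff *skb, void *data,
                               unsigned int frag_size)
{
        unsigned int size = frag_size ? : ksize(data);

        size -= SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
        /* ... set up skb->head/data/tail/end and skb_shared_info ... */
}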
Avoid calling ksize() a second time in __build_skb_around() by passing
the result of ksize(data) down to it.

An nginx stress test shows this change reduces ksize() CPU usage and
gives a small performance boost.

Signed-off-by: Li RongQing <lirongqing@baidu.com>
---
 net/core/skbuff.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)
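The same caller-caches-the-size pattern can be demonstrated outside the
kernel. The sketch below is a userspace analogue only: malloc_usable_size()
is a glibc extension standing in for ksize(), and build_around() is a
hypothetical stand-in for __build_skb_around(), not a kernel API.

#include <malloc.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical callee: trusts a non-zero size from the caller and only
 * falls back to re-querying the allocator when given 0, mirroring the
 * frag_size ? : ksize(data) fallback in the kernel.
 */
static void build_around(unsigned char *data, size_t usable)
{
        size_t size = usable ? usable : malloc_usable_size(data);

        printf("building around %zu usable bytes\n", size);
}

int main(void)
{
        unsigned char *data = malloc(1000);

        if (!data)
                return 1;

        /* Query the usable size once and pass the result down, instead
         * of letting the callee repeat the lookup -- the same saving the
         * patch makes for ksize() in __alloc_skb().
         */
        size_t osize = malloc_usable_size(data);

        build_around(data, osize);
        free(data);
        return 0;
}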