Message ID | 00c5b7641c0c854b630a80038d0131d148c2c81a.1712270285.git.asml.silence@gmail.com
---|---
State | Changes Requested
Delegated to | Netdev Maintainers
Series | [RESEND,net-next,v3] net: cache for same cpu skb_attempt_defer_free
On Fri, Apr 5, 2024 at 1:38 AM Pavel Begunkov <asml.silence@gmail.com> wrote:
>
> Optimise skb_attempt_defer_free() when run by the same CPU the skb was
> allocated on. Instead of __kfree_skb() -> kmem_cache_free() we can
> disable softirqs and put the buffer into cpu local caches.
>
[...]
>
> +static void kfree_skb_napi_cache(struct sk_buff *skb)
> +{
> +       /* if SKB is a clone, don't handle this case */
> +       if (skb->fclone != SKB_FCLONE_UNAVAILABLE) {
> +               __kfree_skb(skb);
> +               return;
> +       }
> +
> +       local_bh_disable();
> +       __napi_kfree_skb(skb, SKB_DROP_REASON_NOT_SPECIFIED);

This needs to be SKB_CONSUMED
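For context: SKB_CONSUMED marks a buffer that reached a normal end of
life, while any SKB_DROP_REASON_* value is surfaced by the kfree_skb
tracepoint as a packet drop. A paraphrased sketch of the dispatch in
kfree_skb_reason() in net/core/skbuff.c (simplified, not the verbatim
source):

	/* sketch: how tracing distinguishes a consume from a drop */
	if (reason == SKB_CONSUMED)
		trace_consume_skb(skb, __builtin_return_address(0));
	else
		trace_kfree_skb(skb, __builtin_return_address(0), reason);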
On 4/5/24 09:46, Eric Dumazet wrote:
> On Fri, Apr 5, 2024 at 1:38 AM Pavel Begunkov <asml.silence@gmail.com> wrote:
>>
>> Optimise skb_attempt_defer_free() when run by the same CPU the skb was
>> allocated on. Instead of __kfree_skb() -> kmem_cache_free() we can
>> disable softirqs and put the buffer into cpu local caches.
>>
[...]
>> +       local_bh_disable();
>> +       __napi_kfree_skb(skb, SKB_DROP_REASON_NOT_SPECIFIED);
>
> This needs to be SKB_CONSUMED

Net folks and yourself were previously strictly insisting that
every patch should do only one thing at a time without introducing
unrelated changes. Considering it replaces __kfree_skb(), which
passes SKB_DROP_REASON_NOT_SPECIFIED, that change should rather go
in a separate patch.
On Fri, Apr 5, 2024 at 1:55 PM Pavel Begunkov <asml.silence@gmail.com> wrote:
>
> On 4/5/24 09:46, Eric Dumazet wrote:
[...]
> > This needs to be SKB_CONSUMED
>
> Net folks and yourself were previously strictly insisting that
> every patch should do only one thing at a time without introducing
> unrelated changes. Considering it replaces __kfree_skb(), which
> passes SKB_DROP_REASON_NOT_SPECIFIED, that change should rather go
> in a separate patch.

OK, I will send a patch myself.

__kfree_skb(skb) had no drop reason yet. Here you are explicitly
adding one wrong reason, which is why I gave feedback.
Hello Eric,

On Fri, Apr 5, 2024 at 8:18 PM Eric Dumazet <edumazet@google.com> wrote:
>
> On Fri, Apr 5, 2024 at 1:55 PM Pavel Begunkov <asml.silence@gmail.com> wrote:
[...]
> > Net folks and yourself were previously strictly insisting that
> > every patch should do only one thing at a time without introducing
> > unrelated changes. Considering it replaces __kfree_skb(), which
> > passes SKB_DROP_REASON_NOT_SPECIFIED, that change should rather go
> > in a separate patch.
>
> OK, I will send a patch myself.
>
> __kfree_skb(skb) had no drop reason yet.

Can I ask one question: would it be meaningless to add a drop reason to
this internal function? I looked at its callers and noticed that none
of them has an important reason to report.

> Here you are explicitly adding one wrong reason, which is why I gave
> feedback.

Agreed. It's also what I suggested before
(https://lore.kernel.org/netdev/CAL+tcoA=3KNFGNv4DSqnWcUu4LTY3Pz5ex+fRr4LkyS8ZNNKwQ@mail.gmail.com/).
On 4/5/24 13:18, Eric Dumazet wrote:
> On Fri, Apr 5, 2024 at 1:55 PM Pavel Begunkov <asml.silence@gmail.com> wrote:
[...]
>> Net folks and yourself were previously strictly insisting that
>> every patch should do only one thing at a time without introducing
>> unrelated changes. Considering it replaces __kfree_skb(), which
>> passes SKB_DROP_REASON_NOT_SPECIFIED, that change should rather go
>> in a separate patch.
>
> OK, I will send a patch myself.

Ok, alternatively, I can make it a series adding it on top.

> __kfree_skb(skb) had no drop reason yet.
>
> Here you are explicitly adding one wrong reason, which is why I gave
> feedback.
On Fri, Apr 5, 2024 at 2:29 PM Jason Xing <kerneljasonxing@gmail.com> wrote:
>
> Hello Eric,
>
[...]
> Can I ask one question: would it be meaningless to add a drop reason to
> this internal function? I looked at its callers and noticed that none
> of them has an important reason to report.

There are false positives at this moment whenever frag_lists are used
in rx skbs.

(Small MAX_SKB_FRAGS, small MTU, big GRO size)
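To spell out the false positive (a paraphrased call chain based on
net/core/skbuff.c, not the verbatim source): when an rx skb carries a
frag_list, a plain __kfree_skb() propagates its implicit NOT_SPECIFIED
reason to every chained segment, and each segment then fires the
kfree_skb tracepoint as if it were a real drop:

	__kfree_skb(skb)	/* reason = SKB_DROP_REASON_NOT_SPECIFIED */
	  -> skb_release_data(skb, reason)
	       -> kfree_skb_list_reason(shinfo->frag_list, reason)
	            -> trace_kfree_skb(..., reason)	/* reported as a drop */

The perf trace in the next message shows exactly this path.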
On Fri, Apr 5, 2024 at 2:35 PM Pavel Begunkov <asml.silence@gmail.com> wrote:
> Ok, alternatively, I can make it a series adding it on top.
Sure thing, thanks.
On Fri, Apr 5, 2024 at 2:38 PM Eric Dumazet <edumazet@google.com> wrote:

> There are false positives at this moment whenever frag_lists are used
> in rx skbs.
>
> (Small MAX_SKB_FRAGS, small MTU, big GRO size)

perf record -a -g -e skb:kfree_skb sleep 1
[ perf record: Woken up 84 times to write data ]
[ perf record: Captured and wrote 21.594 MB perf.data (95653 samples) ]

perf script

netserver 43113 [051] 2053323.508683: skb:kfree_skb:
skbaddr=0xffff8d699e0b8f00 protocol=34525 location=skb_release_data
reason: NOT_SPECIFIED
            7fffa5bcadb8 kfree_skb_list_reason ([kernel.kallsyms])
            7fffa5bcadb8 kfree_skb_list_reason ([kernel.kallsyms])
            7fffa5bcb7b8 skb_release_data ([kernel.kallsyms])
            7fffa5bcaa5f __kfree_skb ([kernel.kallsyms])
            7fffa5bd7099 skb_attempt_defer_free ([kernel.kallsyms])
            7fffa5ce81fa tcp_recvmsg_locked ([kernel.kallsyms])
            7fffa5ce7cf9 tcp_recvmsg ([kernel.kallsyms])
            7fffa5dac407 inet6_recvmsg ([kernel.kallsyms])
            7fffa5bb9bc2 sock_recvmsg ([kernel.kallsyms])
            7fffa5bbbc8b __sys_recvfrom ([kernel.kallsyms])
            7fffa5bbbd3a __x64_sys_recvfrom ([kernel.kallsyms])
            7fffa5eeb367 do_syscall_64 ([kernel.kallsyms])
            7fffa600312a entry_SYSCALL_64_after_hwframe ([kernel.kallsyms])
                  1220d2 __libc_recv (/usr/grte/v3/lib64/libc-2.15.so)
On Fri, Apr 5, 2024 at 8:42 PM Eric Dumazet <edumazet@google.com> wrote:
>
> On Fri, Apr 5, 2024 at 2:38 PM Eric Dumazet <edumazet@google.com> wrote:
>
> > There are false positives at this moment whenever frag_lists are used
> > in rx skbs.
> >
> > (Small MAX_SKB_FRAGS, small MTU, big GRO size)
>
> perf record -a -g -e skb:kfree_skb sleep 1
[...]
>             7fffa5bcadb8 kfree_skb_list_reason ([kernel.kallsyms])
>             7fffa5bcb7b8 skb_release_data ([kernel.kallsyms])
>             7fffa5bcaa5f __kfree_skb ([kernel.kallsyms])
>             7fffa5bd7099 skb_attempt_defer_free ([kernel.kallsyms])
[...]

Thanks Eric. I'll give up on this idea.
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 2a5ce6667bbb..c4d36e462a9a 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -6968,6 +6968,19 @@ void __skb_ext_put(struct skb_ext *ext)
 EXPORT_SYMBOL(__skb_ext_put);
 #endif /* CONFIG_SKB_EXTENSIONS */
 
+static void kfree_skb_napi_cache(struct sk_buff *skb)
+{
+	/* if SKB is a clone, don't handle this case */
+	if (skb->fclone != SKB_FCLONE_UNAVAILABLE) {
+		__kfree_skb(skb);
+		return;
+	}
+
+	local_bh_disable();
+	__napi_kfree_skb(skb, SKB_DROP_REASON_NOT_SPECIFIED);
+	local_bh_enable();
+}
+
 /**
  * skb_attempt_defer_free - queue skb for remote freeing
  * @skb: buffer
@@ -6986,7 +6999,7 @@ void skb_attempt_defer_free(struct sk_buff *skb)
 
 	if (WARN_ON_ONCE(cpu >= nr_cpu_ids) || !cpu_online(cpu) ||
 	    cpu == raw_smp_processor_id()) {
-nodefer:	__kfree_skb(skb);
+nodefer:	kfree_skb_napi_cache(skb);
 		return;
 	}
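With the drop reason changed as Eric requested, the same-CPU fast path
would presumably become (a sketch of the suggested follow-up, not a
committed patch):

	local_bh_disable();
	__napi_kfree_skb(skb, SKB_CONSUMED);
	local_bh_enable();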
Optimise skb_attempt_defer_free() when run by the same CPU the skb was
allocated on. Instead of __kfree_skb() -> kmem_cache_free() we can
disable softirqs and put the buffer into CPU-local caches.

CPU-bound TCP ping-pong style benchmarking (i.e. netbench) showed a 1%
throughput increase (392.2 -> 396.4 Krps). Cross-checking with profiles,
the total CPU share of skb_attempt_defer_free() dropped by 0.6%. Note,
I'd expect the win to double with rx-only benchmarks, as the
optimisation targets the receive path, but the test spends >55% of CPU
time doing writes.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---

v3: rebased, no changes otherwise

v2: pass @napi_safe=true by using __napi_kfree_skb()

 net/core/skbuff.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)
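For background on where the saving comes from: __napi_kfree_skb() parks
the sk_buff in the per-CPU napi_alloc_cache rather than handing it back
to the slab allocator, so a later allocation on the same CPU can reuse
it without a kmem_cache round trip. A simplified sketch of the relevant
per-CPU structure in net/core/skbuff.c (paraphrased from the kernel
source of this era, not verbatim):

	struct napi_alloc_cache {
		struct page_frag_cache page;
		unsigned int skb_count;			/* cached skbs */
		void *skb_cache[NAPI_SKB_CACHE_SIZE];	/* recycled slots */
	};
	static DEFINE_PER_CPU(struct napi_alloc_cache, napi_alloc_cache);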