[bpf-next,v3,0/4] xdp: recycle Page Pool backed skbs built from XDP frames

Message ID 20230313214300.1043280-1-aleksander.lobakin@intel.com

Message

Alexander Lobakin March 13, 2023, 9:42 p.m. UTC
Yeah, I still remember that "Who needs cpumap nowadays" (c), but anyway.

__xdp_build_skb_from_frame() missed the moment when the networking stack
became able to recycle skb pages backed by a page_pool. This made e.g.
cpumap redirect even less effective than a plain %XDP_PASS. veth was also
affected in some scenarios.
A lot of drivers already use skb_mark_for_recycle(); it has been around
for almost two years and there seem to be no issues with using it in the
generic code as well. {__,}xdp_release_frame() can then be removed, as it
loses its last user.
Page Pool then becomes zero-alloc (or nearly so) in the above-mentioned
cases, too. Other memory type models (who needs them at this point?) are
unchanged.
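
The core change is small. A rough sketch of the idea behind patch 3 in
__xdp_build_skb_from_frame() (illustrative only, not the exact hunk; it
relies only on the existing xdp_frame/page_pool helpers):

        /* Once the skb has been built around the frame's page: if the
         * memory came from a page_pool, mark the skb so the stack hands
         * its pages back to the pool on free instead of releasing them.
         */
        if (xdpf->mem.type == MEM_TYPE_PAGE_POOL)
                skb_mark_for_recycle(skb);

With the skb marked this way, the old workaround of releasing the page
from its pool via xdp_release_frame() before passing the skb up is no
longer needed there, which is what leaves {__,}xdp_release_frame() with
no users and lets patch 4 drop it.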

Some numbers from one Xeon Platinum core bombarded with 27 Mpps of 64-byte
IPv6 UDP, iavf with XDP[0] (CONFIG_PAGE_POOL_STATS enabled):

Plain %XDP_PASS on baseline, Page Pool driver:

src cpu Rx     drops  dst cpu Rx
  2.1 Mpps       N/A    2.1 Mpps

cpumap redirect (cross-core, w/o leaving its NUMA node) on baseline:

  6.8 Mpps  5.0 Mpps    1.8 Mpps

cpumap redirect with skb PP recycling:

  7.9 Mpps  5.7 Mpps    2.2 Mpps
                       +22% (from cpumap redir on baseline)

[0] https://github.com/alobakin/linux/commits/iavf-xdp

Alexander Lobakin (4):
  selftests/bpf: robustify test_xdp_do_redirect with more payload magics
  net: page_pool, skbuff: make skb_mark_for_recycle() always available
  xdp: recycle Page Pool backed skbs built from XDP frames
  xdp: remove unused {__,}xdp_release_frame()

 include/linux/skbuff.h                        |  4 +--
 include/net/xdp.h                             | 29 ---------------
 net/core/xdp.c                                | 19 ++--------
 .../bpf/progs/test_xdp_do_redirect.c          | 36 +++++++++++++------
 4 files changed, 30 insertions(+), 58 deletions(-)

---
From v2[1]:
* fix the test_xdp_do_redirect selftest failing after the series: it was
  relying on the fact that %XDP_PASS frames can't be recycled on veth
  (BPF CI, Alexei);
* explain "w/o leaving its node" in the cover letter (Jesper).

From v1[2]:
* make skb_mark_for_recycle() always available, otherwise there are build
  failures on non-PP systems (kbuild bot); a sketch of this is at the end
  of this letter;
* 'Page Pool' -> 'page_pool' when it's about a page_pool instance, not
  API (Jesper);
* expanded test system info a bit in the cover letter (Jesper).

[1] https://lore.kernel.org/bpf/20230303133232.2546004-1-aleksander.lobakin@intel.com
[2] https://lore.kernel.org/bpf/20230301160315.1022488-1-aleksander.lobakin@intel.com
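
For reference, one possible shape of the skb_mark_for_recycle() change
from patch 2 (a sketch assuming the only blocker is the CONFIG_PAGE_POOL
ifdef around the helper in include/linux/skbuff.h; the actual diff may
differ):

        /* Define the helper unconditionally so that generic code such as
         * net/core/xdp.c can call it even when CONFIG_PAGE_POOL is
         * disabled; in that case it is effectively a no-op.
         */
        static inline void skb_mark_for_recycle(struct sk_buff *skb)
        {
        #ifdef CONFIG_PAGE_POOL
                skb->pp_recycle = 1;
        #endif
        }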

Comments

Alexander Lobakin March 16, 2023, 11:57 a.m. UTC | #1
From: Alexander Lobakin <aleksander.lobakin@intel.com>
Date: Mon, 13 Mar 2023 22:42:56 +0100

> Yeah, I still remember that "Who needs cpumap nowadays" (c), but anyway.
> [...]

Sorry, our SMTP proxy went crazy and resent all my messages sent via
git-send-email several times over the last couple of days. Please
ignore this.

Thanks,
Olek