
[bpf] xsk: fix batch alloc API on non-coherent systems

Message ID 20240911191019.296480-1-maciej.fijalkowski@intel.com (mailing list archive)
State Accepted
Commit 4144a1059b47e821c82c3c82eb23a4c7312dce3a
Delegated to: Netdev Maintainers
Series [bpf] xsk: fix batch alloc API on non-coherent systems

Checks

Context Check Description
netdev/series_format success Single patches do not need cover letters
netdev/tree_selection success Clearly marked for bpf
netdev/ynl success Generated files up to date; no warnings/errors; no diff in generated;
netdev/fixes_present success Fixes tag present in non-next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 16 this patch: 16
netdev/build_tools success No tools touched, skip
netdev/cc_maintainers warning 6 maintainers not CCed: jonathan.lemon@gmail.com pabeni@redhat.com kuba@kernel.org edumazet@google.com hawk@kernel.org john.fastabend@gmail.com
netdev/build_clang success Errors and warnings before: 16 this patch: 16
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success Fixes tag looks correct
netdev/build_allmodconfig_warn success Errors and warnings before: 16 this patch: 16
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 38 lines checked
netdev/build_clang_rust success No Rust files in patch. Skipping build
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0
bpf/vmtest-bpf-VM_Test-0 success Logs for Lint
bpf/vmtest-bpf-VM_Test-2 success Logs for Unittests
bpf/vmtest-bpf-VM_Test-3 success Logs for Validate matrix.py
bpf/vmtest-bpf-VM_Test-1 success Logs for ShellCheck
bpf/vmtest-bpf-VM_Test-5 success Logs for aarch64-gcc / build-release
bpf/vmtest-bpf-VM_Test-4 success Logs for aarch64-gcc / build / build for aarch64 with gcc
bpf/vmtest-bpf-VM_Test-10 success Logs for aarch64-gcc / veristat
bpf/vmtest-bpf-VM_Test-15 success Logs for s390x-gcc / test (test_verifier, false, 360) / test_verifier on s390x with gcc
bpf/vmtest-bpf-VM_Test-9 success Logs for aarch64-gcc / test (test_verifier, false, 360) / test_verifier on aarch64 with gcc
bpf/vmtest-bpf-VM_Test-12 success Logs for s390x-gcc / build-release
bpf/vmtest-bpf-VM_Test-11 success Logs for s390x-gcc / build / build for s390x with gcc
bpf/vmtest-bpf-VM_Test-16 success Logs for s390x-gcc / veristat
bpf/vmtest-bpf-VM_Test-17 success Logs for set-matrix
bpf/vmtest-bpf-VM_Test-23 success Logs for x86_64-gcc / test (test_progs_no_alu32_parallel, true, 30) / test_progs_no_alu32_parallel on x86_64 with gcc
bpf/vmtest-bpf-VM_Test-19 success Logs for x86_64-gcc / build-release
bpf/vmtest-bpf-VM_Test-20 success Logs for x86_64-gcc / test (test_maps, false, 360) / test_maps on x86_64 with gcc
bpf/vmtest-bpf-VM_Test-24 success Logs for x86_64-gcc / test (test_progs_parallel, true, 30) / test_progs_parallel on x86_64 with gcc
bpf/vmtest-bpf-VM_Test-26 fail Logs for x86_64-gcc / veristat / veristat on x86_64 with gcc
bpf/vmtest-bpf-VM_Test-18 success Logs for x86_64-gcc / build / build for x86_64 with gcc
bpf/vmtest-bpf-VM_Test-25 success Logs for x86_64-gcc / test (test_verifier, false, 360) / test_verifier on x86_64 with gcc
bpf/vmtest-bpf-VM_Test-27 success Logs for x86_64-llvm-17 / build / build for x86_64 with llvm-17
bpf/vmtest-bpf-VM_Test-28 success Logs for x86_64-llvm-17 / build-release / build for x86_64 with llvm-17-O2
bpf/vmtest-bpf-VM_Test-29 success Logs for x86_64-llvm-17 / test (test_maps, false, 360) / test_maps on x86_64 with llvm-17
bpf/vmtest-bpf-VM_Test-33 success Logs for x86_64-llvm-17 / veristat
bpf/vmtest-bpf-VM_Test-32 success Logs for x86_64-llvm-17 / test (test_verifier, false, 360) / test_verifier on x86_64 with llvm-17
bpf/vmtest-bpf-VM_Test-34 success Logs for x86_64-llvm-18 / build / build for x86_64 with llvm-18
bpf/vmtest-bpf-VM_Test-40 success Logs for x86_64-llvm-18 / test (test_verifier, false, 360) / test_verifier on x86_64 with llvm-18
bpf/vmtest-bpf-VM_Test-41 success Logs for x86_64-llvm-18 / veristat
bpf/vmtest-bpf-VM_Test-35 success Logs for x86_64-llvm-18 / build-release / build for x86_64 with llvm-18-O2
bpf/vmtest-bpf-VM_Test-36 success Logs for x86_64-llvm-18 / test (test_maps, false, 360) / test_maps on x86_64 with llvm-18
bpf/vmtest-bpf-VM_Test-7 success Logs for aarch64-gcc / test (test_progs, false, 360) / test_progs on aarch64 with gcc
bpf/vmtest-bpf-VM_Test-22 success Logs for x86_64-gcc / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on x86_64 with gcc
bpf/vmtest-bpf-VM_Test-13 success Logs for s390x-gcc / test (test_progs, false, 360) / test_progs on s390x with gcc
bpf/vmtest-bpf-VM_Test-14 success Logs for s390x-gcc / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on s390x with gcc
bpf/vmtest-bpf-VM_Test-21 success Logs for x86_64-gcc / test (test_progs, false, 360) / test_progs on x86_64 with gcc
bpf/vmtest-bpf-VM_Test-8 success Logs for aarch64-gcc / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on aarch64 with gcc
bpf/vmtest-bpf-VM_Test-30 success Logs for x86_64-llvm-17 / test (test_progs, false, 360) / test_progs on x86_64 with llvm-17
bpf/vmtest-bpf-VM_Test-6 success Logs for aarch64-gcc / test (test_maps, false, 360) / test_maps on aarch64 with gcc
bpf/vmtest-bpf-PR fail PR summary
bpf/vmtest-bpf-VM_Test-31 success Logs for x86_64-llvm-17 / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on x86_64 with llvm-17
bpf/vmtest-bpf-VM_Test-37 success Logs for x86_64-llvm-18 / test (test_progs, false, 360) / test_progs on x86_64 with llvm-18
bpf/vmtest-bpf-VM_Test-38 success Logs for x86_64-llvm-18 / test (test_progs_cpuv4, false, 360) / test_progs_cpuv4 on x86_64 with llvm-18
bpf/vmtest-bpf-VM_Test-39 success Logs for x86_64-llvm-18 / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on x86_64 with llvm-18
netdev/contest success net-next-2024-09-14--00-00 (tests: 763)

Commit Message

Maciej Fijalkowski Sept. 11, 2024, 7:10 p.m. UTC
In cases where synchronizing DMA operations is necessary,
xsk_buff_alloc_batch() returns a single buffer instead of the requested
count. This puts pressure on drivers that use the batch API, as they
have to check for this corner case on their side and take care of
allocations themselves, which feels counterproductive. Let us improve
the core by looping over xp_alloc() @max times when the slow path needs
to be taken.

Another issue with the current interface, as spotted and fixed by
Dries, was that when a driver called xsk_buff_alloc_batch() with
@max == 0, the slow path still allocated and returned a single buffer,
which should not happen. The logic introduced in the first paragraph
kills two birds with one stone and addresses this problem as well.

Fixes: 47e4075df300 ("xsk: Batched buffer allocation for the pool")
Reported-and-tested-by: Dries De Winter <ddewinter@synamedia.com>
Co-developed-by: Dries De Winter <ddewinter@synamedia.com>
Signed-off-by: Dries De Winter <ddewinter@synamedia.com>
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
---
 net/xdp/xsk_buff_pool.c | 25 ++++++++++++++++++-------
 1 file changed, 18 insertions(+), 7 deletions(-)
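
To illustrate the driver-side impact: before this change, a driver on a
non-coherent (DMA-syncing) system could ask xsk_buff_alloc_batch() for a
full batch and get back a single buffer, so it had to loop around the
shortfall itself. Below is a minimal sketch of that pattern, assuming a
made-up helper; my_refill_rx() and MY_RX_BATCH are hypothetical names,
not part of the kernel API.

#include <net/xdp_sock_drv.h>

#define MY_RX_BATCH 64	/* hypothetical per-NAPI refill budget */

/* Hypothetical refill helper, for illustration only. Before the fix,
 * xsk_buff_alloc_batch() could legitimately return 1 on the slow path
 * even when more buffers were available, forcing this retry loop.
 */
static u32 my_refill_rx(struct xsk_buff_pool *pool, struct xdp_buff **bufs)
{
	u32 filled = 0;

	while (filled < MY_RX_BATCH) {
		u32 got = xsk_buff_alloc_batch(pool, bufs + filled,
					       MY_RX_BATCH - filled);
		if (!got)
			break;	/* pool genuinely out of buffers */
		filled += got;
	}

	return filled;	/* number of descriptors ready to post to HW */
}

With this patch the slow path itself loops over xp_alloc() up to @max
times, so the while loop above collapses into a single call, and
@max == 0 now correctly yields zero buffers.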

Comments

Magnus Karlsson Sept. 12, 2024, 11:04 a.m. UTC | #1
On Wed, 11 Sept 2024 at 21:10, Maciej Fijalkowski
<maciej.fijalkowski@intel.com> wrote:
>
> In cases where synchronizing DMA operations is necessary,
> xsk_buff_alloc_batch() returns a single buffer instead of the requested
> count. This puts pressure on drivers that use the batch API, as they
> have to check for this corner case on their side and take care of
> allocations themselves, which feels counterproductive. Let us improve
> the core by looping over xp_alloc() @max times when the slow path needs
> to be taken.
>
> Another issue with the current interface, as spotted and fixed by
> Dries, was that when a driver called xsk_buff_alloc_batch() with
> @max == 0, the slow path still allocated and returned a single buffer,
> which should not happen. The logic introduced in the first paragraph
> kills two birds with one stone and addresses this problem as well.

Thanks Maciej and Dries for finding and fixing this.

Acked-by: Magnus Karlsson <magnus.karlsson@intel.com>

> Fixes: 47e4075df300 ("xsk: Batched buffer allocation for the pool")
> Reported-and-tested-by: Dries De Winter <ddewinter@synamedia.com>
> Co-developed-by: Dries De Winter <ddewinter@synamedia.com>
> Signed-off-by: Dries De Winter <ddewinter@synamedia.com>
> Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> ---
>  net/xdp/xsk_buff_pool.c | 25 ++++++++++++++++++-------
>  1 file changed, 18 insertions(+), 7 deletions(-)
>
> diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
> index 29afa880ffa0..5e2e03042ef3 100644
> --- a/net/xdp/xsk_buff_pool.c
> +++ b/net/xdp/xsk_buff_pool.c
> @@ -623,20 +623,31 @@ static u32 xp_alloc_reused(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u3
>         return nb_entries;
>  }
>
> -u32 xp_alloc_batch(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u32 max)
> +static u32 xp_alloc_slow(struct xsk_buff_pool *pool, struct xdp_buff **xdp,
> +                        u32 max)
>  {
> -       u32 nb_entries1 = 0, nb_entries2;
> +       int i;
>
> -       if (unlikely(pool->dev && dma_dev_need_sync(pool->dev))) {
> +       for (i = 0; i < max; i++) {
>                 struct xdp_buff *buff;
>
> -               /* Slow path */
>                 buff = xp_alloc(pool);
> -               if (buff)
> -                       *xdp = buff;
> -               return !!buff;
> +               if (unlikely(!buff))
> +                       return i;
> +               *xdp = buff;
> +               xdp++;
>         }
>
> +       return max;
> +}
> +
> +u32 xp_alloc_batch(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u32 max)
> +{
> +       u32 nb_entries1 = 0, nb_entries2;
> +
> +       if (unlikely(pool->dev && dma_dev_need_sync(pool->dev)))
> +               return xp_alloc_slow(pool, xdp, max);
> +
>         if (unlikely(pool->free_list_cnt)) {
>                 nb_entries1 = xp_alloc_reused(pool, xdp, max);
>                 if (nb_entries1 == max)
> --
> 2.34.1
>
>
Alexei Starovoitov Sept. 13, 2024, 7:48 p.m. UTC | #2
On Thu, Sep 12, 2024 at 4:04 AM Magnus Karlsson
<magnus.karlsson@gmail.com> wrote:
>
> On Wed, 11 Sept 2024 at 21:10, Maciej Fijalkowski
> <maciej.fijalkowski@intel.com> wrote:
> >
> > In cases where synchronizing DMA operations is necessary,
> > xsk_buff_alloc_batch() returns a single buffer instead of the requested
> > count. This puts pressure on drivers that use the batch API, as they
> > have to check for this corner case on their side and take care of
> > allocations themselves, which feels counterproductive. Let us improve
> > the core by looping over xp_alloc() @max times when the slow path needs
> > to be taken.
> >
> > Another issue with the current interface, as spotted and fixed by
> > Dries, was that when a driver called xsk_buff_alloc_batch() with
> > @max == 0, the slow path still allocated and returned a single buffer,
> > which should not happen. The logic introduced in the first paragraph
> > kills two birds with one stone and addresses this problem as well.
>
> Thanks Maciej and Dries for finding and fixing this.
>
> Acked-by: Magnus Karlsson <magnus.karlsson@intel.com>

We already did the last bpf and bpf-next/net PRs before the merge window,
so reassigning to netdev.

Acked-by: Alexei Starovoitov <ast@kernel.org>
patchwork-bot+netdevbpf@kernel.org Sept. 14, 2024, 1:20 a.m. UTC | #3
Hello:

This patch was applied to netdev/net.git (main)
by Jakub Kicinski <kuba@kernel.org>:

On Wed, 11 Sep 2024 21:10:19 +0200 you wrote:
> In cases where synchronizing DMA operations is necessary,
> xsk_buff_alloc_batch() returns a single buffer instead of the requested
> count. This puts pressure on drivers that use the batch API, as they
> have to check for this corner case on their side and take care of
> allocations themselves, which feels counterproductive. Let us improve
> the core by looping over xp_alloc() @max times when the slow path needs
> to be taken.
> 
> [...]

Here is the summary with links:
  - [bpf] xsk: fix batch alloc API on non-coherent systems
    https://git.kernel.org/netdev/net/c/4144a1059b47

You are awesome, thank you!

Patch

diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
index 29afa880ffa0..5e2e03042ef3 100644
--- a/net/xdp/xsk_buff_pool.c
+++ b/net/xdp/xsk_buff_pool.c
@@ -623,20 +623,31 @@  static u32 xp_alloc_reused(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u3
 	return nb_entries;
 }
 
-u32 xp_alloc_batch(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u32 max)
+static u32 xp_alloc_slow(struct xsk_buff_pool *pool, struct xdp_buff **xdp,
+			 u32 max)
 {
-	u32 nb_entries1 = 0, nb_entries2;
+	int i;
 
-	if (unlikely(pool->dev && dma_dev_need_sync(pool->dev))) {
+	for (i = 0; i < max; i++) {
 		struct xdp_buff *buff;
 
-		/* Slow path */
 		buff = xp_alloc(pool);
-		if (buff)
-			*xdp = buff;
-		return !!buff;
+		if (unlikely(!buff))
+			return i;
+		*xdp = buff;
+		xdp++;
 	}
 
+	return max;
+}
+
+u32 xp_alloc_batch(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u32 max)
+{
+	u32 nb_entries1 = 0, nb_entries2;
+
+	if (unlikely(pool->dev && dma_dev_need_sync(pool->dev)))
+		return xp_alloc_slow(pool, xdp, max);
+
 	if (unlikely(pool->free_list_cnt)) {
 		nb_entries1 = xp_alloc_reused(pool, xdp, max);
 		if (nb_entries1 == max)