
[bpf-next] selftests/xsk: fix for SEND_RECEIVE_UNALIGNED test.

Message ID 20231103142936.393654-1-tushar.vyavahare@intel.com (mailing list archive)
State Changes Requested
Delegated to: BPF
Series [bpf-next] selftests/xsk: fix for SEND_RECEIVE_UNALIGNED test.

Checks

Context Check Description
bpf/vmtest-bpf-next-PR success PR summary
netdev/series_format success Single patches do not need cover letters
netdev/tree_selection success Clearly marked for bpf-next
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 9 this patch: 9
netdev/cc_maintainers warning 12 maintainers not CCed: sdf@google.com jolsa@kernel.org andrii@kernel.org john.fastabend@gmail.com kpsingh@kernel.org mykolal@fb.com song@kernel.org shuah@kernel.org linux-kselftest@vger.kernel.org yonghong.song@linux.dev haoluo@google.com martin.lau@linux.dev
netdev/build_clang success Errors and warnings before: 9 this patch: 9
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success Fixes tag looks correct
netdev/build_allmodconfig_warn success Errors and warnings before: 9 this patch: 9
netdev/checkpatch warning WARNING: line length of 81 exceeds 80 columns WARNING: line length of 83 exceeds 80 columns WARNING: line length of 86 exceeds 80 columns WARNING: line length of 87 exceeds 80 columns WARNING: line length of 88 exceeds 80 columns WARNING: line length of 91 exceeds 80 columns WARNING: line length of 95 exceeds 80 columns WARNING: line length of 97 exceeds 80 columns WARNING: line length of 98 exceeds 80 columns WARNING: line length of 99 exceeds 80 columns
netdev/build_clang_rust success No Rust files in patch. Skipping build
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0
bpf/vmtest-bpf-next-VM_Test-12 success Logs for x86_64-gcc / test (test_maps, false, 360) / test_maps on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-13 success Logs for x86_64-gcc / test (test_progs, false, 360) / test_progs on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-9 success Logs for s390x-gcc / build / build for s390x with gcc
bpf/vmtest-bpf-next-VM_Test-14 success Logs for s390x-gcc / veristat
bpf/vmtest-bpf-next-VM_Test-16 success Logs for x86_64-gcc / build / build for x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-15 success Logs for set-matrix
bpf/vmtest-bpf-next-VM_Test-21 success Logs for x86_64-gcc / test (test_progs_parallel, true, 30) / test_progs_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-20 success Logs for x86_64-gcc / test (test_progs_no_alu32_parallel, true, 30) / test_progs_no_alu32_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-22 success Logs for x86_64-gcc / test (test_verifier, false, 360) / test_verifier on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-23 success Logs for x86_64-gcc / veristat / veristat on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-27 success Logs for x86_64-llvm-16 / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-25 success Logs for x86_64-llvm-16 / test (test_maps, false, 360) / test_maps on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-26 success Logs for x86_64-llvm-16 / test (test_progs, false, 360) / test_progs on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-28 success Logs for x86_64-llvm-16 / test (test_verifier, false, 360) / test_verifier on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-29 success Logs for x86_64-llvm-16 / veristat
bpf/vmtest-bpf-next-VM_Test-1 success Logs for ShellCheck
bpf/vmtest-bpf-next-VM_Test-0 success Logs for Lint
bpf/vmtest-bpf-next-VM_Test-2 success Logs for Validate matrix.py
bpf/vmtest-bpf-next-VM_Test-5 success Logs for set-matrix
bpf/vmtest-bpf-next-VM_Test-3 success Logs for aarch64-gcc / build / build for aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-24 success Logs for x86_64-llvm-16 / veristat
bpf/vmtest-bpf-next-VM_Test-11 success Logs for x86_64-gcc / build / build for x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-10 success Logs for set-matrix
bpf/vmtest-bpf-next-VM_Test-19 success Logs for x86_64-llvm-16 / build / build for x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-8 success Logs for aarch64-gcc / veristat
bpf/vmtest-bpf-next-VM_Test-4 success Logs for aarch64-gcc / test (test_maps, false, 360) / test_maps on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-6 success Logs for aarch64-gcc / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-7 success Logs for aarch64-gcc / test (test_verifier, false, 360) / test_verifier on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-17 success Logs for x86_64-gcc / test (test_verifier, false, 360) / test_verifier on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-18 success Logs for x86_64-gcc / veristat / veristat on x86_64 with gcc

Commit Message

Tushar Vyavahare Nov. 3, 2023, 2:29 p.m. UTC
Fix a test broken by the shared umem test and framework enhancement commit.

Correct the current implementation of pkt_stream_replace_half() by
ensuring that nb_valid_entries is not set to half, as this is not true
for all the tests.

Create a new function, pkt_modify(), that modifies packets to meet
specific requirements while accurately maintaining the valid packet
count, preventing inconsistencies in packet tracking.

Fixes: 6d198a89c004 ("selftests/xsk: Add a test for shared umem feature")
Reported-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Tushar Vyavahare <tushar.vyavahare@intel.com>
---
 tools/testing/selftests/bpf/xskxceiver.c | 71 ++++++++++++++++--------
 1 file changed, 47 insertions(+), 24 deletions(-)

Comments

Magnus Karlsson Nov. 8, 2023, 9:35 a.m. UTC | #1
On Fri, 3 Nov 2023 at 15:41, Tushar Vyavahare
<tushar.vyavahare@intel.com> wrote:
>
> Fix test broken by shared umem test and framework enhancement commit.
>
> Correct the current implementation of pkt_stream_replace_half() by
> ensuring that nb_valid_entries is not set to half, as this is not true
> for all the tests.
>
> Create a new function called pkt_modify() that allows for packet
> modification to meet specific requirements while ensuring the accurate
> maintenance of the valid packet count to prevent inconsistencies in packet
> tracking.

Thanks for the fix, Tushar. While long, this gives the packet stream
modification functionality a better structure.

Acked-by: Magnus Karlsson <magnus.karlsson@intel.com>

> Fixes: 6d198a89c004 ("selftests/xsk: Add a test for shared umem feature")
> Reported-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> Signed-off-by: Tushar Vyavahare <tushar.vyavahare@intel.com>
> ---
>  tools/testing/selftests/bpf/xskxceiver.c | 71 ++++++++++++++++--------
>  1 file changed, 47 insertions(+), 24 deletions(-)
>
> diff --git a/tools/testing/selftests/bpf/xskxceiver.c b/tools/testing/selftests/bpf/xskxceiver.c
> index 591ca9637b23..f7d3a4a9013f 100644
> --- a/tools/testing/selftests/bpf/xskxceiver.c
> +++ b/tools/testing/selftests/bpf/xskxceiver.c
> @@ -634,16 +634,35 @@ static u32 pkt_nb_frags(u32 frame_size, struct pkt_stream *pkt_stream, struct pk
>         return nb_frags;
>  }
>
> -static void pkt_set(struct pkt_stream *pkt_stream, struct pkt *pkt, int offset, u32 len)
> +static bool pkt_valid(bool unaligned_mode, int offset, u32 len)
> +{
> +       if (len > MAX_ETH_JUMBO_SIZE || (!unaligned_mode && offset < 0))
> +               return false;
> +
> +       return true;
> +}
> +
> +static void pkt_set(struct pkt_stream *pkt_stream, struct xsk_umem_info *umem, struct pkt *pkt,
> +                   int offset, u32 len)
>  {
>         pkt->offset = offset;
>         pkt->len = len;
> -       if (len > MAX_ETH_JUMBO_SIZE) {
> -               pkt->valid = false;
> -       } else {
> -               pkt->valid = true;
> +
> +       pkt->valid = pkt_valid(umem->unaligned_mode, offset, len);
> +       if (pkt->valid)
>                 pkt_stream->nb_valid_entries++;
> -       }
> +}
> +
> +static void pkt_modify(struct pkt_stream *pkt_stream, struct xsk_umem_info *umem, struct pkt *pkt,
> +                      int offset, u32 len)
> +{
> +       bool mod_valid;
> +
> +       pkt->offset = offset;
> +       pkt->len = len;
> +       mod_valid  = pkt_valid(umem->unaligned_mode, offset, len);
> +       pkt_stream->nb_valid_entries += mod_valid - pkt->valid;
> +       pkt->valid = mod_valid;
>  }
>
>  static u32 pkt_get_buffer_len(struct xsk_umem_info *umem, u32 len)
> @@ -651,7 +670,8 @@ static u32 pkt_get_buffer_len(struct xsk_umem_info *umem, u32 len)
>         return ceil_u32(len, umem->frame_size) * umem->frame_size;
>  }
>
> -static struct pkt_stream *__pkt_stream_generate(u32 nb_pkts, u32 pkt_len, u32 nb_start, u32 nb_off)
> +static struct pkt_stream *__pkt_stream_generate(struct xsk_umem_info *umem, u32 nb_pkts,
> +                                               u32 pkt_len, u32 nb_start, u32 nb_off)
>  {
>         struct pkt_stream *pkt_stream;
>         u32 i;
> @@ -665,30 +685,31 @@ static struct pkt_stream *__pkt_stream_generate(u32 nb_pkts, u32 pkt_len, u32 nb
>         for (i = 0; i < nb_pkts; i++) {
>                 struct pkt *pkt = &pkt_stream->pkts[i];
>
> -               pkt_set(pkt_stream, pkt, 0, pkt_len);
> +               pkt_set(pkt_stream, umem, pkt, 0, pkt_len);
>                 pkt->pkt_nb = nb_start + i * nb_off;
>         }
>
>         return pkt_stream;
>  }
>
> -static struct pkt_stream *pkt_stream_generate(u32 nb_pkts, u32 pkt_len)
> +static struct pkt_stream *pkt_stream_generate(struct xsk_umem_info *umem, u32 nb_pkts, u32 pkt_len)
>  {
> -       return __pkt_stream_generate(nb_pkts, pkt_len, 0, 1);
> +       return __pkt_stream_generate(umem, nb_pkts, pkt_len, 0, 1);
>  }
>
> -static struct pkt_stream *pkt_stream_clone(struct pkt_stream *pkt_stream)
> +static struct pkt_stream *pkt_stream_clone(struct pkt_stream *pkt_stream,
> +                                          struct xsk_umem_info *umem)
>  {
> -       return pkt_stream_generate(pkt_stream->nb_pkts, pkt_stream->pkts[0].len);
> +       return pkt_stream_generate(umem, pkt_stream->nb_pkts, pkt_stream->pkts[0].len);
>  }
>
>  static void pkt_stream_replace(struct test_spec *test, u32 nb_pkts, u32 pkt_len)
>  {
>         struct pkt_stream *pkt_stream;
>
> -       pkt_stream = pkt_stream_generate(nb_pkts, pkt_len);
> +       pkt_stream = pkt_stream_generate(test->ifobj_rx->umem, nb_pkts, pkt_len);
>         test->ifobj_tx->xsk->pkt_stream = pkt_stream;
> -       pkt_stream = pkt_stream_generate(nb_pkts, pkt_len);
> +       pkt_stream = pkt_stream_generate(test->ifobj_tx->umem, nb_pkts, pkt_len);
>         test->ifobj_rx->xsk->pkt_stream = pkt_stream;
>  }
>
> @@ -698,12 +719,11 @@ static void __pkt_stream_replace_half(struct ifobject *ifobj, u32 pkt_len,
>         struct pkt_stream *pkt_stream;
>         u32 i;
>
> -       pkt_stream = pkt_stream_clone(ifobj->xsk->pkt_stream);
> +       pkt_stream = pkt_stream_clone(ifobj->xsk->pkt_stream, ifobj->umem);
>         for (i = 1; i < ifobj->xsk->pkt_stream->nb_pkts; i += 2)
> -               pkt_set(pkt_stream, &pkt_stream->pkts[i], offset, pkt_len);
> +               pkt_modify(pkt_stream, ifobj->umem, &pkt_stream->pkts[i], offset, pkt_len);
>
>         ifobj->xsk->pkt_stream = pkt_stream;
> -       pkt_stream->nb_valid_entries /= 2;
>  }
>
>  static void pkt_stream_replace_half(struct test_spec *test, u32 pkt_len, int offset)
> @@ -715,9 +735,10 @@ static void pkt_stream_replace_half(struct test_spec *test, u32 pkt_len, int off
>  static void pkt_stream_receive_half(struct test_spec *test)
>  {
>         struct pkt_stream *pkt_stream = test->ifobj_tx->xsk->pkt_stream;
> +       struct xsk_umem_info *umem = test->ifobj_rx->umem;
>         u32 i;
>
> -       test->ifobj_rx->xsk->pkt_stream = pkt_stream_generate(pkt_stream->nb_pkts,
> +       test->ifobj_rx->xsk->pkt_stream = pkt_stream_generate(umem, pkt_stream->nb_pkts,
>                                                               pkt_stream->pkts[0].len);
>         pkt_stream = test->ifobj_rx->xsk->pkt_stream;
>         for (i = 1; i < pkt_stream->nb_pkts; i += 2)
> @@ -733,12 +754,12 @@ static void pkt_stream_even_odd_sequence(struct test_spec *test)
>
>         for (i = 0; i < test->nb_sockets; i++) {
>                 pkt_stream = test->ifobj_tx->xsk_arr[i].pkt_stream;
> -               pkt_stream = __pkt_stream_generate(pkt_stream->nb_pkts / 2,
> +               pkt_stream = __pkt_stream_generate(test->ifobj_tx->umem, pkt_stream->nb_pkts / 2,
>                                                    pkt_stream->pkts[0].len, i, 2);
>                 test->ifobj_tx->xsk_arr[i].pkt_stream = pkt_stream;
>
>                 pkt_stream = test->ifobj_rx->xsk_arr[i].pkt_stream;
> -               pkt_stream = __pkt_stream_generate(pkt_stream->nb_pkts / 2,
> +               pkt_stream = __pkt_stream_generate(test->ifobj_rx->umem, pkt_stream->nb_pkts / 2,
>                                                    pkt_stream->pkts[0].len, i, 2);
>                 test->ifobj_rx->xsk_arr[i].pkt_stream = pkt_stream;
>         }
> @@ -1961,7 +1982,8 @@ static int testapp_stats_tx_invalid_descs(struct test_spec *test)
>  static int testapp_stats_rx_full(struct test_spec *test)
>  {
>         pkt_stream_replace(test, DEFAULT_UMEM_BUFFERS + DEFAULT_UMEM_BUFFERS / 2, MIN_PKT_SIZE);
> -       test->ifobj_rx->xsk->pkt_stream = pkt_stream_generate(DEFAULT_UMEM_BUFFERS, MIN_PKT_SIZE);
> +       test->ifobj_rx->xsk->pkt_stream = pkt_stream_generate(test->ifobj_rx->umem,
> +                                                             DEFAULT_UMEM_BUFFERS, MIN_PKT_SIZE);
>
>         test->ifobj_rx->xsk->rxqsize = DEFAULT_UMEM_BUFFERS;
>         test->ifobj_rx->release_rx = false;
> @@ -1972,7 +1994,8 @@ static int testapp_stats_rx_full(struct test_spec *test)
>  static int testapp_stats_fill_empty(struct test_spec *test)
>  {
>         pkt_stream_replace(test, DEFAULT_UMEM_BUFFERS + DEFAULT_UMEM_BUFFERS / 2, MIN_PKT_SIZE);
> -       test->ifobj_rx->xsk->pkt_stream = pkt_stream_generate(DEFAULT_UMEM_BUFFERS, MIN_PKT_SIZE);
> +       test->ifobj_rx->xsk->pkt_stream = pkt_stream_generate(test->ifobj_rx->umem,
> +                                                             DEFAULT_UMEM_BUFFERS, MIN_PKT_SIZE);
>
>         test->ifobj_rx->use_fill_ring = false;
>         test->ifobj_rx->validation_func = validate_fill_empty;
> @@ -2526,8 +2549,8 @@ int main(int argc, char **argv)
>         init_iface(ifobj_tx, worker_testapp_validate_tx);
>
>         test_spec_init(&test, ifobj_tx, ifobj_rx, 0, &tests[0]);
> -       tx_pkt_stream_default = pkt_stream_generate(DEFAULT_PKT_CNT, MIN_PKT_SIZE);
> -       rx_pkt_stream_default = pkt_stream_generate(DEFAULT_PKT_CNT, MIN_PKT_SIZE);
> +       tx_pkt_stream_default = pkt_stream_generate(ifobj_tx->umem, DEFAULT_PKT_CNT, MIN_PKT_SIZE);
> +       rx_pkt_stream_default = pkt_stream_generate(ifobj_rx->umem, DEFAULT_PKT_CNT, MIN_PKT_SIZE);
>         if (!tx_pkt_stream_default || !rx_pkt_stream_default)
>                 exit_with_error(ENOMEM);
>         test.tx_pkt_stream_default = tx_pkt_stream_default;
> --
> 2.34.1
>
>
Fijalkowski, Maciej Nov. 8, 2023, 2:30 p.m. UTC | #2
On Fri, Nov 03, 2023 at 02:29:36PM +0000, Tushar Vyavahare wrote:
> Fix test broken by shared umem test and framework enhancement commit.
> 
> Correct the current implementation of pkt_stream_replace_half() by
> ensuring that nb_valid_entries is not set to half, as this is not true
> for all the tests.

Please be more specific - so what is the expected value for
nb_valid_entries for the unaligned mode test then, if not half?

> 
> Create a new function called pkt_modify() that allows for packet
> modification to meet specific requirements while ensuring the accurate
> maintenance of the valid packet count to prevent inconsistencies in packet
> tracking.
> 
> Fixes: 6d198a89c004 ("selftests/xsk: Add a test for shared umem feature")
> Reported-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> Signed-off-by: Tushar Vyavahare <tushar.vyavahare@intel.com>
> ---
>  tools/testing/selftests/bpf/xskxceiver.c | 71 ++++++++++++++++--------
>  1 file changed, 47 insertions(+), 24 deletions(-)
> 
> diff --git a/tools/testing/selftests/bpf/xskxceiver.c b/tools/testing/selftests/bpf/xskxceiver.c
> index 591ca9637b23..f7d3a4a9013f 100644
> --- a/tools/testing/selftests/bpf/xskxceiver.c
> +++ b/tools/testing/selftests/bpf/xskxceiver.c
> @@ -634,16 +634,35 @@ static u32 pkt_nb_frags(u32 frame_size, struct pkt_stream *pkt_stream, struct pk
>  	return nb_frags;
>  }
>  
> -static void pkt_set(struct pkt_stream *pkt_stream, struct pkt *pkt, int offset, u32 len)
> +static bool pkt_valid(bool unaligned_mode, int offset, u32 len)

kinda confusing to have is_pkt_valid() and pkt_valid() functions...
maybe name this as set_pkt_valid() ? doesn't help much but anyways.

> +{
> +	if (len > MAX_ETH_JUMBO_SIZE || (!unaligned_mode && offset < 0))
> +		return false;
> +
> +	return true;
> +}
> +
> +static void pkt_set(struct pkt_stream *pkt_stream, struct xsk_umem_info *umem, struct pkt *pkt,
> +		    int offset, u32 len)

How about adding a bool unaligned to pkt_stream instead of passing whole
xsk_umem_info to pkt_set - wouldn't this make the diff smaller?

>  {
>  	pkt->offset = offset;
>  	pkt->len = len;
> -	if (len > MAX_ETH_JUMBO_SIZE) {
> -		pkt->valid = false;
> -	} else {
> -		pkt->valid = true;
> +
> +	pkt->valid = pkt_valid(umem->unaligned_mode, offset, len);
> +	if (pkt->valid)
>  		pkt_stream->nb_valid_entries++;
> -	}
> +}
> +
> +static void pkt_modify(struct pkt_stream *pkt_stream, struct xsk_umem_info *umem, struct pkt *pkt,
> +		       int offset, u32 len)
> +{
> +	bool mod_valid;
> +
> +	pkt->offset = offset;
> +	pkt->len = len;
> +	mod_valid  = pkt_valid(umem->unaligned_mode, offset, len);

double space

> +	pkt_stream->nb_valid_entries += mod_valid - pkt->valid;
> +	pkt->valid = mod_valid;
>  }
>  
>  static u32 pkt_get_buffer_len(struct xsk_umem_info *umem, u32 len)
> @@ -651,7 +670,8 @@ static u32 pkt_get_buffer_len(struct xsk_umem_info *umem, u32 len)
>  	return ceil_u32(len, umem->frame_size) * umem->frame_size;
>  }
>  
> -static struct pkt_stream *__pkt_stream_generate(u32 nb_pkts, u32 pkt_len, u32 nb_start, u32 nb_off)
> +static struct pkt_stream *__pkt_stream_generate(struct xsk_umem_info *umem, u32 nb_pkts,
> +						u32 pkt_len, u32 nb_start, u32 nb_off)
>  {
>  	struct pkt_stream *pkt_stream;
>  	u32 i;
> @@ -665,30 +685,31 @@ static struct pkt_stream *__pkt_stream_generate(u32 nb_pkts, u32 pkt_len, u32 nb
>  	for (i = 0; i < nb_pkts; i++) {
>  		struct pkt *pkt = &pkt_stream->pkts[i];
>  
> -		pkt_set(pkt_stream, pkt, 0, pkt_len);
> +		pkt_set(pkt_stream, umem, pkt, 0, pkt_len);
>  		pkt->pkt_nb = nb_start + i * nb_off;
>  	}
>  
>  	return pkt_stream;
>  }
>  
> -static struct pkt_stream *pkt_stream_generate(u32 nb_pkts, u32 pkt_len)
> +static struct pkt_stream *pkt_stream_generate(struct xsk_umem_info *umem, u32 nb_pkts, u32 pkt_len)
>  {
> -	return __pkt_stream_generate(nb_pkts, pkt_len, 0, 1);
> +	return __pkt_stream_generate(umem, nb_pkts, pkt_len, 0, 1);
>  }
>  
> -static struct pkt_stream *pkt_stream_clone(struct pkt_stream *pkt_stream)
> +static struct pkt_stream *pkt_stream_clone(struct pkt_stream *pkt_stream,
> +					   struct xsk_umem_info *umem)
>  {
> -	return pkt_stream_generate(pkt_stream->nb_pkts, pkt_stream->pkts[0].len);
> +	return pkt_stream_generate(umem, pkt_stream->nb_pkts, pkt_stream->pkts[0].len);
>  }
>  
>  static void pkt_stream_replace(struct test_spec *test, u32 nb_pkts, u32 pkt_len)
>  {
>  	struct pkt_stream *pkt_stream;
>  
> -	pkt_stream = pkt_stream_generate(nb_pkts, pkt_len);
> +	pkt_stream = pkt_stream_generate(test->ifobj_rx->umem, nb_pkts, pkt_len);
>  	test->ifobj_tx->xsk->pkt_stream = pkt_stream;
> -	pkt_stream = pkt_stream_generate(nb_pkts, pkt_len);
> +	pkt_stream = pkt_stream_generate(test->ifobj_tx->umem, nb_pkts, pkt_len);
>  	test->ifobj_rx->xsk->pkt_stream = pkt_stream;
>  }
>  
> @@ -698,12 +719,11 @@ static void __pkt_stream_replace_half(struct ifobject *ifobj, u32 pkt_len,
>  	struct pkt_stream *pkt_stream;
>  	u32 i;
>  
> -	pkt_stream = pkt_stream_clone(ifobj->xsk->pkt_stream);
> +	pkt_stream = pkt_stream_clone(ifobj->xsk->pkt_stream, ifobj->umem);
>  	for (i = 1; i < ifobj->xsk->pkt_stream->nb_pkts; i += 2)
> -		pkt_set(pkt_stream, &pkt_stream->pkts[i], offset, pkt_len);
> +		pkt_modify(pkt_stream, ifobj->umem, &pkt_stream->pkts[i], offset, pkt_len);
>  
>  	ifobj->xsk->pkt_stream = pkt_stream;
> -	pkt_stream->nb_valid_entries /= 2;
>  }
>  
>  static void pkt_stream_replace_half(struct test_spec *test, u32 pkt_len, int offset)
> @@ -715,9 +735,10 @@ static void pkt_stream_replace_half(struct test_spec *test, u32 pkt_len, int off
>  static void pkt_stream_receive_half(struct test_spec *test)
>  {
>  	struct pkt_stream *pkt_stream = test->ifobj_tx->xsk->pkt_stream;
> +	struct xsk_umem_info *umem = test->ifobj_rx->umem;
>  	u32 i;
>  
> -	test->ifobj_rx->xsk->pkt_stream = pkt_stream_generate(pkt_stream->nb_pkts,
> +	test->ifobj_rx->xsk->pkt_stream = pkt_stream_generate(umem, pkt_stream->nb_pkts,
>  							      pkt_stream->pkts[0].len);
>  	pkt_stream = test->ifobj_rx->xsk->pkt_stream;
>  	for (i = 1; i < pkt_stream->nb_pkts; i += 2)
> @@ -733,12 +754,12 @@ static void pkt_stream_even_odd_sequence(struct test_spec *test)
>  
>  	for (i = 0; i < test->nb_sockets; i++) {
>  		pkt_stream = test->ifobj_tx->xsk_arr[i].pkt_stream;
> -		pkt_stream = __pkt_stream_generate(pkt_stream->nb_pkts / 2,
> +		pkt_stream = __pkt_stream_generate(test->ifobj_tx->umem, pkt_stream->nb_pkts / 2,
>  						   pkt_stream->pkts[0].len, i, 2);
>  		test->ifobj_tx->xsk_arr[i].pkt_stream = pkt_stream;
>  
>  		pkt_stream = test->ifobj_rx->xsk_arr[i].pkt_stream;
> -		pkt_stream = __pkt_stream_generate(pkt_stream->nb_pkts / 2,
> +		pkt_stream = __pkt_stream_generate(test->ifobj_rx->umem, pkt_stream->nb_pkts / 2,
>  						   pkt_stream->pkts[0].len, i, 2);
>  		test->ifobj_rx->xsk_arr[i].pkt_stream = pkt_stream;
>  	}
> @@ -1961,7 +1982,8 @@ static int testapp_stats_tx_invalid_descs(struct test_spec *test)
>  static int testapp_stats_rx_full(struct test_spec *test)
>  {
>  	pkt_stream_replace(test, DEFAULT_UMEM_BUFFERS + DEFAULT_UMEM_BUFFERS / 2, MIN_PKT_SIZE);
> -	test->ifobj_rx->xsk->pkt_stream = pkt_stream_generate(DEFAULT_UMEM_BUFFERS, MIN_PKT_SIZE);
> +	test->ifobj_rx->xsk->pkt_stream = pkt_stream_generate(test->ifobj_rx->umem,
> +							      DEFAULT_UMEM_BUFFERS, MIN_PKT_SIZE);
>  
>  	test->ifobj_rx->xsk->rxqsize = DEFAULT_UMEM_BUFFERS;
>  	test->ifobj_rx->release_rx = false;
> @@ -1972,7 +1994,8 @@ static int testapp_stats_rx_full(struct test_spec *test)
>  static int testapp_stats_fill_empty(struct test_spec *test)
>  {
>  	pkt_stream_replace(test, DEFAULT_UMEM_BUFFERS + DEFAULT_UMEM_BUFFERS / 2, MIN_PKT_SIZE);
> -	test->ifobj_rx->xsk->pkt_stream = pkt_stream_generate(DEFAULT_UMEM_BUFFERS, MIN_PKT_SIZE);
> +	test->ifobj_rx->xsk->pkt_stream = pkt_stream_generate(test->ifobj_rx->umem,
> +							      DEFAULT_UMEM_BUFFERS, MIN_PKT_SIZE);
>  
>  	test->ifobj_rx->use_fill_ring = false;
>  	test->ifobj_rx->validation_func = validate_fill_empty;
> @@ -2526,8 +2549,8 @@ int main(int argc, char **argv)
>  	init_iface(ifobj_tx, worker_testapp_validate_tx);
>  
>  	test_spec_init(&test, ifobj_tx, ifobj_rx, 0, &tests[0]);
> -	tx_pkt_stream_default = pkt_stream_generate(DEFAULT_PKT_CNT, MIN_PKT_SIZE);
> -	rx_pkt_stream_default = pkt_stream_generate(DEFAULT_PKT_CNT, MIN_PKT_SIZE);
> +	tx_pkt_stream_default = pkt_stream_generate(ifobj_tx->umem, DEFAULT_PKT_CNT, MIN_PKT_SIZE);
> +	rx_pkt_stream_default = pkt_stream_generate(ifobj_rx->umem, DEFAULT_PKT_CNT, MIN_PKT_SIZE);
>  	if (!tx_pkt_stream_default || !rx_pkt_stream_default)
>  		exit_with_error(ENOMEM);
>  	test.tx_pkt_stream_default = tx_pkt_stream_default;
> -- 
> 2.34.1
>
Tushar Vyavahare Nov. 13, 2023, 6:42 a.m. UTC | #3
> -----Original Message-----
> From: Fijalkowski, Maciej <maciej.fijalkowski@intel.com>
> Sent: Wednesday, November 8, 2023 8:01 PM
> To: Vyavahare, Tushar <tushar.vyavahare@intel.com>
> Cc: bpf@vger.kernel.org; netdev@vger.kernel.org; bjorn@kernel.org; Karlsson,
> Magnus <magnus.karlsson@intel.com>; jonathan.lemon@gmail.com;
> davem@davemloft.net; kuba@kernel.org; pabeni@redhat.com;
> ast@kernel.org; daniel@iogearbox.net; Sarkar, Tirthendu
> <tirthendu.sarkar@intel.com>
> Subject: Re: [PATCH bpf-next] selftests/xsk: fix for SEND_RECEIVE_UNALIGNED
> test.
> 
> On Fri, Nov 03, 2023 at 02:29:36PM +0000, Tushar Vyavahare wrote:
> > Fix test broken by shared umem test and framework enhancement commit.
> >
> > Correct the current implementation of pkt_stream_replace_half() by
> > ensuring that nb_valid_entries is not set to half, as this is not
> > true for all the tests.
> 
> Please be more specific - so what is the expected value for
> nb_valid_entries for the unaligned mode test then, if not half?
> 

The expected value for nb_valid_entries for the SEND_RECEIVE_UNALIGNED
test would be equal to the total number of packets sent.

> >
> > Create a new function called pkt_modify() that allows for packet
> > modification to meet specific requirements while ensuring the accurate
> > maintenance of the valid packet count to prevent inconsistencies in
> > packet tracking.
> >
> > Fixes: 6d198a89c004 ("selftests/xsk: Add a test for shared umem
> > feature")
> > Reported-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> > Signed-off-by: Tushar Vyavahare <tushar.vyavahare@intel.com>
> > ---
> >  tools/testing/selftests/bpf/xskxceiver.c | 71
> > ++++++++++++++++--------
> >  1 file changed, 47 insertions(+), 24 deletions(-)
> >
> > diff --git a/tools/testing/selftests/bpf/xskxceiver.c
> > b/tools/testing/selftests/bpf/xskxceiver.c
> > index 591ca9637b23..f7d3a4a9013f 100644
> > --- a/tools/testing/selftests/bpf/xskxceiver.c
> > +++ b/tools/testing/selftests/bpf/xskxceiver.c
> > @@ -634,16 +634,35 @@ static u32 pkt_nb_frags(u32 frame_size, struct pkt_stream *pkt_stream, struct pk
> >  	return nb_frags;
> >  }
> >
> > -static void pkt_set(struct pkt_stream *pkt_stream, struct pkt *pkt, int offset, u32 len)
> > +static bool pkt_valid(bool unaligned_mode, int offset, u32 len)
> 
> kinda confusing to have is_pkt_valid() and pkt_valid() functions...
> maybe name this as set_pkt_valid() ? doesn't help much but anyways.
> 

will do it.

> > +{
> > +	if (len > MAX_ETH_JUMBO_SIZE || (!unaligned_mode && offset < 0))
> > +		return false;
> > +
> > +	return true;
> > +}
> > +
> > +static void pkt_set(struct pkt_stream *pkt_stream, struct xsk_umem_info *umem, struct pkt *pkt,
> > +		    int offset, u32 len)
> 
> How about adding a bool unaligned to pkt_stream instead of passing whole
> xsk_umem_info to pkt_set - wouldn't this make the diff smaller?
> 

We could do it that way, but then the diff would be larger: wherever we
use "struct pkt_stream *pkt_stream," we would have to set this bool flag
again, for example in __pkt_stream_replace_half(),
__pkt_stream_generate_custom(), and a few more places. I believe we
should stick with the current approach.

> >  {
> >  	pkt->offset = offset;
> >  	pkt->len = len;
> > -	if (len > MAX_ETH_JUMBO_SIZE) {
> > -		pkt->valid = false;
> > -	} else {
> > -		pkt->valid = true;
> > +
> > +	pkt->valid = pkt_valid(umem->unaligned_mode, offset, len);
> > +	if (pkt->valid)
> >  		pkt_stream->nb_valid_entries++;
> > -	}
> > +}
> > +
> > +static void pkt_modify(struct pkt_stream *pkt_stream, struct xsk_umem_info *umem, struct pkt *pkt,
> > +		       int offset, u32 len)
> > +{
> > +	bool mod_valid;
> > +
> > +	pkt->offset = offset;
> > +	pkt->len = len;
> > +	mod_valid  = pkt_valid(umem->unaligned_mode, offset, len);
> 
> double space
> 

will do it.

> MIN_PKT_SIZE);
> > +	tx_pkt_stream_default = pkt_stream_generate(ifobj_tx->umem,
> DEFAULT_PKT_CNT, MIN_PKT_SIZE);
> > +	rx_pkt_stream_default = pkt_stream_generate(ifobj_rx->umem,
> > +DEFAULT_PKT_CNT, MIN_PKT_SIZE);
> >  	if (!tx_pkt_stream_default || !rx_pkt_stream_default)
> >  		exit_with_error(ENOMEM);
> >  	test.tx_pkt_stream_default = tx_pkt_stream_default;
> > --
> > 2.34.1
> >
Fijalkowski, Maciej Nov. 13, 2023, 11:20 a.m. UTC | #4
On Mon, Nov 13, 2023 at 07:42:09AM +0100, Vyavahare, Tushar wrote:
> 
> 
> > -----Original Message-----
> > From: Fijalkowski, Maciej <maciej.fijalkowski@intel.com>
> > Sent: Wednesday, November 8, 2023 8:01 PM
> > To: Vyavahare, Tushar <tushar.vyavahare@intel.com>
> > Cc: bpf@vger.kernel.org; netdev@vger.kernel.org; bjorn@kernel.org; Karlsson,
> > Magnus <magnus.karlsson@intel.com>; jonathan.lemon@gmail.com;
> > davem@davemloft.net; kuba@kernel.org; pabeni@redhat.com;
> > ast@kernel.org; daniel@iogearbox.net; Sarkar, Tirthendu
> > <tirthendu.sarkar@intel.com>
> > Subject: Re: [PATCH bpf-next] selftests/xsk: fix for SEND_RECEIVE_UNALIGNED
> > test.
> > 
> > On Fri, Nov 03, 2023 at 02:29:36PM +0000, Tushar Vyavahare wrote:
> > > Fix test broken by shared umem test and framework enhancement commit.
> > >
> > > Correct the current implementation of pkt_stream_replace_half() by
> > > ensuring that nb_valid_entries are not set to half, as this is not
> > > true for all the tests.
> > 
> > Please be more specific - what is the expected value of nb_valid_entries for
> > the unaligned mode test then, if not the half?
> > 
> 
> The expected value for nb_valid_entries for the SEND_RECEIVE_UNALIGNED
> test would be equal to the total number of packets sent.
> 
> > >
> > > Create a new function called pkt_modify() that allows for packet
> > > modification to meet specific requirements while ensuring the accurate
> > > maintenance of the valid packet count to prevent inconsistencies in
> > > packet tracking.
> > >
> > > Fixes: 6d198a89c004 ("selftests/xsk: Add a test for shared umem
> > > feature")
> > > Reported-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> > > Signed-off-by: Tushar Vyavahare <tushar.vyavahare@intel.com>
> > > ---
> > >  tools/testing/selftests/bpf/xskxceiver.c | 71
> > > ++++++++++++++++--------
> > >  1 file changed, 47 insertions(+), 24 deletions(-)
> > >
> > > diff --git a/tools/testing/selftests/bpf/xskxceiver.c
> > > b/tools/testing/selftests/bpf/xskxceiver.c
> > > index 591ca9637b23..f7d3a4a9013f 100644
> > > --- a/tools/testing/selftests/bpf/xskxceiver.c
> > > +++ b/tools/testing/selftests/bpf/xskxceiver.c
> > > @@ -634,16 +634,35 @@ static u32 pkt_nb_frags(u32 frame_size, struct
> > pkt_stream *pkt_stream, struct pk
> > >  	return nb_frags;
> > >  }
> > >
> > > -static void pkt_set(struct pkt_stream *pkt_stream, struct pkt *pkt,
> > > int offset, u32 len)
> > > +static bool pkt_valid(bool unaligned_mode, int offset, u32 len)
> > 
> > kinda confusing to have is_pkt_valid() and pkt_valid() functions...
> > maybe name this as set_pkt_valid() ? doesn't help much but anyways.
> > 
> 
> will do it.
> 
> > > +{
> > > +	if (len > MAX_ETH_JUMBO_SIZE || (!unaligned_mode && offset < 0))
> > > +		return false;
> > > +
> > > +	return true;
> > > +}
> > > +
> > > +static void pkt_set(struct pkt_stream *pkt_stream, struct xsk_umem_info
> > *umem, struct pkt *pkt,
> > > +		    int offset, u32 len)
> > 
> > How about adding a bool unaligned to pkt_stream instead of passing whole
> > xsk_umem_info to pkt_set - wouldn't this make the diff smaller?
> > 
> 
> We can also do it this way, but in that case the diff would be larger:
> wherever we use "struct pkt_stream *pkt_stream" we would have to set
> this bool flag again, for example in __pkt_stream_replace_half(),
> __pkt_stream_generate_custom(), and a few more places. I believe we
> should stick with the current approach.

We have default pkt streams that are restored in run_pkt_test(), so I
believe that setting this unaligned flag could be scoped to each test_func
that is related to the unaligned mode tests?

> 
> > >  {
> > >  	pkt->offset = offset;
> > >  	pkt->len = len;
> > > -	if (len > MAX_ETH_JUMBO_SIZE) {
> > > -		pkt->valid = false;
> > > -	} else {
> > > -		pkt->valid = true;
> > > +
> > > +	pkt->valid = pkt_valid(umem->unaligned_mode, offset, len);
> > > +	if (pkt->valid)
> > >  		pkt_stream->nb_valid_entries++;
> > > -	}
> > > +}
> > > +
> > > +static void pkt_modify(struct pkt_stream *pkt_stream, struct
> > xsk_umem_info *umem, struct pkt *pkt,
> > > +		       int offset, u32 len)
> > > +{
> > > +	bool mod_valid;
> > > +
> > > +	pkt->offset = offset;
> > > +	pkt->len = len;
> > > +	mod_valid  = pkt_valid(umem->unaligned_mode, offset, len);
> > 
> > double space
> > 
> 
> will do it.
> 
> > > +	pkt_stream->nb_valid_entries += mod_valid - pkt->valid;
> > > +	pkt->valid = mod_valid;
> > >  }
Fijalkowski, Maciej Nov. 14, 2023, 5:03 p.m. UTC | #5
On Mon, Nov 13, 2023 at 12:20:46PM +0100, Maciej Fijalkowski wrote:
> On Mon, Nov 13, 2023 at 07:42:09AM +0100, Vyavahare, Tushar wrote:
> > > How about adding a bool unaligned to pkt_stream instead of passing whole
> > > xsk_umem_info to pkt_set - wouldn't this make the diff smaller?
> > > 
> > 
> > We can also do it this way, but in that case the diff would be larger:
> > wherever we use "struct pkt_stream *pkt_stream" we would have to set
> > this bool flag again, for example in __pkt_stream_replace_half(),
> > __pkt_stream_generate_custom(), and a few more places. I believe we
> > should stick with the current approach.
> 
> We have default pkt streams that are restored in run_pkt_test(), so I
> believe that setting this unaligned flag could be scoped to each test_func
> that is related to the unaligned mode tests?

Ok, now I see that we are sort of losing context when generating pkt
streams; that's a bit unfortunate in this case. Maybe we can think of some
refactor later on.

Patch

diff --git a/tools/testing/selftests/bpf/xskxceiver.c b/tools/testing/selftests/bpf/xskxceiver.c
index 591ca9637b23..f7d3a4a9013f 100644
--- a/tools/testing/selftests/bpf/xskxceiver.c
+++ b/tools/testing/selftests/bpf/xskxceiver.c
@@ -634,16 +634,35 @@  static u32 pkt_nb_frags(u32 frame_size, struct pkt_stream *pkt_stream, struct pk
 	return nb_frags;
 }
 
-static void pkt_set(struct pkt_stream *pkt_stream, struct pkt *pkt, int offset, u32 len)
+static bool pkt_valid(bool unaligned_mode, int offset, u32 len)
+{
+	if (len > MAX_ETH_JUMBO_SIZE || (!unaligned_mode && offset < 0))
+		return false;
+
+	return true;
+}
+
+static void pkt_set(struct pkt_stream *pkt_stream, struct xsk_umem_info *umem, struct pkt *pkt,
+		    int offset, u32 len)
 {
 	pkt->offset = offset;
 	pkt->len = len;
-	if (len > MAX_ETH_JUMBO_SIZE) {
-		pkt->valid = false;
-	} else {
-		pkt->valid = true;
+
+	pkt->valid = pkt_valid(umem->unaligned_mode, offset, len);
+	if (pkt->valid)
 		pkt_stream->nb_valid_entries++;
-	}
+}
+
+static void pkt_modify(struct pkt_stream *pkt_stream, struct xsk_umem_info *umem, struct pkt *pkt,
+		       int offset, u32 len)
+{
+	bool mod_valid;
+
+	pkt->offset = offset;
+	pkt->len = len;
+	mod_valid  = pkt_valid(umem->unaligned_mode, offset, len);
+	pkt_stream->nb_valid_entries += mod_valid - pkt->valid;
+	pkt->valid = mod_valid;
 }
 
 static u32 pkt_get_buffer_len(struct xsk_umem_info *umem, u32 len)
@@ -651,7 +670,8 @@  static u32 pkt_get_buffer_len(struct xsk_umem_info *umem, u32 len)
 	return ceil_u32(len, umem->frame_size) * umem->frame_size;
 }
 
-static struct pkt_stream *__pkt_stream_generate(u32 nb_pkts, u32 pkt_len, u32 nb_start, u32 nb_off)
+static struct pkt_stream *__pkt_stream_generate(struct xsk_umem_info *umem, u32 nb_pkts,
+						u32 pkt_len, u32 nb_start, u32 nb_off)
 {
 	struct pkt_stream *pkt_stream;
 	u32 i;
@@ -665,30 +685,31 @@  static struct pkt_stream *__pkt_stream_generate(u32 nb_pkts, u32 pkt_len, u32 nb
 	for (i = 0; i < nb_pkts; i++) {
 		struct pkt *pkt = &pkt_stream->pkts[i];
 
-		pkt_set(pkt_stream, pkt, 0, pkt_len);
+		pkt_set(pkt_stream, umem, pkt, 0, pkt_len);
 		pkt->pkt_nb = nb_start + i * nb_off;
 	}
 
 	return pkt_stream;
 }
 
-static struct pkt_stream *pkt_stream_generate(u32 nb_pkts, u32 pkt_len)
+static struct pkt_stream *pkt_stream_generate(struct xsk_umem_info *umem, u32 nb_pkts, u32 pkt_len)
 {
-	return __pkt_stream_generate(nb_pkts, pkt_len, 0, 1);
+	return __pkt_stream_generate(umem, nb_pkts, pkt_len, 0, 1);
 }
 
-static struct pkt_stream *pkt_stream_clone(struct pkt_stream *pkt_stream)
+static struct pkt_stream *pkt_stream_clone(struct pkt_stream *pkt_stream,
+					   struct xsk_umem_info *umem)
 {
-	return pkt_stream_generate(pkt_stream->nb_pkts, pkt_stream->pkts[0].len);
+	return pkt_stream_generate(umem, pkt_stream->nb_pkts, pkt_stream->pkts[0].len);
 }
 
 static void pkt_stream_replace(struct test_spec *test, u32 nb_pkts, u32 pkt_len)
 {
 	struct pkt_stream *pkt_stream;
 
-	pkt_stream = pkt_stream_generate(nb_pkts, pkt_len);
+	pkt_stream = pkt_stream_generate(test->ifobj_rx->umem, nb_pkts, pkt_len);
 	test->ifobj_tx->xsk->pkt_stream = pkt_stream;
-	pkt_stream = pkt_stream_generate(nb_pkts, pkt_len);
+	pkt_stream = pkt_stream_generate(test->ifobj_tx->umem, nb_pkts, pkt_len);
 	test->ifobj_rx->xsk->pkt_stream = pkt_stream;
 }
 
@@ -698,12 +719,11 @@  static void __pkt_stream_replace_half(struct ifobject *ifobj, u32 pkt_len,
 	struct pkt_stream *pkt_stream;
 	u32 i;
 
-	pkt_stream = pkt_stream_clone(ifobj->xsk->pkt_stream);
+	pkt_stream = pkt_stream_clone(ifobj->xsk->pkt_stream, ifobj->umem);
 	for (i = 1; i < ifobj->xsk->pkt_stream->nb_pkts; i += 2)
-		pkt_set(pkt_stream, &pkt_stream->pkts[i], offset, pkt_len);
+		pkt_modify(pkt_stream, ifobj->umem, &pkt_stream->pkts[i], offset, pkt_len);
 
 	ifobj->xsk->pkt_stream = pkt_stream;
-	pkt_stream->nb_valid_entries /= 2;
 }
 
 static void pkt_stream_replace_half(struct test_spec *test, u32 pkt_len, int offset)
@@ -715,9 +735,10 @@  static void pkt_stream_replace_half(struct test_spec *test, u32 pkt_len, int off
 static void pkt_stream_receive_half(struct test_spec *test)
 {
 	struct pkt_stream *pkt_stream = test->ifobj_tx->xsk->pkt_stream;
+	struct xsk_umem_info *umem = test->ifobj_rx->umem;
 	u32 i;
 
-	test->ifobj_rx->xsk->pkt_stream = pkt_stream_generate(pkt_stream->nb_pkts,
+	test->ifobj_rx->xsk->pkt_stream = pkt_stream_generate(umem, pkt_stream->nb_pkts,
 							      pkt_stream->pkts[0].len);
 	pkt_stream = test->ifobj_rx->xsk->pkt_stream;
 	for (i = 1; i < pkt_stream->nb_pkts; i += 2)
@@ -733,12 +754,12 @@  static void pkt_stream_even_odd_sequence(struct test_spec *test)
 
 	for (i = 0; i < test->nb_sockets; i++) {
 		pkt_stream = test->ifobj_tx->xsk_arr[i].pkt_stream;
-		pkt_stream = __pkt_stream_generate(pkt_stream->nb_pkts / 2,
+		pkt_stream = __pkt_stream_generate(test->ifobj_tx->umem, pkt_stream->nb_pkts / 2,
 						   pkt_stream->pkts[0].len, i, 2);
 		test->ifobj_tx->xsk_arr[i].pkt_stream = pkt_stream;
 
 		pkt_stream = test->ifobj_rx->xsk_arr[i].pkt_stream;
-		pkt_stream = __pkt_stream_generate(pkt_stream->nb_pkts / 2,
+		pkt_stream = __pkt_stream_generate(test->ifobj_rx->umem, pkt_stream->nb_pkts / 2,
 						   pkt_stream->pkts[0].len, i, 2);
 		test->ifobj_rx->xsk_arr[i].pkt_stream = pkt_stream;
 	}
@@ -1961,7 +1982,8 @@  static int testapp_stats_tx_invalid_descs(struct test_spec *test)
 static int testapp_stats_rx_full(struct test_spec *test)
 {
 	pkt_stream_replace(test, DEFAULT_UMEM_BUFFERS + DEFAULT_UMEM_BUFFERS / 2, MIN_PKT_SIZE);
-	test->ifobj_rx->xsk->pkt_stream = pkt_stream_generate(DEFAULT_UMEM_BUFFERS, MIN_PKT_SIZE);
+	test->ifobj_rx->xsk->pkt_stream = pkt_stream_generate(test->ifobj_rx->umem,
+							      DEFAULT_UMEM_BUFFERS, MIN_PKT_SIZE);
 
 	test->ifobj_rx->xsk->rxqsize = DEFAULT_UMEM_BUFFERS;
 	test->ifobj_rx->release_rx = false;
@@ -1972,7 +1994,8 @@  static int testapp_stats_rx_full(struct test_spec *test)
 static int testapp_stats_fill_empty(struct test_spec *test)
 {
 	pkt_stream_replace(test, DEFAULT_UMEM_BUFFERS + DEFAULT_UMEM_BUFFERS / 2, MIN_PKT_SIZE);
-	test->ifobj_rx->xsk->pkt_stream = pkt_stream_generate(DEFAULT_UMEM_BUFFERS, MIN_PKT_SIZE);
+	test->ifobj_rx->xsk->pkt_stream = pkt_stream_generate(test->ifobj_rx->umem,
+							      DEFAULT_UMEM_BUFFERS, MIN_PKT_SIZE);
 
 	test->ifobj_rx->use_fill_ring = false;
 	test->ifobj_rx->validation_func = validate_fill_empty;
@@ -2526,8 +2549,8 @@  int main(int argc, char **argv)
 	init_iface(ifobj_tx, worker_testapp_validate_tx);
 
 	test_spec_init(&test, ifobj_tx, ifobj_rx, 0, &tests[0]);
-	tx_pkt_stream_default = pkt_stream_generate(DEFAULT_PKT_CNT, MIN_PKT_SIZE);
-	rx_pkt_stream_default = pkt_stream_generate(DEFAULT_PKT_CNT, MIN_PKT_SIZE);
+	tx_pkt_stream_default = pkt_stream_generate(ifobj_tx->umem, DEFAULT_PKT_CNT, MIN_PKT_SIZE);
+	rx_pkt_stream_default = pkt_stream_generate(ifobj_rx->umem, DEFAULT_PKT_CNT, MIN_PKT_SIZE);
 	if (!tx_pkt_stream_default || !rx_pkt_stream_default)
 		exit_with_error(ENOMEM);
 	test.tx_pkt_stream_default = tx_pkt_stream_default;