[bpf-next,v7,08/18] ice: Support XDP hints in AF_XDP ZC mode

Message ID 20231115175301.534113-9-larysa.zaremba@intel.com (mailing list archive)
State Changes Requested
Delegated to: BPF
Series XDP metadata via kfuncs for ice + VLAN hint

Checks

Context Check Description
bpf/vmtest-bpf-next-PR success PR summary
bpf/vmtest-bpf-next-VM_Test-4 success Logs for aarch64-gcc / test (test_maps, false, 360) / test_maps on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-7 success Logs for aarch64-gcc / test (test_verifier, false, 360) / test_verifier on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-6 success Logs for aarch64-gcc / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-9 success Logs for s390x-gcc / build / build for s390x with gcc
bpf/vmtest-bpf-next-VM_Test-14 success Logs for s390x-gcc / veristat
bpf/vmtest-bpf-next-VM_Test-15 success Logs for set-matrix
bpf/vmtest-bpf-next-VM_Test-16 success Logs for x86_64-gcc / build / build for x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-17 success Logs for x86_64-gcc / test (test_maps, false, 360) / test_maps on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-18 success Logs for x86_64-gcc / test (test_progs, false, 360) / test_progs on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-20 success Logs for x86_64-gcc / test (test_progs_no_alu32_parallel, true, 30) / test_progs_no_alu32_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-21 success Logs for x86_64-gcc / test (test_progs_parallel, true, 30) / test_progs_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-22 success Logs for x86_64-gcc / test (test_verifier, false, 360) / test_verifier on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-23 success Logs for x86_64-gcc / veristat / veristat on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-25 success Logs for x86_64-llvm-16 / test (test_maps, false, 360) / test_maps on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-26 success Logs for x86_64-llvm-16 / test (test_progs, false, 360) / test_progs on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-27 success Logs for x86_64-llvm-16 / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-28 success Logs for x86_64-llvm-16 / test (test_verifier, false, 360) / test_verifier on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-29 success Logs for x86_64-llvm-16 / veristat
netdev/series_format fail Series longer than 15 patches (and no cover letter)
netdev/tree_selection success Clearly marked for bpf-next
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit fail Errors and warnings before: 1134 this patch: 1135
netdev/cc_maintainers warning 5 maintainers not CCed: anthony.l.nguyen@intel.com jesse.brandeburg@intel.com pabeni@redhat.com edumazet@google.com intel-wired-lan@lists.osuosl.org
netdev/build_clang fail Errors and warnings before: 1161 this patch: 1162
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn fail Errors and warnings before: 1161 this patch: 1162
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 83 lines checked
netdev/build_clang_rust success No Rust files in patch. Skipping build
netdev/kdoc fail Errors and warnings before: 0 this patch: 1
netdev/source_inline success Was 0 now: 0
bpf/vmtest-bpf-next-VM_Test-12 success Logs for s390x-gcc / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-13 success Logs for s390x-gcc / test (test_verifier, false, 360) / test_verifier on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-1 success Logs for ShellCheck
bpf/vmtest-bpf-next-VM_Test-0 success Logs for Lint
bpf/vmtest-bpf-next-VM_Test-2 success Logs for Validate matrix.py
bpf/vmtest-bpf-next-VM_Test-5 success Logs for set-matrix
bpf/vmtest-bpf-next-VM_Test-3 success Logs for aarch64-gcc / build / build for aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-8 success Logs for aarch64-gcc / veristat
bpf/vmtest-bpf-next-VM_Test-10 success Logs for set-matrix
bpf/vmtest-bpf-next-VM_Test-11 success Logs for x86_64-gcc / build / build for x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-19 success Logs for x86_64-llvm-16 / build / build for x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-24 success Logs for x86_64-llvm-16 / veristat

Commit Message

Larysa Zaremba Nov. 15, 2023, 5:52 p.m. UTC
In AF_XDP ZC, the xdp_buff is not stored on the ring; instead, it is
provided by xsk_buff_pool.
Space for metadata sources right after such buffers was already reserved
in commit 94ecc5ca4dbf ("xsk: Add cb area to struct xdp_buff_xsk").

Some things (such as the pointer to the packet context) do not change on a
per-packet basis, so they can be set at the same time as the RX queue info.
On the other hand, the RX descriptor is unique for each packet, but it is
already known when setting DMA addresses. This minimizes the performance
impact of hints on regular packet processing.

Update AF_XDP ZC packet processing to support XDP hints.

Co-developed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Larysa Zaremba <larysa.zaremba@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_base.c | 13 +++++++++++++
 drivers/net/ethernet/intel/ice/ice_xsk.c  | 17 +++++++++++------
 2 files changed, 24 insertions(+), 6 deletions(-)
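
For readers of the archive, a rough sketch of the layout this patch relies on
(struct names as introduced by earlier patches in this series; simplified,
not the exact definitions):

/* Sketch only. For AF_XDP ZC, every buffer handed to the driver is a
 * struct xdp_buff_xsk: a generic xdp_buff followed by a driver-private
 * cb[] area (reserved in commit 94ecc5ca4dbf). The ice driver overlays
 * its own view on that memory:
 */
struct ice_xdp_buff {
	struct xdp_buff xdp_buff;			/* generic part */
	const union ice_32b_rx_flex_desc *eop_desc;	/* per packet */
	const struct ice_pkt_ctx *pkt_ctx;		/* per queue */
};

/* cb[0] corresponds to the first byte after the embedded xdp_buff, so an
 * offset within ice_xdp_buff has to be converted into a cb[]-relative
 * offset before handing it to xsk_pool_fill_cb():
 *
 *	desc.off = offsetof(struct ice_xdp_buff, pkt_ctx)
 *		 - sizeof(struct xdp_buff);
 *
 * The per-queue pointer (&ring->pkt_ctx) is copied into cb[] at queue
 * configuration time; the per-packet descriptor (eop_desc) is set in
 * ice_fill_rx_descs() while the DMA addresses are written.
 */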

Comments

Larysa Zaremba Nov. 16, 2023, 3:01 a.m. UTC | #1
On Thu, Nov 16, 2023 at 01:49:36PM +0100, Maciej Fijalkowski wrote:
> On Wed, Nov 15, 2023 at 06:52:50PM +0100, Larysa Zaremba wrote:
> > In AF_XDP ZC, xdp_buff is not stored on ring,
> > instead it is provided by xsk_buff_pool.
> > Space for metadata sources right after such buffers was already reserved
> > in commit 94ecc5ca4dbf ("xsk: Add cb area to struct xdp_buff_xsk").
> > 
> > Some things (such as pointer to packet context) do not change on a
> > per-packet basis, so they can be set at the same time as RX queue info.
> > On the other hand, RX descriptor is unique for each packet, but is already
> > known when setting DMA addresses. This minimizes performance impact of
> > hints on regular packet processing.
> > 
> > Update AF_XDP ZC packet processing to support XDP hints.
> > 
> > Co-developed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> > Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> > Signed-off-by: Larysa Zaremba <larysa.zaremba@intel.com>
> > ---
> >  drivers/net/ethernet/intel/ice/ice_base.c | 13 +++++++++++++
> >  drivers/net/ethernet/intel/ice/ice_xsk.c  | 17 +++++++++++------
> >  2 files changed, 24 insertions(+), 6 deletions(-)
> > 
> > diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
> > index 2d83f3c029e7..d3396c1c87a9 100644
> > --- a/drivers/net/ethernet/intel/ice/ice_base.c
> > +++ b/drivers/net/ethernet/intel/ice/ice_base.c
> > @@ -519,6 +519,18 @@ static int ice_setup_rx_ctx(struct ice_rx_ring *ring)
> >  	return 0;
> >  }
> >  
> > +static void ice_xsk_pool_fill_cb(struct ice_rx_ring *ring)
> > +{
> > +	void *ctx_ptr = &ring->pkt_ctx;
> > +	struct xsk_cb_desc desc = {};
> > +
> > +	desc.src = &ctx_ptr;
> > +	desc.off = offsetof(struct ice_xdp_buff, pkt_ctx) -
> > +		   sizeof(struct xdp_buff);
> 
> Took me a while to figure out this offset calculation:D
>

Do you have a suggestion for how to make it easier to understand?
 
> > +	desc.bytes = sizeof(ctx_ptr);
> > +	xsk_pool_fill_cb(ring->xsk_pool, &desc);
> > +}
> > +
> >  /**
> >   * ice_vsi_cfg_rxq - Configure an Rx queue
> >   * @ring: the ring being configured
> > @@ -553,6 +565,7 @@ int ice_vsi_cfg_rxq(struct ice_rx_ring *ring)
> >  			if (err)
> >  				return err;
> >  			xsk_pool_set_rxq_info(ring->xsk_pool, &ring->xdp_rxq);
> 
> A good place for XSK_CHECK_PRIV_TYPE() ?

Seems so, I had not noticed that it was missing :(
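
Untested, but it should be a one-liner next to xsk_pool_set_rxq_info(),
roughly:

	/* Sketch, not compile-tested: build-time check that struct
	 * ice_xdp_buff still fits into the cb[] area of struct
	 * xdp_buff_xsk.
	 */
	XSK_CHECK_PRIV_TYPE(struct ice_xdp_buff);
	xsk_pool_set_rxq_info(ring->xsk_pool, &ring->xdp_rxq);
	ice_xsk_pool_fill_cb(ring);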

> 
> > +			ice_xsk_pool_fill_cb(ring);
> >  
> >  			dev_info(dev, "Registered XDP mem model MEM_TYPE_XSK_BUFF_POOL on Rx ring %d\n",
> >  				 ring->q_index);
> > diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
> > index 906e383e864a..a690e34ea8ae 100644
> > --- a/drivers/net/ethernet/intel/ice/ice_xsk.c
> > +++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
> > @@ -433,7 +433,7 @@ int ice_xsk_pool_setup(struct ice_vsi *vsi, struct xsk_buff_pool *pool, u16 qid)
> >  
> >  /**
> >   * ice_fill_rx_descs - pick buffers from XSK buffer pool and use it
> > - * @pool: XSK Buffer pool to pull the buffers from
> > + * @rx_ring: rx ring
> >   * @xdp: SW ring of xdp_buff that will hold the buffers
> >   * @rx_desc: Pointer to Rx descriptors that will be filled
> >   * @count: The number of buffers to allocate
> > @@ -445,19 +445,24 @@ int ice_xsk_pool_setup(struct ice_vsi *vsi, struct xsk_buff_pool *pool, u16 qid)
> >   *
> >   * Returns the amount of allocated Rx descriptors
> >   */
> > -static u16 ice_fill_rx_descs(struct xsk_buff_pool *pool, struct xdp_buff **xdp,
> > +static u16 ice_fill_rx_descs(struct ice_rx_ring *rx_ring, struct xdp_buff **xdp,
> 
> Might be lack of caffeine on my side, but I don't see a reason for passing
> rx_ring down to ice_fill_rx_descs() ?
> 
> I see I introduced this in the code example previously but it must have
> been some leftover from previous hacking...
> 
> Help!

It is a lack of caffeine on both sides. I had missed the fact that, indeed,
we do not need rx_ring here.
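
A sketch of how the next revision could go back to taking the pool directly
(not tested), keeping only the new per-packet descriptor assignment:

static u16 ice_fill_rx_descs(struct xsk_buff_pool *pool, struct xdp_buff **xdp,
			     union ice_32b_rx_flex_desc *rx_desc, u16 count)
{
	dma_addr_t dma;
	u16 buffs;
	int i;

	buffs = xsk_buff_alloc_batch(pool, xdp, count);
	for (i = 0; i < buffs; i++) {
		dma = xsk_buff_xdp_get_dma(*xdp);
		rx_desc->read.pkt_addr = cpu_to_le64(dma);
		rx_desc->wb.status_error0 = 0;

		/* Put private info that changes on a per-packet basis
		 * into xdp_buff_xsk->cb.
		 */
		ice_xdp_meta_set_desc(*xdp, rx_desc);

		rx_desc++;
		xdp++;
	}

	return buffs;
}

Callers would then keep passing rx_ring->xsk_pool as before.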

> 
> >  			     union ice_32b_rx_flex_desc *rx_desc, u16 count)
> >  {
> >  	dma_addr_t dma;
> >  	u16 buffs;
> >  	int i;
> >  
> > -	buffs = xsk_buff_alloc_batch(pool, xdp, count);
> > +	buffs = xsk_buff_alloc_batch(rx_ring->xsk_pool, xdp, count);
> >  	for (i = 0; i < buffs; i++) {
> >  		dma = xsk_buff_xdp_get_dma(*xdp);
> >  		rx_desc->read.pkt_addr = cpu_to_le64(dma);
> >  		rx_desc->wb.status_error0 = 0;
> >  
> > +		/* Put private info that changes on a per-packet basis
> > +		 * into xdp_buff_xsk->cb.
> > +		 */
> > +		ice_xdp_meta_set_desc(*xdp, rx_desc);
> > +
> >  		rx_desc++;
> >  		xdp++;
> >  	}
> > @@ -488,8 +493,7 @@ static bool __ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count)
> >  	xdp = ice_xdp_buf(rx_ring, ntu);
> >  
> >  	if (ntu + count >= rx_ring->count) {
> > -		nb_buffs_extra = ice_fill_rx_descs(rx_ring->xsk_pool, xdp,
> > -						   rx_desc,
> > +		nb_buffs_extra = ice_fill_rx_descs(rx_ring, xdp, rx_desc,
> >  						   rx_ring->count - ntu);
> >  		if (nb_buffs_extra != rx_ring->count - ntu) {
> >  			ntu += nb_buffs_extra;
> > @@ -502,7 +506,7 @@ static bool __ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count)
> >  		ice_release_rx_desc(rx_ring, 0);
> >  	}
> >  
> > -	nb_buffs = ice_fill_rx_descs(rx_ring->xsk_pool, xdp, rx_desc, count);
> > +	nb_buffs = ice_fill_rx_descs(rx_ring, xdp, rx_desc, count);
> >  
> >  	ntu += nb_buffs;
> >  	if (ntu == rx_ring->count)
> > @@ -752,6 +756,7 @@ static int ice_xmit_xdp_tx_zc(struct xdp_buff *xdp,
> >   * @xdp: xdp_buff used as input to the XDP program
> >   * @xdp_prog: XDP program to run
> >   * @xdp_ring: ring to be used for XDP_TX action
> > + * @rx_desc: packet descriptor
> 
> leftover comment from previous approach
> 

Will fix.

> >   *
> >   * Returns any of ICE_XDP_{PASS, CONSUMED, TX, REDIR}
> >   */
> > -- 
> > 2.41.0
> >
Fijalkowski, Maciej Nov. 16, 2023, 12:49 p.m. UTC | #2
On Wed, Nov 15, 2023 at 06:52:50PM +0100, Larysa Zaremba wrote:
> In AF_XDP ZC, xdp_buff is not stored on ring,
> instead it is provided by xsk_buff_pool.
> Space for metadata sources right after such buffers was already reserved
> in commit 94ecc5ca4dbf ("xsk: Add cb area to struct xdp_buff_xsk").
> 
> Some things (such as pointer to packet context) do not change on a
> per-packet basis, so they can be set at the same time as RX queue info.
> On the other hand, RX descriptor is unique for each packet, but is already
> known when setting DMA addresses. This minimizes performance impact of
> hints on regular packet processing.
> 
> Update AF_XDP ZC packet processing to support XDP hints.
> 
> Co-developed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> Signed-off-by: Larysa Zaremba <larysa.zaremba@intel.com>
> ---
>  drivers/net/ethernet/intel/ice/ice_base.c | 13 +++++++++++++
>  drivers/net/ethernet/intel/ice/ice_xsk.c  | 17 +++++++++++------
>  2 files changed, 24 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
> index 2d83f3c029e7..d3396c1c87a9 100644
> --- a/drivers/net/ethernet/intel/ice/ice_base.c
> +++ b/drivers/net/ethernet/intel/ice/ice_base.c
> @@ -519,6 +519,18 @@ static int ice_setup_rx_ctx(struct ice_rx_ring *ring)
>  	return 0;
>  }
>  
> +static void ice_xsk_pool_fill_cb(struct ice_rx_ring *ring)
> +{
> +	void *ctx_ptr = &ring->pkt_ctx;
> +	struct xsk_cb_desc desc = {};
> +
> +	desc.src = &ctx_ptr;
> +	desc.off = offsetof(struct ice_xdp_buff, pkt_ctx) -
> +		   sizeof(struct xdp_buff);

Took me a while to figure out this offset calculation:D

> +	desc.bytes = sizeof(ctx_ptr);
> +	xsk_pool_fill_cb(ring->xsk_pool, &desc);
> +}
> +
>  /**
>   * ice_vsi_cfg_rxq - Configure an Rx queue
>   * @ring: the ring being configured
> @@ -553,6 +565,7 @@ int ice_vsi_cfg_rxq(struct ice_rx_ring *ring)
>  			if (err)
>  				return err;
>  			xsk_pool_set_rxq_info(ring->xsk_pool, &ring->xdp_rxq);

A good place for XSK_CHECK_PRIV_TYPE() ?

> +			ice_xsk_pool_fill_cb(ring);
>  
>  			dev_info(dev, "Registered XDP mem model MEM_TYPE_XSK_BUFF_POOL on Rx ring %d\n",
>  				 ring->q_index);
> diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
> index 906e383e864a..a690e34ea8ae 100644
> --- a/drivers/net/ethernet/intel/ice/ice_xsk.c
> +++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
> @@ -433,7 +433,7 @@ int ice_xsk_pool_setup(struct ice_vsi *vsi, struct xsk_buff_pool *pool, u16 qid)
>  
>  /**
>   * ice_fill_rx_descs - pick buffers from XSK buffer pool and use it
> - * @pool: XSK Buffer pool to pull the buffers from
> + * @rx_ring: rx ring
>   * @xdp: SW ring of xdp_buff that will hold the buffers
>   * @rx_desc: Pointer to Rx descriptors that will be filled
>   * @count: The number of buffers to allocate
> @@ -445,19 +445,24 @@ int ice_xsk_pool_setup(struct ice_vsi *vsi, struct xsk_buff_pool *pool, u16 qid)
>   *
>   * Returns the amount of allocated Rx descriptors
>   */
> -static u16 ice_fill_rx_descs(struct xsk_buff_pool *pool, struct xdp_buff **xdp,
> +static u16 ice_fill_rx_descs(struct ice_rx_ring *rx_ring, struct xdp_buff **xdp,

Might be lack of caffeine on my side, but I don't see a reason for passing
rx_ring down to ice_fill_rx_descs() ?

I see I introduced this in the code example previously but it must have
been some leftover from previous hacking...

Help!

>  			     union ice_32b_rx_flex_desc *rx_desc, u16 count)
>  {
>  	dma_addr_t dma;
>  	u16 buffs;
>  	int i;
>  
> -	buffs = xsk_buff_alloc_batch(pool, xdp, count);
> +	buffs = xsk_buff_alloc_batch(rx_ring->xsk_pool, xdp, count);
>  	for (i = 0; i < buffs; i++) {
>  		dma = xsk_buff_xdp_get_dma(*xdp);
>  		rx_desc->read.pkt_addr = cpu_to_le64(dma);
>  		rx_desc->wb.status_error0 = 0;
>  
> +		/* Put private info that changes on a per-packet basis
> +		 * into xdp_buff_xsk->cb.
> +		 */
> +		ice_xdp_meta_set_desc(*xdp, rx_desc);
> +
>  		rx_desc++;
>  		xdp++;
>  	}
> @@ -488,8 +493,7 @@ static bool __ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count)
>  	xdp = ice_xdp_buf(rx_ring, ntu);
>  
>  	if (ntu + count >= rx_ring->count) {
> -		nb_buffs_extra = ice_fill_rx_descs(rx_ring->xsk_pool, xdp,
> -						   rx_desc,
> +		nb_buffs_extra = ice_fill_rx_descs(rx_ring, xdp, rx_desc,
>  						   rx_ring->count - ntu);
>  		if (nb_buffs_extra != rx_ring->count - ntu) {
>  			ntu += nb_buffs_extra;
> @@ -502,7 +506,7 @@ static bool __ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count)
>  		ice_release_rx_desc(rx_ring, 0);
>  	}
>  
> -	nb_buffs = ice_fill_rx_descs(rx_ring->xsk_pool, xdp, rx_desc, count);
> +	nb_buffs = ice_fill_rx_descs(rx_ring, xdp, rx_desc, count);
>  
>  	ntu += nb_buffs;
>  	if (ntu == rx_ring->count)
> @@ -752,6 +756,7 @@ static int ice_xmit_xdp_tx_zc(struct xdp_buff *xdp,
>   * @xdp: xdp_buff used as input to the XDP program
>   * @xdp_prog: XDP program to run
>   * @xdp_ring: ring to be used for XDP_TX action
> + * @rx_desc: packet descriptor

leftover comment from previous approach

>   *
>   * Returns any of ICE_XDP_{PASS, CONSUMED, TX, REDIR}
>   */
> -- 
> 2.41.0
>
Fijalkowski, Maciej Nov. 16, 2023, 3:01 p.m. UTC | #3
On Thu, Nov 16, 2023 at 04:01:01AM +0100, Larysa Zaremba wrote:
> On Thu, Nov 16, 2023 at 01:49:36PM +0100, Maciej Fijalkowski wrote:
> > On Wed, Nov 15, 2023 at 06:52:50PM +0100, Larysa Zaremba wrote:
> > > In AF_XDP ZC, xdp_buff is not stored on ring,
> > > instead it is provided by xsk_buff_pool.
> > > Space for metadata sources right after such buffers was already reserved
> > > in commit 94ecc5ca4dbf ("xsk: Add cb area to struct xdp_buff_xsk").
> > > 
> > > Some things (such as pointer to packet context) do not change on a
> > > per-packet basis, so they can be set at the same time as RX queue info.
> > > On the other hand, RX descriptor is unique for each packet, but is already
> > > known when setting DMA addresses. This minimizes performance impact of
> > > hints on regular packet processing.
> > > 
> > > Update AF_XDP ZC packet processing to support XDP hints.
> > > 
> > > Co-developed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> > > Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> > > Signed-off-by: Larysa Zaremba <larysa.zaremba@intel.com>
> > > ---
> > >  drivers/net/ethernet/intel/ice/ice_base.c | 13 +++++++++++++
> > >  drivers/net/ethernet/intel/ice/ice_xsk.c  | 17 +++++++++++------
> > >  2 files changed, 24 insertions(+), 6 deletions(-)
> > > 
> > > diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
> > > index 2d83f3c029e7..d3396c1c87a9 100644
> > > --- a/drivers/net/ethernet/intel/ice/ice_base.c
> > > +++ b/drivers/net/ethernet/intel/ice/ice_base.c
> > > @@ -519,6 +519,18 @@ static int ice_setup_rx_ctx(struct ice_rx_ring *ring)
> > >  	return 0;
> > >  }
> > >  
> > > +static void ice_xsk_pool_fill_cb(struct ice_rx_ring *ring)
> > > +{
> > > +	void *ctx_ptr = &ring->pkt_ctx;
> > > +	struct xsk_cb_desc desc = {};
> > > +
> > > +	desc.src = &ctx_ptr;
> > > +	desc.off = offsetof(struct ice_xdp_buff, pkt_ctx) -
> > > +		   sizeof(struct xdp_buff);
> > 
> > Took me a while to figure out this offset calculation:D
> >
> 
> Do you have a suggestion, how to make it easier to understand?
>  

Not really. I thought about moving the xdp_buff size subtraction to
xp_fill_cb but that would limit its usage. Let us keep it this way; I was
probably just being slow.
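
If it ever needs to read more obviously, a hypothetical (untested) helper
could spell the intent out:

/* Hypothetical helper, not part of this series: translate an offset
 * within struct ice_xdp_buff into an offset within the cb[] area of
 * struct xdp_buff_xsk, which starts right after the embedded xdp_buff.
 */
#define ICE_XDP_BUFF_CB_OFF(field) \
	(offsetof(struct ice_xdp_buff, field) - sizeof(struct xdp_buff))

/* which would turn the assignment in ice_xsk_pool_fill_cb() into: */
	desc.off = ICE_XDP_BUFF_CB_OFF(pkt_ctx);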

Patch

diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
index 2d83f3c029e7..d3396c1c87a9 100644
--- a/drivers/net/ethernet/intel/ice/ice_base.c
+++ b/drivers/net/ethernet/intel/ice/ice_base.c
@@ -519,6 +519,18 @@  static int ice_setup_rx_ctx(struct ice_rx_ring *ring)
 	return 0;
 }
 
+static void ice_xsk_pool_fill_cb(struct ice_rx_ring *ring)
+{
+	void *ctx_ptr = &ring->pkt_ctx;
+	struct xsk_cb_desc desc = {};
+
+	desc.src = &ctx_ptr;
+	desc.off = offsetof(struct ice_xdp_buff, pkt_ctx) -
+		   sizeof(struct xdp_buff);
+	desc.bytes = sizeof(ctx_ptr);
+	xsk_pool_fill_cb(ring->xsk_pool, &desc);
+}
+
 /**
  * ice_vsi_cfg_rxq - Configure an Rx queue
  * @ring: the ring being configured
@@ -553,6 +565,7 @@  int ice_vsi_cfg_rxq(struct ice_rx_ring *ring)
 			if (err)
 				return err;
 			xsk_pool_set_rxq_info(ring->xsk_pool, &ring->xdp_rxq);
+			ice_xsk_pool_fill_cb(ring);
 
 			dev_info(dev, "Registered XDP mem model MEM_TYPE_XSK_BUFF_POOL on Rx ring %d\n",
 				 ring->q_index);
diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index 906e383e864a..a690e34ea8ae 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -433,7 +433,7 @@  int ice_xsk_pool_setup(struct ice_vsi *vsi, struct xsk_buff_pool *pool, u16 qid)
 
 /**
  * ice_fill_rx_descs - pick buffers from XSK buffer pool and use it
- * @pool: XSK Buffer pool to pull the buffers from
+ * @rx_ring: rx ring
  * @xdp: SW ring of xdp_buff that will hold the buffers
  * @rx_desc: Pointer to Rx descriptors that will be filled
  * @count: The number of buffers to allocate
@@ -445,19 +445,24 @@  int ice_xsk_pool_setup(struct ice_vsi *vsi, struct xsk_buff_pool *pool, u16 qid)
  *
  * Returns the amount of allocated Rx descriptors
  */
-static u16 ice_fill_rx_descs(struct xsk_buff_pool *pool, struct xdp_buff **xdp,
+static u16 ice_fill_rx_descs(struct ice_rx_ring *rx_ring, struct xdp_buff **xdp,
 			     union ice_32b_rx_flex_desc *rx_desc, u16 count)
 {
 	dma_addr_t dma;
 	u16 buffs;
 	int i;
 
-	buffs = xsk_buff_alloc_batch(pool, xdp, count);
+	buffs = xsk_buff_alloc_batch(rx_ring->xsk_pool, xdp, count);
 	for (i = 0; i < buffs; i++) {
 		dma = xsk_buff_xdp_get_dma(*xdp);
 		rx_desc->read.pkt_addr = cpu_to_le64(dma);
 		rx_desc->wb.status_error0 = 0;
 
+		/* Put private info that changes on a per-packet basis
+		 * into xdp_buff_xsk->cb.
+		 */
+		ice_xdp_meta_set_desc(*xdp, rx_desc);
+
 		rx_desc++;
 		xdp++;
 	}
@@ -488,8 +493,7 @@  static bool __ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count)
 	xdp = ice_xdp_buf(rx_ring, ntu);
 
 	if (ntu + count >= rx_ring->count) {
-		nb_buffs_extra = ice_fill_rx_descs(rx_ring->xsk_pool, xdp,
-						   rx_desc,
+		nb_buffs_extra = ice_fill_rx_descs(rx_ring, xdp, rx_desc,
 						   rx_ring->count - ntu);
 		if (nb_buffs_extra != rx_ring->count - ntu) {
 			ntu += nb_buffs_extra;
@@ -502,7 +506,7 @@  static bool __ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count)
 		ice_release_rx_desc(rx_ring, 0);
 	}
 
-	nb_buffs = ice_fill_rx_descs(rx_ring->xsk_pool, xdp, rx_desc, count);
+	nb_buffs = ice_fill_rx_descs(rx_ring, xdp, rx_desc, count);
 
 	ntu += nb_buffs;
 	if (ntu == rx_ring->count)
@@ -752,6 +756,7 @@  static int ice_xmit_xdp_tx_zc(struct xdp_buff *xdp,
  * @xdp: xdp_buff used as input to the XDP program
  * @xdp_prog: XDP program to run
  * @xdp_ring: ring to be used for XDP_TX action
+ * @rx_desc: packet descriptor
  *
  * Returns any of ICE_XDP_{PASS, CONSUMED, TX, REDIR}
  */