
[v8,bpf-next,02/14] xdp: add xdp_shared_info data structure

Message ID: b204c5d4514134e1b2de9c1959da71514d1f1340.1617885385.git.lorenzo@kernel.org (mailing list archive)
State: Changes Requested
Delegated to: BPF
Series: mvneta: introduce XDP multi-buffer support

Checks

Context Check Description
netdev/cover_letter success Link
netdev/fixes_present success Link
netdev/patch_count success Link
netdev/tree_selection success Clearly marked for bpf-next
netdev/subject_prefix success Link
netdev/cc_maintainers warning 7 maintainers not CCed: yhs@fb.com kpsingh@kernel.org hawk@kernel.org andrii@kernel.org kafai@fb.com thomas.petazzoni@bootlin.com songliubraving@fb.com
netdev/source_inline success Was 0 now: 0
netdev/verify_signedoff success Link
netdev/module_param success Was 0 now: 0
netdev/build_32bit fail Errors and warnings before: 6680 this patch: 5518
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/verify_fixes success Link
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 255 lines checked
netdev/build_allmodconfig_warn fail Errors and warnings before: 6893 this patch: 6277
netdev/header_inline success Link

Commit Message

Lorenzo Bianconi April 8, 2021, 12:50 p.m. UTC
Introduce the xdp_shared_info data structure to hold info about
"non-linear" xdp frames. xdp_shared_info aliases skb_shared_info,
allowing most of the frags to be kept in the same cache line.
Introduce some xdp_shared_info helpers aligned to the skb_frag* ones.

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 drivers/net/ethernet/marvell/mvneta.c | 62 +++++++++++++++------------
 include/net/xdp.h                     | 55 ++++++++++++++++++++++--
 2 files changed, 85 insertions(+), 32 deletions(-)
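
For illustration, a minimal sketch (not part of this patch) of how a driver's Rx
path could record a page fragment in the buffer tailroom using the helpers added
below; rx_add_frag() is a hypothetical name used only for this example, while the
struct and the xdp_set_frag_* accessors come from the patch itself:

/* Sketch only: append one page fragment to a multi-buffer xdp_buff.
 * The caller is assumed to have checked nr_frags < MAX_SKB_FRAGS,
 * as mvneta does in mvneta_swbm_add_rx_fragment().
 */
static void rx_add_frag(struct xdp_buff *xdp, struct page *page,
			unsigned int offset, unsigned int len)
{
	struct xdp_shared_info *xdp_sinfo = xdp_get_shared_info_from_buff(xdp);
	skb_frag_t *frag = &xdp_sinfo->frags[xdp_sinfo->nr_frags++];

	xdp_set_frag_page(frag, page);
	xdp_set_frag_offset(frag, offset);
	xdp_set_frag_size(frag, len);
}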

Comments

Vladimir Oltean April 8, 2021, 1:39 p.m. UTC | #1
Hi Lorenzo,

On Thu, Apr 08, 2021 at 02:50:54PM +0200, Lorenzo Bianconi wrote:
> Introduce the xdp_shared_info data structure to hold info about
> "non-linear" xdp frames. xdp_shared_info aliases skb_shared_info,
> allowing most of the frags to be kept in the same cache line.
> Introduce some xdp_shared_info helpers aligned to the skb_frag* ones.
> 
> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
> ---

Would you mind updating all drivers that use skb_shared_info, such as
enetc, and not just mvneta? At the moment I get some build warnings:

drivers/net/ethernet/freescale/enetc/enetc.c: In function ‘enetc_xdp_frame_to_xdp_tx_swbd’:
drivers/net/ethernet/freescale/enetc/enetc.c:888:9: error: assignment to ‘struct skb_shared_info *’ from incompatible pointer type ‘struct xdp_shared_info *’ [-Werror=incompatible-pointer-types]
  888 |  shinfo = xdp_get_shared_info_from_frame(xdp_frame);
      |         ^
drivers/net/ethernet/freescale/enetc/enetc.c: In function ‘enetc_map_rx_buff_to_xdp’:
drivers/net/ethernet/freescale/enetc/enetc.c:975:9: error: assignment to ‘struct skb_shared_info *’ from incompatible pointer type ‘struct xdp_shared_info *’ [-Werror=incompatible-pointer-types]
  975 |  shinfo = xdp_get_shared_info_from_buff(xdp_buff);
      |         ^
drivers/net/ethernet/freescale/enetc/enetc.c: In function ‘enetc_add_rx_buff_to_xdp’:
drivers/net/ethernet/freescale/enetc/enetc.c:982:35: error: initialization of ‘struct skb_shared_info *’ from incompatible pointer type ‘struct xdp_shared_info *’ [-Werror=incompatible-pointer-types]
  982 |  struct skb_shared_info *shinfo = xdp_get_shared_info_from_buff(xdp_buff);
      |                                   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lorenzo Bianconi April 8, 2021, 2:26 p.m. UTC | #2
> Hi Lorenzo,
> 
> On Thu, Apr 08, 2021 at 02:50:54PM +0200, Lorenzo Bianconi wrote:
> > Introduce the xdp_shared_info data structure to hold info about
> > "non-linear" xdp frames. xdp_shared_info aliases skb_shared_info,
> > allowing most of the frags to be kept in the same cache line.
> > Introduce some xdp_shared_info helpers aligned to the skb_frag* ones.
> > 
> > Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
> > ---
> 
> Would you mind updating all drivers that use skb_shared_info, such as
> enetc, and not just mvneta? At the moment I get some build warnings:
> 
> drivers/net/ethernet/freescale/enetc/enetc.c: In function ‘enetc_xdp_frame_to_xdp_tx_swbd’:
> drivers/net/ethernet/freescale/enetc/enetc.c:888:9: error: assignment to ‘struct skb_shared_info *’ from incompatible pointer type ‘struct xdp_shared_info *’ [-Werror=incompatible-pointer-types]
>   888 |  shinfo = xdp_get_shared_info_from_frame(xdp_frame);
>       |         ^
> drivers/net/ethernet/freescale/enetc/enetc.c: In function ‘enetc_map_rx_buff_to_xdp’:
> drivers/net/ethernet/freescale/enetc/enetc.c:975:9: error: assignment to ‘struct skb_shared_info *’ from incompatible pointer type ‘struct xdp_shared_info *’ [-Werror=incompatible-pointer-types]
>   975 |  shinfo = xdp_get_shared_info_from_buff(xdp_buff);
>       |         ^
> drivers/net/ethernet/freescale/enetc/enetc.c: In function ‘enetc_add_rx_buff_to_xdp’:
> drivers/net/ethernet/freescale/enetc/enetc.c:982:35: error: initialization of ‘struct skb_shared_info *’ from incompatible pointer type ‘struct xdp_shared_info *’ [-Werror=incompatible-pointer-types]
>   982 |  struct skb_shared_info *shinfo = xdp_get_shared_info_from_buff(xdp_buff);
>       |                                   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> 

Hi Vladimir,

Ack, I will fix it in v9; enetc was not compiled on my machine, thanks for pointing
this out :)

Regards,
Lorenzo
kernel test robot April 8, 2021, 6:06 p.m. UTC | #3
Hi Lorenzo,

I love your patch! Yet something to improve:

[auto build test ERROR on bpf-next/master]

url:    https://github.com/0day-ci/linux/commits/Lorenzo-Bianconi/mvneta-introduce-XDP-multi-buffer-support/20210408-205429
base:   https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
config: ia64-allmodconfig (attached as .config)
compiler: ia64-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/0day-ci/linux/commit/3652b59c1a912ad4f2d609e074eeb332f44ba4d7
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Lorenzo-Bianconi/mvneta-introduce-XDP-multi-buffer-support/20210408-205429
        git checkout 3652b59c1a912ad4f2d609e074eeb332f44ba4d7
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=ia64 

If you fix the issue, kindly add the following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   drivers/net/ethernet/freescale/enetc/enetc.c: In function 'enetc_xdp_frame_to_xdp_tx_swbd':
>> drivers/net/ethernet/freescale/enetc/enetc.c:888:9: error: assignment to 'struct skb_shared_info *' from incompatible pointer type 'struct xdp_shared_info *' [-Werror=incompatible-pointer-types]
     888 |  shinfo = xdp_get_shared_info_from_frame(xdp_frame);
         |         ^
   drivers/net/ethernet/freescale/enetc/enetc.c: In function 'enetc_map_rx_buff_to_xdp':
   drivers/net/ethernet/freescale/enetc/enetc.c:975:9: error: assignment to 'struct skb_shared_info *' from incompatible pointer type 'struct xdp_shared_info *' [-Werror=incompatible-pointer-types]
     975 |  shinfo = xdp_get_shared_info_from_buff(xdp_buff);
         |         ^
   drivers/net/ethernet/freescale/enetc/enetc.c: In function 'enetc_add_rx_buff_to_xdp':
>> drivers/net/ethernet/freescale/enetc/enetc.c:982:35: error: initialization of 'struct skb_shared_info *' from incompatible pointer type 'struct xdp_shared_info *' [-Werror=incompatible-pointer-types]
     982 |  struct skb_shared_info *shinfo = xdp_get_shared_info_from_buff(xdp_buff);
         |                                   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   cc1: some warnings being treated as errors


vim +888 drivers/net/ethernet/freescale/enetc/enetc.c

7ed2bc80074ed4 Vladimir Oltean 2021-03-31  858  
9d2b68cc108db2 Vladimir Oltean 2021-03-31  859  static int enetc_xdp_frame_to_xdp_tx_swbd(struct enetc_bdr *tx_ring,
9d2b68cc108db2 Vladimir Oltean 2021-03-31  860  					  struct enetc_tx_swbd *xdp_tx_arr,
9d2b68cc108db2 Vladimir Oltean 2021-03-31  861  					  struct xdp_frame *xdp_frame)
9d2b68cc108db2 Vladimir Oltean 2021-03-31  862  {
9d2b68cc108db2 Vladimir Oltean 2021-03-31  863  	struct enetc_tx_swbd *xdp_tx_swbd = &xdp_tx_arr[0];
9d2b68cc108db2 Vladimir Oltean 2021-03-31  864  	struct skb_shared_info *shinfo;
9d2b68cc108db2 Vladimir Oltean 2021-03-31  865  	void *data = xdp_frame->data;
9d2b68cc108db2 Vladimir Oltean 2021-03-31  866  	int len = xdp_frame->len;
9d2b68cc108db2 Vladimir Oltean 2021-03-31  867  	skb_frag_t *frag;
9d2b68cc108db2 Vladimir Oltean 2021-03-31  868  	dma_addr_t dma;
9d2b68cc108db2 Vladimir Oltean 2021-03-31  869  	unsigned int f;
9d2b68cc108db2 Vladimir Oltean 2021-03-31  870  	int n = 0;
9d2b68cc108db2 Vladimir Oltean 2021-03-31  871  
9d2b68cc108db2 Vladimir Oltean 2021-03-31  872  	dma = dma_map_single(tx_ring->dev, data, len, DMA_TO_DEVICE);
9d2b68cc108db2 Vladimir Oltean 2021-03-31  873  	if (unlikely(dma_mapping_error(tx_ring->dev, dma))) {
9d2b68cc108db2 Vladimir Oltean 2021-03-31  874  		netdev_err(tx_ring->ndev, "DMA map error\n");
9d2b68cc108db2 Vladimir Oltean 2021-03-31  875  		return -1;
9d2b68cc108db2 Vladimir Oltean 2021-03-31  876  	}
9d2b68cc108db2 Vladimir Oltean 2021-03-31  877  
9d2b68cc108db2 Vladimir Oltean 2021-03-31  878  	xdp_tx_swbd->dma = dma;
9d2b68cc108db2 Vladimir Oltean 2021-03-31  879  	xdp_tx_swbd->dir = DMA_TO_DEVICE;
9d2b68cc108db2 Vladimir Oltean 2021-03-31  880  	xdp_tx_swbd->len = len;
9d2b68cc108db2 Vladimir Oltean 2021-03-31  881  	xdp_tx_swbd->is_xdp_redirect = true;
9d2b68cc108db2 Vladimir Oltean 2021-03-31  882  	xdp_tx_swbd->is_eof = false;
9d2b68cc108db2 Vladimir Oltean 2021-03-31  883  	xdp_tx_swbd->xdp_frame = NULL;
9d2b68cc108db2 Vladimir Oltean 2021-03-31  884  
9d2b68cc108db2 Vladimir Oltean 2021-03-31  885  	n++;
9d2b68cc108db2 Vladimir Oltean 2021-03-31  886  	xdp_tx_swbd = &xdp_tx_arr[n];
9d2b68cc108db2 Vladimir Oltean 2021-03-31  887  
9d2b68cc108db2 Vladimir Oltean 2021-03-31 @888  	shinfo = xdp_get_shared_info_from_frame(xdp_frame);
9d2b68cc108db2 Vladimir Oltean 2021-03-31  889  
9d2b68cc108db2 Vladimir Oltean 2021-03-31  890  	for (f = 0, frag = &shinfo->frags[0]; f < shinfo->nr_frags;
9d2b68cc108db2 Vladimir Oltean 2021-03-31  891  	     f++, frag++) {
9d2b68cc108db2 Vladimir Oltean 2021-03-31  892  		data = skb_frag_address(frag);
9d2b68cc108db2 Vladimir Oltean 2021-03-31  893  		len = skb_frag_size(frag);
9d2b68cc108db2 Vladimir Oltean 2021-03-31  894  
9d2b68cc108db2 Vladimir Oltean 2021-03-31  895  		dma = dma_map_single(tx_ring->dev, data, len, DMA_TO_DEVICE);
9d2b68cc108db2 Vladimir Oltean 2021-03-31  896  		if (unlikely(dma_mapping_error(tx_ring->dev, dma))) {
9d2b68cc108db2 Vladimir Oltean 2021-03-31  897  			/* Undo the DMA mapping for all fragments */
9d2b68cc108db2 Vladimir Oltean 2021-03-31  898  			while (n-- >= 0)
9d2b68cc108db2 Vladimir Oltean 2021-03-31  899  				enetc_unmap_tx_buff(tx_ring, &xdp_tx_arr[n]);
9d2b68cc108db2 Vladimir Oltean 2021-03-31  900  
9d2b68cc108db2 Vladimir Oltean 2021-03-31  901  			netdev_err(tx_ring->ndev, "DMA map error\n");
9d2b68cc108db2 Vladimir Oltean 2021-03-31  902  			return -1;
9d2b68cc108db2 Vladimir Oltean 2021-03-31  903  		}
9d2b68cc108db2 Vladimir Oltean 2021-03-31  904  
9d2b68cc108db2 Vladimir Oltean 2021-03-31  905  		xdp_tx_swbd->dma = dma;
9d2b68cc108db2 Vladimir Oltean 2021-03-31  906  		xdp_tx_swbd->dir = DMA_TO_DEVICE;
9d2b68cc108db2 Vladimir Oltean 2021-03-31  907  		xdp_tx_swbd->len = len;
9d2b68cc108db2 Vladimir Oltean 2021-03-31  908  		xdp_tx_swbd->is_xdp_redirect = true;
9d2b68cc108db2 Vladimir Oltean 2021-03-31  909  		xdp_tx_swbd->is_eof = false;
9d2b68cc108db2 Vladimir Oltean 2021-03-31  910  		xdp_tx_swbd->xdp_frame = NULL;
9d2b68cc108db2 Vladimir Oltean 2021-03-31  911  
9d2b68cc108db2 Vladimir Oltean 2021-03-31  912  		n++;
9d2b68cc108db2 Vladimir Oltean 2021-03-31  913  		xdp_tx_swbd = &xdp_tx_arr[n];
9d2b68cc108db2 Vladimir Oltean 2021-03-31  914  	}
9d2b68cc108db2 Vladimir Oltean 2021-03-31  915  
9d2b68cc108db2 Vladimir Oltean 2021-03-31  916  	xdp_tx_arr[n - 1].is_eof = true;
9d2b68cc108db2 Vladimir Oltean 2021-03-31  917  	xdp_tx_arr[n - 1].xdp_frame = xdp_frame;
9d2b68cc108db2 Vladimir Oltean 2021-03-31  918  
9d2b68cc108db2 Vladimir Oltean 2021-03-31  919  	return n;
9d2b68cc108db2 Vladimir Oltean 2021-03-31  920  }
9d2b68cc108db2 Vladimir Oltean 2021-03-31  921  
9d2b68cc108db2 Vladimir Oltean 2021-03-31  922  int enetc_xdp_xmit(struct net_device *ndev, int num_frames,
9d2b68cc108db2 Vladimir Oltean 2021-03-31  923  		   struct xdp_frame **frames, u32 flags)
9d2b68cc108db2 Vladimir Oltean 2021-03-31  924  {
9d2b68cc108db2 Vladimir Oltean 2021-03-31  925  	struct enetc_tx_swbd xdp_redirect_arr[ENETC_MAX_SKB_FRAGS] = {0};
9d2b68cc108db2 Vladimir Oltean 2021-03-31  926  	struct enetc_ndev_priv *priv = netdev_priv(ndev);
9d2b68cc108db2 Vladimir Oltean 2021-03-31  927  	struct enetc_bdr *tx_ring;
9d2b68cc108db2 Vladimir Oltean 2021-03-31  928  	int xdp_tx_bd_cnt, i, k;
9d2b68cc108db2 Vladimir Oltean 2021-03-31  929  	int xdp_tx_frm_cnt = 0;
9d2b68cc108db2 Vladimir Oltean 2021-03-31  930  
9d2b68cc108db2 Vladimir Oltean 2021-03-31  931  	tx_ring = priv->tx_ring[smp_processor_id()];
9d2b68cc108db2 Vladimir Oltean 2021-03-31  932  
9d2b68cc108db2 Vladimir Oltean 2021-03-31  933  	prefetchw(ENETC_TXBD(*tx_ring, tx_ring->next_to_use));
9d2b68cc108db2 Vladimir Oltean 2021-03-31  934  
9d2b68cc108db2 Vladimir Oltean 2021-03-31  935  	for (k = 0; k < num_frames; k++) {
9d2b68cc108db2 Vladimir Oltean 2021-03-31  936  		xdp_tx_bd_cnt = enetc_xdp_frame_to_xdp_tx_swbd(tx_ring,
9d2b68cc108db2 Vladimir Oltean 2021-03-31  937  							       xdp_redirect_arr,
9d2b68cc108db2 Vladimir Oltean 2021-03-31  938  							       frames[k]);
9d2b68cc108db2 Vladimir Oltean 2021-03-31  939  		if (unlikely(xdp_tx_bd_cnt < 0))
9d2b68cc108db2 Vladimir Oltean 2021-03-31  940  			break;
9d2b68cc108db2 Vladimir Oltean 2021-03-31  941  
9d2b68cc108db2 Vladimir Oltean 2021-03-31  942  		if (unlikely(!enetc_xdp_tx(tx_ring, xdp_redirect_arr,
9d2b68cc108db2 Vladimir Oltean 2021-03-31  943  					   xdp_tx_bd_cnt))) {
9d2b68cc108db2 Vladimir Oltean 2021-03-31  944  			for (i = 0; i < xdp_tx_bd_cnt; i++)
9d2b68cc108db2 Vladimir Oltean 2021-03-31  945  				enetc_unmap_tx_buff(tx_ring,
9d2b68cc108db2 Vladimir Oltean 2021-03-31  946  						    &xdp_redirect_arr[i]);
9d2b68cc108db2 Vladimir Oltean 2021-03-31  947  			tx_ring->stats.xdp_tx_drops++;
9d2b68cc108db2 Vladimir Oltean 2021-03-31  948  			break;
9d2b68cc108db2 Vladimir Oltean 2021-03-31  949  		}
9d2b68cc108db2 Vladimir Oltean 2021-03-31  950  
9d2b68cc108db2 Vladimir Oltean 2021-03-31  951  		xdp_tx_frm_cnt++;
9d2b68cc108db2 Vladimir Oltean 2021-03-31  952  	}
9d2b68cc108db2 Vladimir Oltean 2021-03-31  953  
9d2b68cc108db2 Vladimir Oltean 2021-03-31  954  	if (unlikely((flags & XDP_XMIT_FLUSH) || k != xdp_tx_frm_cnt))
9d2b68cc108db2 Vladimir Oltean 2021-03-31  955  		enetc_update_tx_ring_tail(tx_ring);
9d2b68cc108db2 Vladimir Oltean 2021-03-31  956  
9d2b68cc108db2 Vladimir Oltean 2021-03-31  957  	tx_ring->stats.xdp_tx += xdp_tx_frm_cnt;
9d2b68cc108db2 Vladimir Oltean 2021-03-31  958  
9d2b68cc108db2 Vladimir Oltean 2021-03-31  959  	return xdp_tx_frm_cnt;
9d2b68cc108db2 Vladimir Oltean 2021-03-31  960  }
9d2b68cc108db2 Vladimir Oltean 2021-03-31  961  
d1b15102dd16ad Vladimir Oltean 2021-03-31  962  static void enetc_map_rx_buff_to_xdp(struct enetc_bdr *rx_ring, int i,
d1b15102dd16ad Vladimir Oltean 2021-03-31  963  				     struct xdp_buff *xdp_buff, u16 size)
d1b15102dd16ad Vladimir Oltean 2021-03-31  964  {
d1b15102dd16ad Vladimir Oltean 2021-03-31  965  	struct enetc_rx_swbd *rx_swbd = enetc_get_rx_buff(rx_ring, i, size);
d1b15102dd16ad Vladimir Oltean 2021-03-31  966  	void *hard_start = page_address(rx_swbd->page) + rx_swbd->page_offset;
d1b15102dd16ad Vladimir Oltean 2021-03-31  967  	struct skb_shared_info *shinfo;
d1b15102dd16ad Vladimir Oltean 2021-03-31  968  
7ed2bc80074ed4 Vladimir Oltean 2021-03-31  969  	/* To be used for XDP_TX */
7ed2bc80074ed4 Vladimir Oltean 2021-03-31  970  	rx_swbd->len = size;
7ed2bc80074ed4 Vladimir Oltean 2021-03-31  971  
d1b15102dd16ad Vladimir Oltean 2021-03-31  972  	xdp_prepare_buff(xdp_buff, hard_start - rx_ring->buffer_offset,
d1b15102dd16ad Vladimir Oltean 2021-03-31  973  			 rx_ring->buffer_offset, size, false);
d1b15102dd16ad Vladimir Oltean 2021-03-31  974  
d1b15102dd16ad Vladimir Oltean 2021-03-31  975  	shinfo = xdp_get_shared_info_from_buff(xdp_buff);
d1b15102dd16ad Vladimir Oltean 2021-03-31  976  	shinfo->nr_frags = 0;
d1b15102dd16ad Vladimir Oltean 2021-03-31  977  }
d1b15102dd16ad Vladimir Oltean 2021-03-31  978  
d1b15102dd16ad Vladimir Oltean 2021-03-31  979  static void enetc_add_rx_buff_to_xdp(struct enetc_bdr *rx_ring, int i,
d1b15102dd16ad Vladimir Oltean 2021-03-31  980  				     u16 size, struct xdp_buff *xdp_buff)
d1b15102dd16ad Vladimir Oltean 2021-03-31  981  {
d1b15102dd16ad Vladimir Oltean 2021-03-31 @982  	struct skb_shared_info *shinfo = xdp_get_shared_info_from_buff(xdp_buff);
d1b15102dd16ad Vladimir Oltean 2021-03-31  983  	struct enetc_rx_swbd *rx_swbd = enetc_get_rx_buff(rx_ring, i, size);
d1b15102dd16ad Vladimir Oltean 2021-03-31  984  	skb_frag_t *frag = &shinfo->frags[shinfo->nr_frags];
d1b15102dd16ad Vladimir Oltean 2021-03-31  985  
7ed2bc80074ed4 Vladimir Oltean 2021-03-31  986  	/* To be used for XDP_TX */
7ed2bc80074ed4 Vladimir Oltean 2021-03-31  987  	rx_swbd->len = size;
7ed2bc80074ed4 Vladimir Oltean 2021-03-31  988  
d1b15102dd16ad Vladimir Oltean 2021-03-31  989  	skb_frag_off_set(frag, rx_swbd->page_offset);
d1b15102dd16ad Vladimir Oltean 2021-03-31  990  	skb_frag_size_set(frag, size);
d1b15102dd16ad Vladimir Oltean 2021-03-31  991  	__skb_frag_set_page(frag, rx_swbd->page);
d1b15102dd16ad Vladimir Oltean 2021-03-31  992  
d1b15102dd16ad Vladimir Oltean 2021-03-31  993  	shinfo->nr_frags++;
d1b15102dd16ad Vladimir Oltean 2021-03-31  994  }
d1b15102dd16ad Vladimir Oltean 2021-03-31  995  

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
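
For reference, one way the enetc Rx helper quoted above could be adapted to the
new type (a sketch of the kind of change needed, not the actual v9 fix): switch
the local shared-info pointer to struct xdp_shared_info and use the xdp frag
setters in place of the skb_frag_* ones:

static void enetc_add_rx_buff_to_xdp(struct enetc_bdr *rx_ring, int i,
				     u16 size, struct xdp_buff *xdp_buff)
{
	/* xdp_shared_info replaces skb_shared_info in the xdp_buff tailroom */
	struct xdp_shared_info *xdp_sinfo = xdp_get_shared_info_from_buff(xdp_buff);
	struct enetc_rx_swbd *rx_swbd = enetc_get_rx_buff(rx_ring, i, size);
	skb_frag_t *frag = &xdp_sinfo->frags[xdp_sinfo->nr_frags];

	/* To be used for XDP_TX */
	rx_swbd->len = size;

	xdp_set_frag_offset(frag, rx_swbd->page_offset);
	xdp_set_frag_size(frag, size);
	xdp_set_frag_page(frag, rx_swbd->page);

	xdp_sinfo->nr_frags++;
}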

Patch

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index f20dfd1d7a6b..a52e132fd2cf 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -2036,14 +2036,17 @@  int mvneta_rx_refill_queue(struct mvneta_port *pp, struct mvneta_rx_queue *rxq)
 
 static void
 mvneta_xdp_put_buff(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
-		    struct xdp_buff *xdp, struct skb_shared_info *sinfo,
+		    struct xdp_buff *xdp, struct xdp_shared_info *xdp_sinfo,
 		    int sync_len)
 {
 	int i;
 
-	for (i = 0; i < sinfo->nr_frags; i++)
+	for (i = 0; i < xdp_sinfo->nr_frags; i++) {
+		skb_frag_t *frag = &xdp_sinfo->frags[i];
+
 		page_pool_put_full_page(rxq->page_pool,
-					skb_frag_page(&sinfo->frags[i]), true);
+					xdp_get_frag_page(frag), true);
+	}
 	page_pool_put_page(rxq->page_pool, virt_to_head_page(xdp->data),
 			   sync_len, true);
 }
@@ -2181,7 +2184,7 @@  mvneta_run_xdp(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 	       struct bpf_prog *prog, struct xdp_buff *xdp,
 	       u32 frame_sz, struct mvneta_stats *stats)
 {
-	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
+	struct xdp_shared_info *xdp_sinfo = xdp_get_shared_info_from_buff(xdp);
 	unsigned int len, data_len, sync;
 	u32 ret, act;
 
@@ -2202,7 +2205,7 @@  mvneta_run_xdp(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 
 		err = xdp_do_redirect(pp->dev, xdp, prog);
 		if (unlikely(err)) {
-			mvneta_xdp_put_buff(pp, rxq, xdp, sinfo, sync);
+			mvneta_xdp_put_buff(pp, rxq, xdp, xdp_sinfo, sync);
 			ret = MVNETA_XDP_DROPPED;
 		} else {
 			ret = MVNETA_XDP_REDIR;
@@ -2213,7 +2216,7 @@  mvneta_run_xdp(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 	case XDP_TX:
 		ret = mvneta_xdp_xmit_back(pp, xdp);
 		if (ret != MVNETA_XDP_TX)
-			mvneta_xdp_put_buff(pp, rxq, xdp, sinfo, sync);
+			mvneta_xdp_put_buff(pp, rxq, xdp, xdp_sinfo, sync);
 		break;
 	default:
 		bpf_warn_invalid_xdp_action(act);
@@ -2222,7 +2225,7 @@  mvneta_run_xdp(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 		trace_xdp_exception(pp->dev, prog, act);
 		fallthrough;
 	case XDP_DROP:
-		mvneta_xdp_put_buff(pp, rxq, xdp, sinfo, sync);
+		mvneta_xdp_put_buff(pp, rxq, xdp, xdp_sinfo, sync);
 		ret = MVNETA_XDP_DROPPED;
 		stats->xdp_drop++;
 		break;
@@ -2243,9 +2246,9 @@  mvneta_swbm_rx_frame(struct mvneta_port *pp,
 {
 	unsigned char *data = page_address(page);
 	int data_len = -MVNETA_MH_SIZE, len;
+	struct xdp_shared_info *xdp_sinfo;
 	struct net_device *dev = pp->dev;
 	enum dma_data_direction dma_dir;
-	struct skb_shared_info *sinfo;
 
 	if (*size > MVNETA_MAX_RX_BUF_SIZE) {
 		len = MVNETA_MAX_RX_BUF_SIZE;
@@ -2268,8 +2271,8 @@  mvneta_swbm_rx_frame(struct mvneta_port *pp,
 	xdp_prepare_buff(xdp, data, pp->rx_offset_correction + MVNETA_MH_SIZE,
 			 data_len, false);
 
-	sinfo = xdp_get_shared_info_from_buff(xdp);
-	sinfo->nr_frags = 0;
+	xdp_sinfo = xdp_get_shared_info_from_buff(xdp);
+	xdp_sinfo->nr_frags = 0;
 }
 
 static void
@@ -2277,7 +2280,7 @@  mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,
 			    struct mvneta_rx_desc *rx_desc,
 			    struct mvneta_rx_queue *rxq,
 			    struct xdp_buff *xdp, int *size,
-			    struct skb_shared_info *xdp_sinfo,
+			    struct xdp_shared_info *xdp_sinfo,
 			    struct page *page)
 {
 	struct net_device *dev = pp->dev;
@@ -2300,13 +2303,13 @@  mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,
 	if (data_len > 0 && xdp_sinfo->nr_frags < MAX_SKB_FRAGS) {
 		skb_frag_t *frag = &xdp_sinfo->frags[xdp_sinfo->nr_frags++];
 
-		skb_frag_off_set(frag, pp->rx_offset_correction);
-		skb_frag_size_set(frag, data_len);
-		__skb_frag_set_page(frag, page);
+		xdp_set_frag_offset(frag, pp->rx_offset_correction);
+		xdp_set_frag_size(frag, data_len);
+		xdp_set_frag_page(frag, page);
 
 		/* last fragment */
 		if (len == *size) {
-			struct skb_shared_info *sinfo;
+			struct xdp_shared_info *sinfo;
 
 			sinfo = xdp_get_shared_info_from_buff(xdp);
 			sinfo->nr_frags = xdp_sinfo->nr_frags;
@@ -2323,10 +2326,13 @@  static struct sk_buff *
 mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 		      struct xdp_buff *xdp, u32 desc_status)
 {
-	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
-	int i, num_frags = sinfo->nr_frags;
+	struct xdp_shared_info *xdp_sinfo = xdp_get_shared_info_from_buff(xdp);
+	int i, num_frags = xdp_sinfo->nr_frags;
+	skb_frag_t frag_list[MAX_SKB_FRAGS];
 	struct sk_buff *skb;
 
+	memcpy(frag_list, xdp_sinfo->frags, sizeof(skb_frag_t) * num_frags);
+
 	skb = build_skb(xdp->data_hard_start, PAGE_SIZE);
 	if (!skb)
 		return ERR_PTR(-ENOMEM);
@@ -2338,12 +2344,12 @@  mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 	mvneta_rx_csum(pp, desc_status, skb);
 
 	for (i = 0; i < num_frags; i++) {
-		skb_frag_t *frag = &sinfo->frags[i];
+		struct page *page = xdp_get_frag_page(&frag_list[i]);
 
 		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
-				skb_frag_page(frag), skb_frag_off(frag),
-				skb_frag_size(frag), PAGE_SIZE);
-		page_pool_release_page(rxq->page_pool, skb_frag_page(frag));
+				page, xdp_get_frag_offset(&frag_list[i]),
+				xdp_get_frag_size(&frag_list[i]), PAGE_SIZE);
+		page_pool_release_page(rxq->page_pool, page);
 	}
 
 	return skb;
@@ -2356,7 +2362,7 @@  static int mvneta_rx_swbm(struct napi_struct *napi,
 {
 	int rx_proc = 0, rx_todo, refill, size = 0;
 	struct net_device *dev = pp->dev;
-	struct skb_shared_info sinfo;
+	struct xdp_shared_info xdp_sinfo;
 	struct mvneta_stats ps = {};
 	struct bpf_prog *xdp_prog;
 	u32 desc_status, frame_sz;
@@ -2365,7 +2371,7 @@  static int mvneta_rx_swbm(struct napi_struct *napi,
 	xdp_init_buff(&xdp_buf, PAGE_SIZE, &rxq->xdp_rxq);
 	xdp_buf.data_hard_start = NULL;
 
-	sinfo.nr_frags = 0;
+	xdp_sinfo.nr_frags = 0;
 
 	/* Get number of received packets */
 	rx_todo = mvneta_rxq_busy_desc_num_get(pp, rxq);
@@ -2409,7 +2415,7 @@  static int mvneta_rx_swbm(struct napi_struct *napi,
 			}
 
 			mvneta_swbm_add_rx_fragment(pp, rx_desc, rxq, &xdp_buf,
-						    &size, &sinfo, page);
+						    &size, &xdp_sinfo, page);
 		} /* Middle or Last descriptor */
 
 		if (!(rx_status & MVNETA_RXD_LAST_DESC))
@@ -2417,7 +2423,7 @@  static int mvneta_rx_swbm(struct napi_struct *napi,
 			continue;
 
 		if (size) {
-			mvneta_xdp_put_buff(pp, rxq, &xdp_buf, &sinfo, -1);
+			mvneta_xdp_put_buff(pp, rxq, &xdp_buf, &xdp_sinfo, -1);
 			goto next;
 		}
 
@@ -2429,7 +2435,7 @@  static int mvneta_rx_swbm(struct napi_struct *napi,
 		if (IS_ERR(skb)) {
 			struct mvneta_pcpu_stats *stats = this_cpu_ptr(pp->stats);
 
-			mvneta_xdp_put_buff(pp, rxq, &xdp_buf, &sinfo, -1);
+			mvneta_xdp_put_buff(pp, rxq, &xdp_buf, &xdp_sinfo, -1);
 
 			u64_stats_update_begin(&stats->syncp);
 			stats->es.skb_alloc_error++;
@@ -2446,12 +2452,12 @@  static int mvneta_rx_swbm(struct napi_struct *napi,
 		napi_gro_receive(napi, skb);
 next:
 		xdp_buf.data_hard_start = NULL;
-		sinfo.nr_frags = 0;
+		xdp_sinfo.nr_frags = 0;
 	}
 	rcu_read_unlock();
 
 	if (xdp_buf.data_hard_start)
-		mvneta_xdp_put_buff(pp, rxq, &xdp_buf, &sinfo, -1);
+		mvneta_xdp_put_buff(pp, rxq, &xdp_buf, &xdp_sinfo, -1);
 
 	if (ps.xdp_redirect)
 		xdp_do_flush_map();
diff --git a/include/net/xdp.h b/include/net/xdp.h
index 842580a61563..02aea7696d15 100644
--- a/include/net/xdp.h
+++ b/include/net/xdp.h
@@ -109,10 +109,54 @@  xdp_prepare_buff(struct xdp_buff *xdp, unsigned char *hard_start,
 	((xdp)->data_hard_start + (xdp)->frame_sz -	\
 	 SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))
 
-static inline struct skb_shared_info *
+struct xdp_shared_info {
+	u16 nr_frags;
+	u16 data_length; /* paged area length */
+	skb_frag_t frags[MAX_SKB_FRAGS];
+};
+
+static inline struct xdp_shared_info *
 xdp_get_shared_info_from_buff(struct xdp_buff *xdp)
 {
-	return (struct skb_shared_info *)xdp_data_hard_end(xdp);
+	BUILD_BUG_ON(sizeof(struct xdp_shared_info) >
+		     sizeof(struct skb_shared_info));
+	return (struct xdp_shared_info *)xdp_data_hard_end(xdp);
+}
+
+static inline struct page *xdp_get_frag_page(const skb_frag_t *frag)
+{
+	return frag->bv_page;
+}
+
+static inline unsigned int xdp_get_frag_offset(const skb_frag_t *frag)
+{
+	return frag->bv_offset;
+}
+
+static inline unsigned int xdp_get_frag_size(const skb_frag_t *frag)
+{
+	return frag->bv_len;
+}
+
+static inline void *xdp_get_frag_address(const skb_frag_t *frag)
+{
+	return page_address(xdp_get_frag_page(frag)) +
+	       xdp_get_frag_offset(frag);
+}
+
+static inline void xdp_set_frag_page(skb_frag_t *frag, struct page *page)
+{
+	frag->bv_page = page;
+}
+
+static inline void xdp_set_frag_offset(skb_frag_t *frag, u32 offset)
+{
+	frag->bv_offset = offset;
+}
+
+static inline void xdp_set_frag_size(skb_frag_t *frag, u32 size)
+{
+	frag->bv_len = size;
 }
 
 struct xdp_frame {
@@ -142,12 +186,15 @@  static __always_inline void xdp_frame_bulk_init(struct xdp_frame_bulk *bq)
 	bq->xa = NULL;
 }
 
-static inline struct skb_shared_info *
+static inline struct xdp_shared_info *
 xdp_get_shared_info_from_frame(struct xdp_frame *frame)
 {
 	void *data_hard_start = frame->data - frame->headroom - sizeof(*frame);
 
-	return (struct skb_shared_info *)(data_hard_start + frame->frame_sz -
+	/* xdp_shared_info struct must be aligned to skb_shared_info
+	 * area in buffer tailroom
+	 */
+	return (struct xdp_shared_info *)(data_hard_start + frame->frame_sz -
 				SKB_DATA_ALIGN(sizeof(struct skb_shared_info)));
 }