From patchwork Thu Jun 20 22:19:28 2024
X-Patchwork-Submitter: Yan Zhai
X-Patchwork-Id: 13706445
From: Jesper Dangaard Brouer
Date: Thu, 20 Jun 2024 15:19:28 -0700
To: netdev@vger.kernel.org
Cc: Saeed Mahameed, Leon Romanovsky, Tariq Toukan, "David S. Miller",
    Eric Dumazet, Jakub Kicinski, Paolo Abeni, Alexei Starovoitov,
    Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, Yan Zhai,
    Dragos Tatulea, Alexander Lobakin, netdev@vger.kernel.org,
    linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org,
    bpf@vger.kernel.org
Subject: [RFC net-next 7/9] mlx5: move xdp_buff scope one level up
Message-ID: <5b7a761d6efa1be2ace4c12c1681f341a87d8d24.1718919473.git.yan@cloudflare.com>

Move the mlx5e_xdp_buff out of the individual skb_from_cqe helpers and
into their callers, passing it down as an argument, so the buffer stays
in scope after the skb has been built. This is in preparation for the
following patch, which transfers XDP offloading state from the xdp_buff
to the skb at a common point.

Signed-off-by: Jesper Dangaard Brouer
---
 drivers/net/ethernet/mellanox/mlx5/core/en.h  |   6 +-
 .../ethernet/mellanox/mlx5/core/en/xsk/rx.c   |   6 +-
 .../ethernet/mellanox/mlx5/core/en/xsk/rx.h   |   6 +-
 .../net/ethernet/mellanox/mlx5/core/en_rx.c   | 103 +++++++++---------
 4 files changed, 66 insertions(+), 55 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index 6a343a8f162f..3d26f976f692 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -580,14 +580,16 @@ struct mlx5e_mpw_info {
 #define MLX5E_MAX_RX_FRAGS 4
 
 struct mlx5e_rq;
+struct mlx5e_xdp_buff;
 typedef void (*mlx5e_fp_handle_rx_cqe)(struct mlx5e_rq*, struct mlx5_cqe64*);
 typedef struct sk_buff *
 (*mlx5e_fp_skb_from_cqe_mpwrq)(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
                                struct mlx5_cqe64 *cqe, u16 cqe_bcnt,
-                               u32 head_offset, u32 page_idx);
+                               u32 head_offset, u32 page_idx,
+                               struct mlx5e_xdp_buff *mxbuf);
 typedef struct sk_buff *
 (*mlx5e_fp_skb_from_cqe)(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
-                         struct mlx5_cqe64 *cqe, u32 cqe_bcnt);
+                         struct mlx5_cqe64 *cqe, u32 cqe_bcnt, struct mlx5e_xdp_buff *mxbuf);
 typedef bool (*mlx5e_fp_post_rx_wqes)(struct mlx5e_rq *rq);
 typedef void (*mlx5e_fp_dealloc_wqe)(struct mlx5e_rq*, u16);
 typedef void (*mlx5e_fp_shampo_dealloc_hd)(struct mlx5e_rq*, u16, u16, bool);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
index 1b7132fa70de..4dacaa61e106 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
@@ -249,7 +249,8 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
                                                     struct mlx5_cqe64 *cqe,
                                                     u16 cqe_bcnt,
                                                     u32 head_offset,
-                                                    u32 page_idx)
+                                                    u32 page_idx,
+                                                    struct mlx5e_xdp_buff *mxbuf_)
 {
         struct mlx5e_xdp_buff *mxbuf = xsk_buff_to_mxbuf(wi->alloc_units.xsk_buffs[page_idx]);
         struct bpf_prog *prog;
@@ -304,7 +305,8 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
 struct sk_buff *mlx5e_xsk_skb_from_cqe_linear(struct mlx5e_rq *rq,
                                               struct mlx5e_wqe_frag_info *wi,
                                               struct mlx5_cqe64 *cqe,
-                                              u32 cqe_bcnt)
+                                              u32 cqe_bcnt,
+                                              struct mlx5e_xdp_buff *mxbuf_)
 {
         struct mlx5e_xdp_buff *mxbuf = xsk_buff_to_mxbuf(*wi->xskp);
         struct bpf_prog *prog;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.h
index cefc0ef6105d..0890c975042c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.h
@@ -16,10 +16,12 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
                                                     struct mlx5_cqe64 *cqe,
                                                     u16 cqe_bcnt,
                                                     u32 head_offset,
-                                                    u32 page_idx);
+                                                    u32 page_idx,
+                                                    struct mlx5e_xdp_buff *mxbuf_);
 struct sk_buff *mlx5e_xsk_skb_from_cqe_linear(struct mlx5e_rq *rq,
                                               struct mlx5e_wqe_frag_info *wi,
                                               struct mlx5_cqe64 *cqe,
-                                              u32 cqe_bcnt);
+                                              u32 cqe_bcnt,
+                                              struct mlx5e_xdp_buff *mxbuf_);
 
 #endif /* __MLX5_EN_XSK_RX_H__ */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index 225da8d691fc..1a592a1ab988 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -63,11 +63,11 @@ static struct sk_buff *
 mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
                                 struct mlx5_cqe64 *cqe, u16 cqe_bcnt, u32 head_offset,
-                                u32 page_idx);
+                                u32 page_idx, struct mlx5e_xdp_buff *mxbuf);
 static struct sk_buff *
 mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
                                    struct mlx5_cqe64 *cqe, u16 cqe_bcnt, u32 head_offset,
-                                   u32 page_idx);
+                                   u32 page_idx, struct mlx5e_xdp_buff *mxbuf);
 static void mlx5e_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe);
 static void mlx5e_handle_rx_cqe_mpwrq(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe);
 static void mlx5e_handle_rx_cqe_mpwrq_shampo(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe);
@@ -1658,7 +1658,8 @@ static void mlx5e_fill_mxbuf(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe,
 
 static struct sk_buff *
 mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
-                          struct mlx5_cqe64 *cqe, u32 cqe_bcnt)
+                          struct mlx5_cqe64 *cqe, u32 cqe_bcnt,
+                          struct mlx5e_xdp_buff *mxbuf)
 {
         struct mlx5e_frag_page *frag_page = wi->frag_page;
         u16 rx_headroom = rq->buff.headroom;
@@ -1680,17 +1681,15 @@ mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
 
         prog = rcu_dereference(rq->xdp_prog);
         if (prog) {
-                struct mlx5e_xdp_buff mxbuf;
-
                 net_prefetchw(va); /* xdp_frame data area */
                 mlx5e_fill_mxbuf(rq, cqe, va, rx_headroom, rq->buff.frame0_sz,
-                                 cqe_bcnt, &mxbuf);
-                if (mlx5e_xdp_handle(rq, prog, &mxbuf))
+                                 cqe_bcnt, mxbuf);
+                if (mlx5e_xdp_handle(rq, prog, mxbuf))
                         return NULL; /* page/packet was consumed by XDP */
 
-                rx_headroom = mxbuf.xdp.data - mxbuf.xdp.data_hard_start;
-                metasize = mxbuf.xdp.data - mxbuf.xdp.data_meta;
-                cqe_bcnt = mxbuf.xdp.data_end - mxbuf.xdp.data;
+                rx_headroom = mxbuf->xdp.data - mxbuf->xdp.data_hard_start;
+                metasize = mxbuf->xdp.data - mxbuf->xdp.data_meta;
+                cqe_bcnt = mxbuf->xdp.data_end - mxbuf->xdp.data;
         }
         frag_size = MLX5_SKB_FRAG_SZ(rx_headroom + cqe_bcnt);
         skb = mlx5e_build_linear_skb(rq, va, frag_size, rx_headroom, cqe_bcnt, metasize);
@@ -1706,14 +1705,14 @@ mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
 
 static struct sk_buff *
 mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
-                             struct mlx5_cqe64 *cqe, u32 cqe_bcnt)
+                             struct mlx5_cqe64 *cqe, u32 cqe_bcnt,
+                             struct mlx5e_xdp_buff *mxbuf)
 {
         struct mlx5e_rq_frag_info *frag_info = &rq->wqe.info.arr[0];
         struct mlx5e_wqe_frag_info *head_wi = wi;
         u16 rx_headroom = rq->buff.headroom;
         struct mlx5e_frag_page *frag_page;
         struct skb_shared_info *sinfo;
-        struct mlx5e_xdp_buff mxbuf;
         u32 frag_consumed_bytes;
         struct bpf_prog *prog;
         struct sk_buff *skb;
@@ -1733,8 +1732,8 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi
         net_prefetch(va + rx_headroom);
 
         mlx5e_fill_mxbuf(rq, cqe, va, rx_headroom, rq->buff.frame0_sz,
-                         frag_consumed_bytes, &mxbuf);
-        sinfo = xdp_get_shared_info_from_buff(&mxbuf.xdp);
+                         frag_consumed_bytes, mxbuf);
+        sinfo = xdp_get_shared_info_from_buff(&mxbuf->xdp);
         truesize = 0;
 
         cqe_bcnt -= frag_consumed_bytes;
@@ -1746,7 +1745,7 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi
 
                 frag_consumed_bytes = min_t(u32, frag_info->frag_size, cqe_bcnt);
 
-                mlx5e_add_skb_shared_info_frag(rq, sinfo, &mxbuf.xdp, frag_page,
+                mlx5e_add_skb_shared_info_frag(rq, sinfo, &mxbuf->xdp, frag_page,
                                                wi->offset, frag_consumed_bytes);
                 truesize += frag_info->frag_stride;
 
@@ -1756,7 +1755,7 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi
         }
 
         prog = rcu_dereference(rq->xdp_prog);
-        if (prog && mlx5e_xdp_handle(rq, prog, &mxbuf)) {
+        if (prog && mlx5e_xdp_handle(rq, prog, mxbuf)) {
                 if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) {
                         struct mlx5e_wqe_frag_info *pwi;
 
@@ -1766,21 +1765,21 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi
                 return NULL; /* page/packet was consumed by XDP */
         }
 
-        skb = mlx5e_build_linear_skb(rq, mxbuf.xdp.data_hard_start, rq->buff.frame0_sz,
-                                     mxbuf.xdp.data - mxbuf.xdp.data_hard_start,
-                                     mxbuf.xdp.data_end - mxbuf.xdp.data,
-                                     mxbuf.xdp.data - mxbuf.xdp.data_meta);
+        skb = mlx5e_build_linear_skb(rq, mxbuf->xdp.data_hard_start, rq->buff.frame0_sz,
+                                     mxbuf->xdp.data - mxbuf->xdp.data_hard_start,
+                                     mxbuf->xdp.data_end - mxbuf->xdp.data,
+                                     mxbuf->xdp.data - mxbuf->xdp.data_meta);
         if (unlikely(!skb))
                 return NULL;
 
         skb_mark_for_recycle(skb);
         head_wi->frag_page->frags++;
 
-        if (xdp_buff_has_frags(&mxbuf.xdp)) {
+        if (xdp_buff_has_frags(&mxbuf->xdp)) {
                 /* sinfo->nr_frags is reset by build_skb, calculate again. */
                 xdp_update_skb_shared_info(skb, wi - head_wi - 1, sinfo->xdp_frags_size,
                                            truesize,
-                                           xdp_buff_is_frag_pfmemalloc(&mxbuf.xdp));
+                                           xdp_buff_is_frag_pfmemalloc(&mxbuf->xdp));
 
                 for (struct mlx5e_wqe_frag_info *pwi = head_wi + 1; pwi < wi; pwi++)
                         pwi->frag_page->frags++;
@@ -1811,6 +1810,7 @@ static void mlx5e_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
 {
         struct mlx5_wq_cyc *wq = &rq->wqe.wq;
         struct mlx5e_wqe_frag_info *wi;
+        struct mlx5e_xdp_buff mxbuf;
         struct sk_buff *skb;
         u32 cqe_bcnt;
         u16 ci;
@@ -1828,7 +1828,7 @@ static void mlx5e_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
                               mlx5e_skb_from_cqe_linear,
                               mlx5e_skb_from_cqe_nonlinear,
                               mlx5e_xsk_skb_from_cqe_linear,
-                              rq, wi, cqe, cqe_bcnt);
+                              rq, wi, cqe, cqe_bcnt, &mxbuf);
         if (!skb) {
                 /* probably for XDP */
                 if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags))
@@ -1859,6 +1859,7 @@ static void mlx5e_handle_rx_cqe_rep(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
         struct mlx5_eswitch_rep *rep = rpriv->rep;
         struct mlx5_wq_cyc *wq = &rq->wqe.wq;
         struct mlx5e_wqe_frag_info *wi;
+        struct mlx5e_xdp_buff mxbuf;
         struct sk_buff *skb;
         u32 cqe_bcnt;
         u16 ci;
@@ -1875,7 +1876,7 @@ static void mlx5e_handle_rx_cqe_rep(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
         skb = INDIRECT_CALL_2(rq->wqe.skb_from_cqe,
                               mlx5e_skb_from_cqe_linear,
                               mlx5e_skb_from_cqe_nonlinear,
-                              rq, wi, cqe, cqe_bcnt);
+                              rq, wi, cqe, cqe_bcnt, &mxbuf);
         if (!skb) {
                 /* probably for XDP */
                 if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags))
@@ -1903,6 +1904,7 @@ static void mlx5e_handle_rx_cqe_mpwrq_rep(struct mlx5e_rq *rq, struct mlx5_cqe64
         u32 wqe_offset = stride_ix << rq->mpwqe.log_stride_sz;
         u32 head_offset = wqe_offset & ((1 << rq->mpwqe.page_shift) - 1);
         u32 page_idx = wqe_offset >> rq->mpwqe.page_shift;
+        struct mlx5e_xdp_buff mxbuf;
         struct mlx5e_rx_wqe_ll *wqe;
         struct mlx5_wq_ll *wq;
         struct sk_buff *skb;
@@ -1928,7 +1930,7 @@ static void mlx5e_handle_rx_cqe_mpwrq_rep(struct mlx5e_rq *rq, struct mlx5_cqe64
         skb = INDIRECT_CALL_2(rq->mpwqe.skb_from_cqe_mpwrq,
                               mlx5e_skb_from_cqe_mpwrq_linear,
                               mlx5e_skb_from_cqe_mpwrq_nonlinear,
-                              rq, wi, cqe, cqe_bcnt, head_offset, page_idx);
+                              rq, wi, cqe, cqe_bcnt, head_offset, page_idx, &mxbuf);
         if (!skb)
                 goto mpwrq_cqe_out;
 
@@ -1975,7 +1977,7 @@ mlx5e_shampo_fill_skb_data(struct sk_buff *skb, struct mlx5e_rq *rq,
 static struct sk_buff *
 mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
                                    struct mlx5_cqe64 *cqe, u16 cqe_bcnt, u32 head_offset,
-                                   u32 page_idx)
+                                   u32 page_idx, struct mlx5e_xdp_buff *mxbuf)
 {
         struct mlx5e_frag_page *frag_page = &wi->alloc_units.frag_pages[page_idx];
         u16 headlen = min_t(u16, MLX5E_RX_MAX_HEAD, cqe_bcnt);
@@ -1983,7 +1985,6 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
         u32 frag_offset = head_offset;
         u32 byte_cnt = cqe_bcnt;
         struct skb_shared_info *sinfo;
-        struct mlx5e_xdp_buff mxbuf;
         unsigned int truesize = 0;
         struct bpf_prog *prog;
         struct sk_buff *skb;
@@ -2029,9 +2030,9 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
                 }
         }
 
-        mlx5e_fill_mxbuf(rq, cqe, va, linear_hr, linear_frame_sz, linear_data_len, &mxbuf);
+        mlx5e_fill_mxbuf(rq, cqe, va, linear_hr, linear_frame_sz, linear_data_len, mxbuf);
 
-        sinfo = xdp_get_shared_info_from_buff(&mxbuf.xdp);
+        sinfo = xdp_get_shared_info_from_buff(&mxbuf->xdp);
 
         while (byte_cnt) {
                 /* Non-linear mode, hence non-XSK, which always uses PAGE_SIZE. */
@@ -2042,7 +2043,7 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
                 else
                         truesize += ALIGN(pg_consumed_bytes, BIT(rq->mpwqe.log_stride_sz));
 
-                mlx5e_add_skb_shared_info_frag(rq, sinfo, &mxbuf.xdp, frag_page, frag_offset,
+                mlx5e_add_skb_shared_info_frag(rq, sinfo, &mxbuf->xdp, frag_page, frag_offset,
                                                pg_consumed_bytes);
                 byte_cnt -= pg_consumed_bytes;
                 frag_offset = 0;
@@ -2050,7 +2051,7 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
         }
 
         if (prog) {
-                if (mlx5e_xdp_handle(rq, prog, &mxbuf)) {
+                if (mlx5e_xdp_handle(rq, prog, mxbuf)) {
                         if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) {
                                 struct mlx5e_frag_page *pfp;
 
@@ -2063,10 +2064,10 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
                 return NULL; /* page/packet was consumed by XDP */
         }
 
-        skb = mlx5e_build_linear_skb(rq, mxbuf.xdp.data_hard_start,
+        skb = mlx5e_build_linear_skb(rq, mxbuf->xdp.data_hard_start,
                                      linear_frame_sz,
-                                     mxbuf.xdp.data - mxbuf.xdp.data_hard_start, 0,
-                                     mxbuf.xdp.data - mxbuf.xdp.data_meta);
+                                     mxbuf->xdp.data - mxbuf->xdp.data_hard_start, 0,
+                                     mxbuf->xdp.data - mxbuf->xdp.data_meta);
         if (unlikely(!skb)) {
                 mlx5e_page_release_fragmented(rq, &wi->linear_page);
                 return NULL;
@@ -2076,13 +2077,13 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
         wi->linear_page.frags++;
         mlx5e_page_release_fragmented(rq, &wi->linear_page);
 
-        if (xdp_buff_has_frags(&mxbuf.xdp)) {
+        if (xdp_buff_has_frags(&mxbuf->xdp)) {
                 struct mlx5e_frag_page *pagep;
 
                 /* sinfo->nr_frags is reset by build_skb, calculate again. */
                 xdp_update_skb_shared_info(skb, frag_page - head_page,
                                            sinfo->xdp_frags_size, truesize,
-                                           xdp_buff_is_frag_pfmemalloc(&mxbuf.xdp));
+                                           xdp_buff_is_frag_pfmemalloc(&mxbuf->xdp));
 
                 pagep = head_page;
                 do
@@ -2093,12 +2094,12 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
         } else {
                 dma_addr_t addr;
 
-                if (xdp_buff_has_frags(&mxbuf.xdp)) {
+                if (xdp_buff_has_frags(&mxbuf->xdp)) {
                         struct mlx5e_frag_page *pagep;
 
                         xdp_update_skb_shared_info(skb, sinfo->nr_frags,
                                                    sinfo->xdp_frags_size, truesize,
-                                                   xdp_buff_is_frag_pfmemalloc(&mxbuf.xdp));
+                                                   xdp_buff_is_frag_pfmemalloc(&mxbuf->xdp));
 
                         pagep = frag_page - sinfo->nr_frags;
                         do
@@ -2120,7 +2121,7 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
 static struct sk_buff *
 mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
                                 struct mlx5_cqe64 *cqe, u16 cqe_bcnt, u32 head_offset,
-                                u32 page_idx)
+                                u32 page_idx, struct mlx5e_xdp_buff *mxbuf)
 {
         struct mlx5e_frag_page *frag_page = &wi->alloc_units.frag_pages[page_idx];
         u16 rx_headroom = rq->buff.headroom;
@@ -2148,20 +2149,19 @@ mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
         prog = rcu_dereference(rq->xdp_prog);
         if (prog) {
-                struct mlx5e_xdp_buff mxbuf;
 
                 net_prefetchw(va); /* xdp_frame data area */
                 mlx5e_fill_mxbuf(rq, cqe, va, rx_headroom, rq->buff.frame0_sz,
-                                 cqe_bcnt, &mxbuf);
-                if (mlx5e_xdp_handle(rq, prog, &mxbuf)) {
+                                 cqe_bcnt, mxbuf);
+                if (mlx5e_xdp_handle(rq, prog, mxbuf)) {
                         if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags))
                                 frag_page->frags++;
 
                         return NULL; /* page/packet was consumed by XDP */
                 }
 
-                rx_headroom = mxbuf.xdp.data - mxbuf.xdp.data_hard_start;
-                metasize = mxbuf.xdp.data - mxbuf.xdp.data_meta;
-                cqe_bcnt = mxbuf.xdp.data_end - mxbuf.xdp.data;
+                rx_headroom = mxbuf->xdp.data - mxbuf->xdp.data_hard_start;
+                metasize = mxbuf->xdp.data - mxbuf->xdp.data_meta;
+                cqe_bcnt = mxbuf->xdp.data_end - mxbuf->xdp.data;
         }
         frag_size = MLX5_SKB_FRAG_SZ(rx_headroom + cqe_bcnt);
         skb = mlx5e_build_linear_skb(rq, va, frag_size, rx_headroom, cqe_bcnt, metasize);
@@ -2283,12 +2283,14 @@ static void mlx5e_handle_rx_cqe_mpwrq_shampo(struct mlx5e_rq *rq, struct mlx5_cq
         bool flush = cqe->shampo.flush;
         bool match = cqe->shampo.match;
         struct mlx5e_rq_stats *stats = rq->stats;
+        struct mlx5e_xdp_buff mxbuf;
         struct mlx5e_rx_wqe_ll *wqe;
         struct mlx5e_mpw_info *wi;
         struct mlx5_wq_ll *wq;
 
         wi = mlx5e_get_mpw_info(rq, wqe_id);
         wi->consumed_strides += cstrides;
+        mxbuf.xdp.flags = 0;
 
         if (unlikely(MLX5E_RX_ERR_CQE(cqe))) {
                 mlx5e_handle_rx_err_cqe(rq, cqe);
@@ -2311,7 +2313,7 @@ static void mlx5e_handle_rx_cqe_mpwrq_shampo(struct mlx5e_rq *rq, struct mlx5_cq
                         *skb = mlx5e_skb_from_cqe_shampo(rq, wi, cqe, header_index);
                 else
                         *skb = mlx5e_skb_from_cqe_mpwrq_nonlinear(rq, wi, cqe, cqe_bcnt,
-                                                                  data_offset, page_idx);
+                                                                  data_offset, page_idx, &mxbuf);
                 if (unlikely(!*skb))
                         goto free_hd_entry;
 
@@ -2369,6 +2371,7 @@ static void mlx5e_handle_rx_cqe_mpwrq(struct mlx5e_rq *rq, struct mlx5_cqe64 *cq
         u32 wqe_offset = stride_ix << rq->mpwqe.log_stride_sz;
         u32 head_offset = wqe_offset & ((1 << rq->mpwqe.page_shift) - 1);
         u32 page_idx = wqe_offset >> rq->mpwqe.page_shift;
+        struct mlx5e_xdp_buff mxbuf;
         struct mlx5e_rx_wqe_ll *wqe;
         struct mlx5_wq_ll *wq;
         struct sk_buff *skb;
@@ -2396,7 +2399,7 @@ static void mlx5e_handle_rx_cqe_mpwrq(struct mlx5e_rq *rq, struct mlx5_cqe64 *cq
                               mlx5e_skb_from_cqe_mpwrq_nonlinear,
                               mlx5e_xsk_skb_from_cqe_mpwrq_linear,
                               rq, wi, cqe, cqe_bcnt, head_offset,
-                              page_idx);
+                              page_idx, &mxbuf);
         if (!skb)
                 goto mpwrq_cqe_out;
 
@@ -2624,6 +2627,7 @@ static void mlx5i_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
 {
         struct mlx5_wq_cyc *wq = &rq->wqe.wq;
         struct mlx5e_wqe_frag_info *wi;
+        struct mlx5e_xdp_buff mxbuf;
         struct sk_buff *skb;
         u32 cqe_bcnt;
         u16 ci;
@@ -2640,7 +2644,7 @@ static void mlx5i_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
         skb = INDIRECT_CALL_2(rq->wqe.skb_from_cqe,
                               mlx5e_skb_from_cqe_linear,
                               mlx5e_skb_from_cqe_nonlinear,
-                              rq, wi, cqe, cqe_bcnt);
+                              rq, wi, cqe, cqe_bcnt, &mxbuf);
         if (!skb)
                 goto wq_cyc_pop;
 
@@ -2714,6 +2718,7 @@ static void mlx5e_trap_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe
 {
         struct mlx5_wq_cyc *wq = &rq->wqe.wq;
         struct mlx5e_wqe_frag_info *wi;
+        struct mlx5e_xdp_buff mxbuf;
         struct sk_buff *skb;
         u32 cqe_bcnt;
         u16 trap_id;
@@ -2729,7 +2734,7 @@ static void mlx5e_trap_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe
                 goto wq_cyc_pop;
         }
 
-        skb = mlx5e_skb_from_cqe_nonlinear(rq, wi, cqe, cqe_bcnt);
+        skb = mlx5e_skb_from_cqe_nonlinear(rq, wi, cqe, cqe_bcnt, &mxbuf);
         if (!skb)
                 goto wq_cyc_pop;
 
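The whole patch applies a single pattern: a struct mlx5e_xdp_buff that
used to be a stack local inside each skb_from_cqe helper is hoisted into
the CQE handler one level up and passed down by pointer, so whatever the
XDP run wrote into it can still be read after the helper returns. A
minimal standalone sketch of that pattern, with hypothetical names
rather than the mlx5 code itself:

  #include <stdio.h>

  struct xdp_state { unsigned int flags; };  /* stands in for mlx5e_xdp_buff */

  /* After the change: the helper fills caller-owned state through a
   * pointer instead of a local variable that dies on return. */
  static void build_skb_from_cqe(struct xdp_state *mxbuf)
  {
          mxbuf->flags |= 0x1;  /* e.g. state recorded while running XDP */
  }

  int main(void)
  {
          struct xdp_state mxbuf = { 0 };  /* scope moved one level up */

          build_skb_from_cqe(&mxbuf);
          /* the handler can now transfer mxbuf.flags onto the skb,
           * which is what the next patch in the series does */
          printf("flags after helper: %#x\n", mxbuf.flags);
          return 0;
  }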
From patchwork Thu Jun 20 22:19:31 2024
X-Patchwork-Submitter: Yan Zhai
X-Patchwork-Id: 13706446
Date: Thu, 20 Jun 2024 15:19:31 -0700
From: Yan Zhai
To: netdev@vger.kernel.org
Cc: Saeed Mahameed, Leon Romanovsky, Tariq Toukan, "David S. Miller",
    Eric Dumazet, Jakub Kicinski, Paolo Abeni, Alexei Starovoitov,
    Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
    Alexander Lobakin, Yan Zhai, Dragos Tatulea, netdev@vger.kernel.org,
    linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org,
    bpf@vger.kernel.org
Subject: [RFC net-next 8/9] mlx5: apply XDP offloading fixup when building skb
Message-ID: <17595a278ee72964b83c0bd0b502152aa025f600.1718919473.git.yan@cloudflare.com>

Add a common point to transfer offloading info from XDP context to skb.

Signed-off-by: Yan Zhai
Signed-off-by: Jesper Dangaard Brouer
---
 .../net/ethernet/mellanox/mlx5/core/en/xsk/rx.c |  8 ++++++--
 drivers/net/ethernet/mellanox/mlx5/core/en_rx.c | 14 ++++++++++++++
 2 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
index 4dacaa61e106..9bf49ff2e0dd 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
@@ -250,7 +250,7 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
                                                     u16 cqe_bcnt,
                                                     u32 head_offset,
                                                     u32 page_idx,
-                                                    struct mlx5e_xdp_buff *mxbuf_)
+                                                    struct mlx5e_xdp_buff *mxbuf_caller)
 {
         struct mlx5e_xdp_buff *mxbuf = xsk_buff_to_mxbuf(wi->alloc_units.xsk_buffs[page_idx]);
         struct bpf_prog *prog;
@@ -270,6 +270,7 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
 
         /* mxbuf->rq is set on allocation, but cqe is per-packet so set it here */
         mxbuf->cqe = cqe;
+        xdp_init_buff_minimal(&mxbuf->xdp);
         xsk_buff_set_size(&mxbuf->xdp, cqe_bcnt);
         xsk_buff_dma_sync_for_cpu(&mxbuf->xdp);
         net_prefetch(mxbuf->xdp.data);
@@ -295,6 +296,7 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
                 __set_bit(page_idx, wi->skip_release_bitmap); /* non-atomic */
                 return NULL; /* page/packet was consumed by XDP */
         }
+        mxbuf_caller->xdp.flags = mxbuf->xdp.flags;
 
         /* XDP_PASS: copy the data from the UMEM to a new SKB and reuse the
          * frame. On SKB allocation failure, NULL is returned.
@@ -306,7 +308,7 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_linear(struct mlx5e_rq *rq,
                                               struct mlx5e_wqe_frag_info *wi,
                                               struct mlx5_cqe64 *cqe,
                                               u32 cqe_bcnt,
-                                              struct mlx5e_xdp_buff *mxbuf_)
+                                              struct mlx5e_xdp_buff *mxbuf_caller)
 {
         struct mlx5e_xdp_buff *mxbuf = xsk_buff_to_mxbuf(*wi->xskp);
         struct bpf_prog *prog;
@@ -320,6 +322,7 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_linear(struct mlx5e_rq *rq,
 
         /* mxbuf->rq is set on allocation, but cqe is per-packet so set it here */
         mxbuf->cqe = cqe;
+        xdp_init_buff_minimal(&mxbuf->xdp);
         xsk_buff_set_size(&mxbuf->xdp, cqe_bcnt);
         xsk_buff_dma_sync_for_cpu(&mxbuf->xdp);
         net_prefetch(mxbuf->xdp.data);
@@ -330,6 +333,7 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_linear(struct mlx5e_rq *rq,
                 wi->flags |= BIT(MLX5E_WQE_FRAG_SKIP_RELEASE);
                 return NULL; /* page/packet was consumed by XDP */
         }
+        mxbuf_caller->xdp.flags = mxbuf->xdp.flags;
 
         /* XDP_PASS: copy the data from the UMEM to a new SKB. The frame reuse
          * will be handled by mlx5e_free_rx_wqe.
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index 1a592a1ab988..0a47889e281e 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -1670,6 +1670,8 @@ mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
         dma_addr_t addr;
         u32 frag_size;
 
+        xdp_init_buff_minimal(&mxbuf->xdp);
+
         va = page_address(frag_page->page) + wi->offset;
         data = va + rx_headroom;
         frag_size = MLX5_SKB_FRAG_SZ(rx_headroom + cqe_bcnt);
@@ -1721,6 +1723,7 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi
         void *va;
 
         frag_page = wi->frag_page;
+        xdp_init_buff_minimal(&mxbuf->xdp);
 
         va = page_address(frag_page->page) + wi->offset;
         frag_consumed_bytes = min_t(u32, frag_info->frag_size, cqe_bcnt);
@@ -1837,6 +1840,7 @@ static void mlx5e_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
         }
 
         mlx5e_complete_rx_cqe(rq, cqe, cqe_bcnt, skb);
+        xdp_buff_fixup_skb_offloading(&mxbuf.xdp, skb);
 
         if (mlx5e_cqe_regb_chain(cqe))
                 if (!mlx5e_tc_update_skb_nic(cqe, skb)) {
@@ -1885,6 +1889,7 @@ static void mlx5e_handle_rx_cqe_rep(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
         }
 
         mlx5e_complete_rx_cqe(rq, cqe, cqe_bcnt, skb);
+        xdp_buff_fixup_skb_offloading(&mxbuf.xdp, skb);
 
         if (rep->vlan && skb_vlan_tag_present(skb))
                 skb_vlan_pop(skb);
@@ -1935,6 +1940,7 @@ static void mlx5e_handle_rx_cqe_mpwrq_rep(struct mlx5e_rq *rq, struct mlx5_cqe64
                 goto mpwrq_cqe_out;
 
         mlx5e_complete_rx_cqe(rq, cqe, cqe_bcnt, skb);
+        xdp_buff_fixup_skb_offloading(&mxbuf.xdp, skb);
 
         mlx5e_rep_tc_receive(cqe, rq, skb);
 
@@ -2138,6 +2144,8 @@ mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
                 return NULL;
         }
 
+        xdp_init_buff_minimal(&mxbuf->xdp);
+
         va = page_address(frag_page->page) + head_offset;
         data = va + rx_headroom;
         frag_size = MLX5_SKB_FRAG_SZ(rx_headroom + cqe_bcnt);
@@ -2345,6 +2353,8 @@ static void mlx5e_handle_rx_cqe_mpwrq_shampo(struct mlx5e_rq *rq, struct mlx5_cq
         }
 
         mlx5e_shampo_complete_rx_cqe(rq, cqe, cqe_bcnt, *skb);
+        xdp_buff_fixup_skb_offloading(&mxbuf.xdp, *skb);
+
         if (flush && rq->hw_gro_data->skb)
                 mlx5e_shampo_flush_skb(rq, cqe, match);
 free_hd_entry:
@@ -2404,6 +2414,7 @@ static void mlx5e_handle_rx_cqe_mpwrq(struct mlx5e_rq *rq, struct mlx5_cqe64 *cq
                 goto mpwrq_cqe_out;
 
         mlx5e_complete_rx_cqe(rq, cqe, cqe_bcnt, skb);
+        xdp_buff_fixup_skb_offloading(&mxbuf.xdp, skb);
 
         if (mlx5e_cqe_regb_chain(cqe))
                 if (!mlx5e_tc_update_skb_nic(cqe, skb)) {
@@ -2649,6 +2660,8 @@ static void mlx5i_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
                 goto wq_cyc_pop;
 
         mlx5i_complete_rx_cqe(rq, cqe, cqe_bcnt, skb);
+        xdp_buff_fixup_skb_offloading(&mxbuf.xdp, skb);
+
         if (unlikely(!skb->dev)) {
                 dev_kfree_skb_any(skb);
                 goto wq_cyc_pop;
@@ -2740,6 +2753,7 @@ static void mlx5e_trap_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe
 
         mlx5e_complete_rx_cqe(rq, cqe, cqe_bcnt, skb);
         skb_push(skb, ETH_HLEN);
+        xdp_buff_fixup_skb_offloading(&mxbuf.xdp, skb);
 
         mlx5_devlink_trap_report(rq->mdev, trap_id, skb, rq->netdev->devlink_port);
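Neither xdp_init_buff_minimal() nor xdp_buff_fixup_skb_offloading() is
defined in this patch; they come from an earlier patch in the series that
is not quoted here. Based on how they are used above, a plausible sketch
of their shape (the flag name and helper bodies are assumptions, not the
series' actual definitions):

  #include <linux/skbuff.h>
  #include <net/xdp.h>

  /* Assumed flag: an XDP kfunc records "checksum already verified" on
   * the xdp_buff; the real series defines its own flag bits. */
  #define XDP_FLAGS_CSUM_UNNECESSARY BIT(8)

  /* Clear any offloading state left over from a previous packet. */
  static inline void xdp_init_buff_minimal_sketch(struct xdp_buff *xdp)
  {
          xdp->flags = 0;
  }

  /* Replay offloading state recorded during the XDP run onto the skb
   * built from the same buffer, so the stack neither redoes nor wrongly
   * skips the work. */
  static inline void xdp_buff_fixup_skb_offloading_sketch(struct xdp_buff *xdp,
                                                          struct sk_buff *skb)
  {
          if (!skb)
                  return;
          if (xdp->flags & XDP_FLAGS_CSUM_UNNECESSARY)
                  skb->ip_summed = CHECKSUM_UNNECESSARY;
  }

This reading also explains the two call sites per RX path: the init runs
where the xdp_buff is (re)used inside the skb_from_cqe helpers, while the
fixup runs in each CQE handler right after mlx5e_complete_rx_cqe(), the
common point the commit message refers to.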