From patchwork Thu Jun 20 22:19:10 2024
X-Patchwork-Submitter: Yan Zhai
X-Patchwork-Id: 13706435
Date: Thu, 20 Jun 2024 15:19:10 -0700
From: Yan Zhai
To: netdev@vger.kernel.org
Cc: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
 Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
 John Fastabend, Willem de Bruijn, Simon Horman, Florian Westphal,
 Mina Almasry, Abhishek Chauhan, David Howells, Alexander Lobakin,
 David Ahern, Richard Gobert, Antoine Tenart, Yan Zhai, Felix Fietkau,
 Soheil Hassas Yeganeh, Pavel Begunkov, Lorenzo Bianconi,
 Thomas Weißschuh, netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
 bpf@vger.kernel.org
Subject: [RFC net-next 1/9] skb: introduce gro_disabled bit

Software GRO is currently controlled by a single per-device switch:

    ethtool -K dev gro on|off

However, this is not always desired. When GRO is enabled, even traffic
that the kernel cannot actually aggregate still has to run through the
GRO receive handlers, with no benefit.

There are also scenarios where turning off GRO is a hard requirement.
For example, our production environment has a TC egress hook,
implemented in BPF, that may add multiple encapsulation headers to
forwarded skbs for load balancing and isolation purposes. The problem
is that a double-encapsulated packet cannot be offloaded properly: the
skb only has network_header and inner_network_header to track a single
layer of encapsulation, not two. On the other hand, not all traffic
through these devices needs double encapsulation, yet today we have to
turn off GRO completely on every ingress device as a result.

Introduce a bit on the skb so that the GRO engine can be told to skip
GRO for an individual skb, rather than GRO being all-or-nothing for the
device.
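
To make the intended use concrete, here is a minimal sketch of a
receive-path caller. skb_disable_gro() and the netif_elide_gro() check
are what this patch introduces; example_rx() and its needs_double_encap
decision are hypothetical, not part of the series:

    /* Hypothetical receive path: mark packets that must not be
     * aggregated before handing them to the GRO engine.
     * dev_gro_receive() then bails out via netif_elide_gro(skb).
     */
    static void example_rx(struct napi_struct *napi, struct sk_buff *skb,
                           bool needs_double_encap)
    {
            if (needs_double_encap)
                    skb_disable_gro(skb);   /* sets skb->gro_disabled = 1 */

            napi_gro_receive(napi, skb);    /* GRO elided for flagged skbs */
    }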
Signed-off-by: Yan Zhai
---
 include/linux/netdevice.h | 9 +++++++--
 include/linux/skbuff.h    | 10 ++++++++++
 net/Kconfig               | 10 ++++++++++
 net/core/gro.c            | 2 +-
 net/core/gro_cells.c      | 2 +-
 net/core/skbuff.c         | 4 ++++
 6 files changed, 33 insertions(+), 4 deletions(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index c83b390191d4..2ca0870b1221 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -2415,11 +2415,16 @@ struct net_device {
 	((dev)->devlink_port = (port));	\
 })
 
-static inline bool netif_elide_gro(const struct net_device *dev)
+static inline bool netif_elide_gro(const struct sk_buff *skb)
 {
-	if (!(dev->features & NETIF_F_GRO) || dev->xdp_prog)
+	if (!(skb->dev->features & NETIF_F_GRO) || skb->dev->xdp_prog)
 		return true;
+
+#ifdef CONFIG_SKB_GRO_CONTROL
+	return skb->gro_disabled;
+#else
 	return false;
+#endif
 }
 
 #define NETDEV_ALIGN		32
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index f4cda3fbdb75..48b10ece95b5 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -1008,6 +1008,9 @@ struct sk_buff {
 #if IS_ENABLED(CONFIG_IP_SCTP)
 	__u8			csum_not_inet:1;
 #endif
+#ifdef CONFIG_SKB_GRO_CONTROL
+	__u8			gro_disabled:1;
+#endif
 
 #if defined(CONFIG_NET_SCHED) || defined(CONFIG_NET_XGRESS)
 	__u16			tc_index;	/* traffic control index */
@@ -1215,6 +1218,13 @@ static inline bool skb_wifi_acked_valid(const struct sk_buff *skb)
 #endif
 }
 
+static inline void skb_disable_gro(struct sk_buff *skb)
+{
+#ifdef CONFIG_SKB_GRO_CONTROL
+	skb->gro_disabled = 1;
+#endif
+}
+
 /**
  * skb_unref - decrement the skb's reference count
  * @skb: buffer
diff --git a/net/Kconfig b/net/Kconfig
index 9fe65fa26e48..47d1ee92df15 100644
--- a/net/Kconfig
+++ b/net/Kconfig
@@ -289,6 +289,16 @@ config MAX_SKB_FRAGS
 	  and in drivers using build_skb().
 	  If unsure, say 17.
 
+config SKB_GRO_CONTROL
+	bool "allow disabling GRO on a per-packet basis"
+	default y
+	help
+	  By default GRO can only be enabled or disabled per network device.
+	  This can be cumbersome for certain scenarios.
+	  Toggling this option will allow disabling GRO for selected packets,
+	  e.g. by XDP programs, so that it is more flexible.
+	  Extra overhead should be minimal.
+
 config RPS
 	bool "Receive packet steering"
 	depends on SMP && SYSFS
diff --git a/net/core/gro.c b/net/core/gro.c
index b3b43de1a650..46232a0d1983 100644
--- a/net/core/gro.c
+++ b/net/core/gro.c
@@ -476,7 +476,7 @@ static enum gro_result dev_gro_receive(struct napi_struct *napi, struct sk_buff
 	enum gro_result ret;
 	int same_flow;
 
-	if (netif_elide_gro(skb->dev))
+	if (netif_elide_gro(skb))
 		goto normal;
 
 	gro_list_prepare(&gro_list->list, skb);
diff --git a/net/core/gro_cells.c b/net/core/gro_cells.c
index ff8e5b64bf6b..1bf15783300f 100644
--- a/net/core/gro_cells.c
+++ b/net/core/gro_cells.c
@@ -20,7 +20,7 @@ int gro_cells_receive(struct gro_cells *gcells, struct sk_buff *skb)
 	if (unlikely(!(dev->flags & IFF_UP)))
 		goto drop;
 
-	if (!gcells->cells || skb_cloned(skb) || netif_elide_gro(dev)) {
+	if (!gcells->cells || skb_cloned(skb) || netif_elide_gro(skb)) {
 		res = netif_rx(skb);
 		goto unlock;
 	}
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 2315c088e91d..82bd297921c1 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -6030,6 +6030,10 @@ void skb_scrub_packet(struct sk_buff *skb, bool xnet)
 	ipvs_reset(skb);
 	skb->mark = 0;
 	skb_clear_tstamp(skb);
+#ifdef CONFIG_SKB_GRO_CONTROL
+	/* hand back GRO control to next netns */
+	skb->gro_disabled = 0;
+#endif
 }
 EXPORT_SYMBOL_GPL(skb_scrub_packet);

From patchwork Thu Jun 20 22:19:13 2024
X-Patchwork-Submitter: Yan Zhai
X-Patchwork-Id: 13706436
Date: Thu, 20 Jun 2024 15:19:13 -0700
From: Yan Zhai
To: netdev@vger.kernel.org
Cc: Alexei Starovoitov, Daniel Borkmann, "David S. Miller", Jakub Kicinski,
 Jesper Dangaard Brouer, John Fastabend, Eric Dumazet, Paolo Abeni,
 netdev@vger.kernel.org, bpf@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [RFC net-next 2/9] xdp: add XDP_FLAGS_GRO_DISABLED flag
Message-ID: <39f5cbdfaa67e3319bce64ee8e4e5585b9e0c11e.1718919473.git.yan@cloudflare.com>

Allow an XDP program to set the XDP_FLAGS_GRO_DISABLED flag on an
xdp_buff or xdp_frame, and disable GRO when building an sk_buff if
this flag is set.
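
As a rough sketch of the intended driver-side pattern (the real call
sites land in later patches of this series; example_build_skb() below
is a hypothetical stand-in for a driver's skb construction):

    /* Sketch only: reset flags before the XDP program runs, then
     * transfer XDP_FLAGS_GRO_DISABLED onto the skb that is built.
     */
    static struct sk_buff *example_rx_to_skb(struct xdp_buff *xdp)
    {
            struct sk_buff *skb;

            xdp_init_buff_minimal(xdp);     /* xdp->flags = 0 */

            /* ... run XDP program; it may set XDP_FLAGS_GRO_DISABLED ... */

            skb = example_build_skb(xdp);   /* hypothetical */
            if (skb)
                    xdp_buff_fixup_skb_offloading(xdp, skb);
            return skb;
    }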
Signed-off-by: Yan Zhai
---
 include/net/xdp.h | 38 +++++++++++++++++++++++++++++++++++++-
 1 file changed, 37 insertions(+), 1 deletion(-)

diff --git a/include/net/xdp.h b/include/net/xdp.h
index e6770dd40c91..cc3bce8817b0 100644
--- a/include/net/xdp.h
+++ b/include/net/xdp.h
@@ -75,6 +75,7 @@ enum xdp_buff_flags {
 	XDP_FLAGS_FRAGS_PF_MEMALLOC	= BIT(1), /* xdp paged memory is under
 						   * pressure
 						   */
+	XDP_FLAGS_GRO_DISABLED		= BIT(2), /* GRO disabled */
 };
 
 struct xdp_buff {
@@ -113,12 +114,35 @@ static __always_inline void xdp_buff_set_frag_pfmemalloc(struct xdp_buff *xdp)
 	xdp->flags |= XDP_FLAGS_FRAGS_PF_MEMALLOC;
 }
 
+static __always_inline void xdp_buff_disable_gro(struct xdp_buff *xdp)
+{
+	xdp->flags |= XDP_FLAGS_GRO_DISABLED;
+}
+
+static __always_inline bool xdp_buff_gro_disabled(struct xdp_buff *xdp)
+{
+	return !!(xdp->flags & XDP_FLAGS_GRO_DISABLED);
+}
+
+static __always_inline void
+xdp_buff_fixup_skb_offloading(struct xdp_buff *xdp, struct sk_buff *skb)
+{
+	if (xdp_buff_gro_disabled(xdp))
+		skb_disable_gro(skb);
+}
+
+static __always_inline void
+xdp_init_buff_minimal(struct xdp_buff *xdp)
+{
+	xdp->flags = 0;
+}
+
 static __always_inline void
 xdp_init_buff(struct xdp_buff *xdp, u32 frame_sz, struct xdp_rxq_info *rxq)
 {
 	xdp->frame_sz = frame_sz;
 	xdp->rxq = rxq;
-	xdp->flags = 0;
+	xdp_init_buff_minimal(xdp);
 }
 
 static __always_inline void
@@ -187,6 +211,18 @@ static __always_inline bool xdp_frame_is_frag_pfmemalloc(struct xdp_frame *frame
 	return !!(frame->flags & XDP_FLAGS_FRAGS_PF_MEMALLOC);
 }
 
+static __always_inline bool xdp_frame_gro_disabled(struct xdp_frame *frame)
+{
+	return !!(frame->flags & XDP_FLAGS_GRO_DISABLED);
+}
+
+static __always_inline void
+xdp_frame_fixup_skb_offloading(struct xdp_frame *frame, struct sk_buff *skb)
+{
+	if (xdp_frame_gro_disabled(frame))
+		skb_disable_gro(skb);
+}
+
 #define XDP_BULK_QUEUE_SIZE	16
 struct xdp_frame_bulk {
 	int count;

From patchwork Thu Jun 20 22:19:16 2024
X-Patchwork-Submitter: Yan Zhai
X-Patchwork-Id: 13706437
Date: Thu, 20 Jun 2024 15:19:16 -0700
From: Yan Zhai
To: netdev@vger.kernel.org
Cc: Alexei Starovoitov, Daniel Borkmann, "David S. Miller", Jakub Kicinski,
 Jesper Dangaard Brouer, John Fastabend, Eric Dumazet, Paolo Abeni,
 netdev@vger.kernel.org, bpf@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [RFC net-next 3/9] xdp: implement bpf_xdp_disable_gro kfunc

Add a kfunc, bpf_xdp_disable_gro, that sets the XDP_FLAGS_GRO_DISABLED
flag on the underlying xdp_buff.
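
For reference, an XDP program would invoke the kfunc roughly as follows
(a minimal sketch; the extern/__ksym declaration follows the usual
kfunc convention, and any flow-selection logic is omitted):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    extern int bpf_xdp_disable_gro(struct xdp_md *ctx) __ksym;

    SEC("xdp")
    int skip_gro(struct xdp_md *ctx)
    {
            /* e.g. after deciding this flow gets double-encapsulated later */
            bpf_xdp_disable_gro(ctx);
            return XDP_PASS;
    }

    char _license[] SEC("license") = "GPL";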
Signed-off-by: Yan Zhai
---
 net/core/xdp.c | 27 ++++++++++++++++++++++++++-
 1 file changed, 26 insertions(+), 1 deletion(-)

diff --git a/net/core/xdp.c b/net/core/xdp.c
index 41693154e426..d6e5f98a0081 100644
--- a/net/core/xdp.c
+++ b/net/core/xdp.c
@@ -770,6 +770,20 @@ __bpf_kfunc int bpf_xdp_metadata_rx_vlan_tag(const struct xdp_md *ctx,
 	return -EOPNOTSUPP;
 }
 
+/**
+ * bpf_xdp_disable_gro - Set a flag on the underlying XDP buffer, telling
+ * the stack to skip this packet for GRO processing. The flag will be
+ * passed on when the driver builds the skb.
+ *
+ * Return:
+ * * always returns 0
+ */
+__bpf_kfunc int bpf_xdp_disable_gro(struct xdp_md *ctx)
+{
+	xdp_buff_disable_gro((struct xdp_buff *)ctx);
+	return 0;
+}
+
 __bpf_kfunc_end_defs();
 
 BTF_KFUNCS_START(xdp_metadata_kfunc_ids)
@@ -799,9 +813,20 @@ bool bpf_dev_bound_kfunc_id(u32 btf_id)
 	return btf_id_set8_contains(&xdp_metadata_kfunc_ids, btf_id);
 }
 
+BTF_KFUNCS_START(xdp_common_kfunc_ids)
+BTF_ID_FLAGS(func, bpf_xdp_disable_gro, KF_TRUSTED_ARGS)
+BTF_KFUNCS_END(xdp_common_kfunc_ids)
+
+static const struct btf_kfunc_id_set xdp_common_kfunc_set = {
+	.owner = THIS_MODULE,
+	.set   = &xdp_common_kfunc_ids,
+};
+
 static int __init xdp_metadata_init(void)
 {
-	return register_btf_kfunc_id_set(BPF_PROG_TYPE_XDP, &xdp_metadata_kfunc_set);
+	int ret = register_btf_kfunc_id_set(BPF_PROG_TYPE_XDP, &xdp_metadata_kfunc_set);
+
+	return ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_XDP, &xdp_common_kfunc_set);
 }
 late_initcall(xdp_metadata_init);

From patchwork Thu Jun 20 22:19:19 2024
X-Patchwork-Submitter: Yan Zhai
X-Patchwork-Id: 13706438
Date: Thu, 20 Jun 2024 15:19:19 -0700
From: Yan Zhai
To: netdev@vger.kernel.org
Cc: Michael Chan, "David S. Miller", Eric Dumazet, Jakub Kicinski,
 Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
 Jesper Dangaard Brouer, John Fastabend, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org, bpf@vger.kernel.org
Subject: [RFC net-next 4/9] bnxt: apply XDP offloading fixup when building skb

Add a common point to transfer offloading info from XDP context to skb.
Signed-off-by: Yan Zhai
Signed-off-by: Jesper Dangaard Brouer
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 7dc00c0d8992..0036c4752f0d 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -2252,6 +2252,10 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
 			}
 		}
 	}
+
+	if (xdp_active)
+		xdp_buff_fixup_skb_offloading(&xdp, skb);
+
 	bnxt_deliver_skb(bp, bnapi, skb);
 	rc = 1;

From patchwork Thu Jun 20 22:19:22 2024
X-Patchwork-Submitter: Yan Zhai
X-Patchwork-Id: 13706439
Date: Thu, 20 Jun 2024 15:19:22 -0700
From: Yan Zhai
To: netdev@vger.kernel.org
Cc: Jesse Brandeburg, Tony Nguyen, "David S. Miller", Eric Dumazet,
 Jakub Kicinski, Paolo Abeni, Björn Töpel, Magnus Karlsson,
 Maciej Fijalkowski, Jonathan Lemon, Alexei Starovoitov,
 Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
 intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org, bpf@vger.kernel.org
Subject: [RFC net-next 5/9] ice: apply XDP offloading fixup when building skb

Add a common point to transfer offloading info from XDP context to skb.
Signed-off-by: Yan Zhai
Signed-off-by: Jesper Dangaard Brouer
---
 drivers/net/ethernet/intel/ice/ice_txrx.c | 2 ++
 drivers/net/ethernet/intel/ice/ice_xsk.c  | 6 +++++-
 include/net/xdp_sock_drv.h                | 2 +-
 3 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index 8bb743f78fcb..a247306837ed 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -1222,6 +1222,7 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
 		hard_start = page_address(rx_buf->page) + rx_buf->page_offset -
 			     offset;
+		xdp_init_buff_minimal(xdp);
 		xdp_prepare_buff(xdp, hard_start, offset, size, !!offset);
 #if (PAGE_SIZE > 4096)
 		/* At larger PAGE_SIZE, frame_sz depend on len size */
@@ -1287,6 +1288,7 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
 		/* populate checksum, VLAN, and protocol */
 		ice_process_skb_fields(rx_ring, rx_desc, skb);
 
+		xdp_buff_fixup_skb_offloading(xdp, skb);
 		ice_trace(clean_rx_irq_indicate, rx_ring, rx_desc, skb);
 
 		/* send completed skb up the stack */
diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index a65955eb23c0..367658acaab8 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -845,8 +845,10 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget)
 	xdp_prog = READ_ONCE(rx_ring->xdp_prog);
 	xdp_ring = rx_ring->xdp_ring;
 
-	if (ntc != rx_ring->first_desc)
+	if (ntc != rx_ring->first_desc) {
 		first = *ice_xdp_buf(rx_ring, rx_ring->first_desc);
+		xdp_init_buff_minimal(first);
+	}
 
 	while (likely(total_rx_packets < (unsigned int)budget)) {
 		union ice_32b_rx_flex_desc *rx_desc;
@@ -920,6 +922,7 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget)
 			break;
 		}
 
+		xdp = first;
 		first = NULL;
 		rx_ring->first_desc = ntc;
 
@@ -934,6 +937,7 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget)
 		vlan_tci = ice_get_vlan_tci(rx_desc);
 
 		ice_process_skb_fields(rx_ring, rx_desc, skb);
+		xdp_buff_fixup_skb_offloading(xdp, skb);
 		ice_receive_skb(rx_ring, skb, vlan_tci);
 	}
diff --git a/include/net/xdp_sock_drv.h b/include/net/xdp_sock_drv.h
index 0a5dca2b2b3f..02243dc064c2 100644
--- a/include/net/xdp_sock_drv.h
+++ b/include/net/xdp_sock_drv.h
@@ -181,7 +181,7 @@ static inline void xsk_buff_set_size(struct xdp_buff *xdp, u32 size)
 	xdp->data = xdp->data_hard_start + XDP_PACKET_HEADROOM;
 	xdp->data_meta = xdp->data;
 	xdp->data_end = xdp->data + size;
-	xdp->flags = 0;
+	xdp_init_buff_minimal(xdp);
 }
 
 static inline dma_addr_t xsk_buff_raw_get_dma(struct xsk_buff_pool *pool,

From patchwork Thu Jun 20 22:19:25 2024
X-Patchwork-Submitter: Yan Zhai
X-Patchwork-Id: 13706440
Date: Thu, 20 Jun 2024 15:19:25 -0700
From: Yan Zhai
To: netdev@vger.kernel.org
Cc: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
 Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
 John Fastabend, netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
 bpf@vger.kernel.org
Subject: [RFC net-next 6/9] veth: apply XDP offloading fixup when building skb

Add a common point to transfer offloading info from XDP context to skb.

Signed-off-by: Yan Zhai
---
 drivers/net/veth.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index 426e68a95067..c799362a839c 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -703,6 +703,8 @@ static void veth_xdp_rcv_bulk_skb(struct veth_rq *rq, void **frames,
 			stats->rx_drops++;
 			continue;
 		}
+
+		xdp_frame_fixup_skb_offloading(frames[i], skb);
 		napi_gro_receive(&rq->xdp_napi, skb);
 	}
 }
@@ -855,6 +857,8 @@ static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq,
 	metalen = xdp->data - xdp->data_meta;
 	if (metalen)
 		skb_metadata_set(skb, metalen);
+
+	xdp_buff_fixup_skb_offloading(xdp, skb);
 out:
 	return skb;
 drop:

From patchwork Thu Jun 20 22:19:28 2024
X-Patchwork-Submitter: Yan Zhai
X-Patchwork-Id: 13706441
Date: Thu, 20 Jun 2024 15:19:28 -0700
From: Jesper Dangaard Brouer
To: netdev@vger.kernel.org
Cc: Saeed Mahameed, Leon Romanovsky, Tariq Toukan, "David S. Miller",
 Eric Dumazet, Jakub Kicinski, Paolo Abeni, Alexei Starovoitov,
 Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, Yan Zhai,
 Dragos Tatulea, Alexander Lobakin, netdev@vger.kernel.org,
 linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org,
 bpf@vger.kernel.org
Subject: [RFC net-next 7/9] mlx5: move xdp_buff scope one level up
Message-ID: <5b7a761d6efa1be2ace4c12c1681f341a87d8d24.1718919473.git.yan@cloudflare.com>

Move the struct mlx5e_xdp_buff up from the individual skb_from_cqe
helpers into their callers. This is in preparation for the following
changes, which need the xdp_buff to stay accessible after the skb has
been built.
Signed-off-by: Jesper Dangaard Brouer
---
 drivers/net/ethernet/mellanox/mlx5/core/en.h  |   6 +-
 .../ethernet/mellanox/mlx5/core/en/xsk/rx.c   |   6 +-
 .../ethernet/mellanox/mlx5/core/en/xsk/rx.h   |   6 +-
 .../net/ethernet/mellanox/mlx5/core/en_rx.c   | 103 +++++++++---------
 4 files changed, 66 insertions(+), 55 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index 6a343a8f162f..3d26f976f692 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -580,14 +580,16 @@ struct mlx5e_mpw_info {
 #define MLX5E_MAX_RX_FRAGS 4
 
 struct mlx5e_rq;
+struct mlx5e_xdp_buff;
 typedef void (*mlx5e_fp_handle_rx_cqe)(struct mlx5e_rq*, struct mlx5_cqe64*);
 typedef struct sk_buff *
 (*mlx5e_fp_skb_from_cqe_mpwrq)(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
 			       struct mlx5_cqe64 *cqe, u16 cqe_bcnt,
-			       u32 head_offset, u32 page_idx);
+			       u32 head_offset, u32 page_idx,
+			       struct mlx5e_xdp_buff *mxbuf);
 typedef struct sk_buff *
 (*mlx5e_fp_skb_from_cqe)(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
-			 struct mlx5_cqe64 *cqe, u32 cqe_bcnt);
+			 struct mlx5_cqe64 *cqe, u32 cqe_bcnt, struct mlx5e_xdp_buff *mxbuf);
 typedef bool (*mlx5e_fp_post_rx_wqes)(struct mlx5e_rq *rq);
 typedef void (*mlx5e_fp_dealloc_wqe)(struct mlx5e_rq*, u16);
 typedef void (*mlx5e_fp_shampo_dealloc_hd)(struct mlx5e_rq*, u16, u16, bool);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
index 1b7132fa70de..4dacaa61e106 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
@@ -249,7 +249,8 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
 						    struct mlx5_cqe64 *cqe,
 						    u16 cqe_bcnt,
 						    u32 head_offset,
-						    u32 page_idx)
+						    u32 page_idx,
+						    struct mlx5e_xdp_buff *mxbuf_)
 {
 	struct mlx5e_xdp_buff *mxbuf = xsk_buff_to_mxbuf(wi->alloc_units.xsk_buffs[page_idx]);
 	struct bpf_prog *prog;
@@ -304,7 +305,8 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
 struct sk_buff *mlx5e_xsk_skb_from_cqe_linear(struct mlx5e_rq *rq,
 					      struct mlx5e_wqe_frag_info *wi,
 					      struct mlx5_cqe64 *cqe,
-					      u32 cqe_bcnt)
+					      u32 cqe_bcnt,
+					      struct mlx5e_xdp_buff *mxbuf_)
 {
 	struct mlx5e_xdp_buff *mxbuf = xsk_buff_to_mxbuf(*wi->xskp);
 	struct bpf_prog *prog;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.h
index cefc0ef6105d..0890c975042c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.h
@@ -16,10 +16,12 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
 						    struct mlx5_cqe64 *cqe,
 						    u16 cqe_bcnt,
 						    u32 head_offset,
-						    u32 page_idx);
+						    u32 page_idx,
+						    struct mlx5e_xdp_buff *mxbuf_);
 struct sk_buff *mlx5e_xsk_skb_from_cqe_linear(struct mlx5e_rq *rq,
 					      struct mlx5e_wqe_frag_info *wi,
 					      struct mlx5_cqe64 *cqe,
-					      u32 cqe_bcnt);
+					      u32 cqe_bcnt,
+					      struct mlx5e_xdp_buff *mxbuf_);
 
 #endif /* __MLX5_EN_XSK_RX_H__ */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index 225da8d691fc..1a592a1ab988 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -63,11 +63,11 @@
 static struct sk_buff *
 mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
 				struct mlx5_cqe64 *cqe, u16 cqe_bcnt, u32 head_offset,
-				u32 page_idx);
+				u32 page_idx, struct mlx5e_xdp_buff *mxbuf);
 static struct sk_buff *
 mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
 				   struct mlx5_cqe64 *cqe, u16 cqe_bcnt, u32 head_offset,
-				   u32 page_idx);
+				   u32 page_idx, struct mlx5e_xdp_buff *mxbuf);
 static void mlx5e_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe);
 static void mlx5e_handle_rx_cqe_mpwrq(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe);
 static void mlx5e_handle_rx_cqe_mpwrq_shampo(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe);
@@ -1658,7 +1658,8 @@ static void mlx5e_fill_mxbuf(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe,
 
 static struct sk_buff *
 mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
-			  struct mlx5_cqe64 *cqe, u32 cqe_bcnt)
+			  struct mlx5_cqe64 *cqe, u32 cqe_bcnt,
+			  struct mlx5e_xdp_buff *mxbuf)
 {
 	struct mlx5e_frag_page *frag_page = wi->frag_page;
 	u16 rx_headroom = rq->buff.headroom;
@@ -1680,17 +1681,15 @@ mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
 
 	prog = rcu_dereference(rq->xdp_prog);
 	if (prog) {
-		struct mlx5e_xdp_buff mxbuf;
-
 		net_prefetchw(va); /* xdp_frame data area */
 		mlx5e_fill_mxbuf(rq, cqe, va, rx_headroom, rq->buff.frame0_sz,
-				 cqe_bcnt, &mxbuf);
-		if (mlx5e_xdp_handle(rq, prog, &mxbuf))
+				 cqe_bcnt, mxbuf);
+		if (mlx5e_xdp_handle(rq, prog, mxbuf))
 			return NULL; /* page/packet was consumed by XDP */
 
-		rx_headroom = mxbuf.xdp.data - mxbuf.xdp.data_hard_start;
-		metasize = mxbuf.xdp.data - mxbuf.xdp.data_meta;
-		cqe_bcnt = mxbuf.xdp.data_end - mxbuf.xdp.data;
+		rx_headroom = mxbuf->xdp.data - mxbuf->xdp.data_hard_start;
+		metasize = mxbuf->xdp.data - mxbuf->xdp.data_meta;
+		cqe_bcnt = mxbuf->xdp.data_end - mxbuf->xdp.data;
 	}
 	frag_size = MLX5_SKB_FRAG_SZ(rx_headroom + cqe_bcnt);
 	skb = mlx5e_build_linear_skb(rq, va, frag_size, rx_headroom, cqe_bcnt, metasize);
@@ -1706,14 +1705,14 @@ mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
 
 static struct sk_buff *
 mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
-			     struct mlx5_cqe64 *cqe, u32 cqe_bcnt)
+			     struct mlx5_cqe64 *cqe, u32 cqe_bcnt,
+			     struct mlx5e_xdp_buff *mxbuf)
 {
 	struct mlx5e_rq_frag_info *frag_info = &rq->wqe.info.arr[0];
 	struct mlx5e_wqe_frag_info *head_wi = wi;
 	u16 rx_headroom = rq->buff.headroom;
 	struct mlx5e_frag_page *frag_page;
 	struct skb_shared_info *sinfo;
-	struct mlx5e_xdp_buff mxbuf;
 	u32 frag_consumed_bytes;
 	struct bpf_prog *prog;
 	struct sk_buff *skb;
@@ -1733,8 +1732,8 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi
 	net_prefetch(va + rx_headroom);
 
 	mlx5e_fill_mxbuf(rq, cqe, va, rx_headroom, rq->buff.frame0_sz,
-			 frag_consumed_bytes, &mxbuf);
-	sinfo = xdp_get_shared_info_from_buff(&mxbuf.xdp);
+			 frag_consumed_bytes, mxbuf);
+	sinfo = xdp_get_shared_info_from_buff(&mxbuf->xdp);
 	truesize = 0;
 
 	cqe_bcnt -= frag_consumed_bytes;
@@ -1746,7 +1745,7 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi
 
 		frag_consumed_bytes = min_t(u32, frag_info->frag_size, cqe_bcnt);
 
-		mlx5e_add_skb_shared_info_frag(rq, sinfo, &mxbuf.xdp, frag_page,
+		mlx5e_add_skb_shared_info_frag(rq, sinfo, &mxbuf->xdp, frag_page,
 					       wi->offset, frag_consumed_bytes);
 		truesize += frag_info->frag_stride;
@@ -1756,7 +1755,7 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi
 	}
 
 	prog = rcu_dereference(rq->xdp_prog);
-	if (prog && mlx5e_xdp_handle(rq, prog, &mxbuf)) {
+	if (prog && mlx5e_xdp_handle(rq, prog, mxbuf)) {
 		if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) {
 			struct mlx5e_wqe_frag_info *pwi;
@@ -1766,21 +1765,21 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi
 		return NULL; /* page/packet was consumed by XDP */
 	}
 
-	skb = mlx5e_build_linear_skb(rq, mxbuf.xdp.data_hard_start, rq->buff.frame0_sz,
-				     mxbuf.xdp.data - mxbuf.xdp.data_hard_start,
-				     mxbuf.xdp.data_end - mxbuf.xdp.data,
-				     mxbuf.xdp.data - mxbuf.xdp.data_meta);
+	skb = mlx5e_build_linear_skb(rq, mxbuf->xdp.data_hard_start, rq->buff.frame0_sz,
+				     mxbuf->xdp.data - mxbuf->xdp.data_hard_start,
+				     mxbuf->xdp.data_end - mxbuf->xdp.data,
+				     mxbuf->xdp.data - mxbuf->xdp.data_meta);
 	if (unlikely(!skb))
 		return NULL;
 
 	skb_mark_for_recycle(skb);
 	head_wi->frag_page->frags++;
 
-	if (xdp_buff_has_frags(&mxbuf.xdp)) {
+	if (xdp_buff_has_frags(&mxbuf->xdp)) {
 		/* sinfo->nr_frags is reset by build_skb, calculate again. */
 		xdp_update_skb_shared_info(skb, wi - head_wi - 1,
 					   sinfo->xdp_frags_size, truesize,
-					   xdp_buff_is_frag_pfmemalloc(&mxbuf.xdp));
+					   xdp_buff_is_frag_pfmemalloc(&mxbuf->xdp));
 
 		for (struct mlx5e_wqe_frag_info *pwi = head_wi + 1; pwi < wi; pwi++)
 			pwi->frag_page->frags++;
@@ -1811,6 +1810,7 @@ static void mlx5e_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
 {
 	struct mlx5_wq_cyc *wq = &rq->wqe.wq;
 	struct mlx5e_wqe_frag_info *wi;
+	struct mlx5e_xdp_buff mxbuf;
 	struct sk_buff *skb;
 	u32 cqe_bcnt;
 	u16 ci;
@@ -1828,7 +1828,7 @@ static void mlx5e_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
 			      mlx5e_skb_from_cqe_linear,
 			      mlx5e_skb_from_cqe_nonlinear,
 			      mlx5e_xsk_skb_from_cqe_linear,
-			      rq, wi, cqe, cqe_bcnt);
+			      rq, wi, cqe, cqe_bcnt, &mxbuf);
 	if (!skb) {
 		/* probably for XDP */
 		if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags))
@@ -1859,6 +1859,7 @@ static void mlx5e_handle_rx_cqe_rep(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
 	struct mlx5_eswitch_rep *rep = rpriv->rep;
 	struct mlx5_wq_cyc *wq = &rq->wqe.wq;
 	struct mlx5e_wqe_frag_info *wi;
+	struct mlx5e_xdp_buff mxbuf;
 	struct sk_buff *skb;
 	u32 cqe_bcnt;
 	u16 ci;
@@ -1875,7 +1876,7 @@ static void mlx5e_handle_rx_cqe_rep(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
 	skb = INDIRECT_CALL_2(rq->wqe.skb_from_cqe,
 			      mlx5e_skb_from_cqe_linear,
 			      mlx5e_skb_from_cqe_nonlinear,
-			      rq, wi, cqe, cqe_bcnt);
+			      rq, wi, cqe, cqe_bcnt, &mxbuf);
 	if (!skb) {
 		/* probably for XDP */
 		if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags))
@@ -1903,6 +1904,7 @@ static void mlx5e_handle_rx_cqe_mpwrq_rep(struct mlx5e_rq *rq, struct mlx5_cqe64
 	u32 wqe_offset = stride_ix << rq->mpwqe.log_stride_sz;
 	u32 head_offset = wqe_offset & ((1 << rq->mpwqe.page_shift) - 1);
 	u32 page_idx = wqe_offset >> rq->mpwqe.page_shift;
+	struct mlx5e_xdp_buff mxbuf;
 	struct mlx5e_rx_wqe_ll *wqe;
 	struct mlx5_wq_ll *wq;
 	struct sk_buff *skb;
@@ -1928,7 +1930,7 @@ static void mlx5e_handle_rx_cqe_mpwrq_rep(struct mlx5e_rq *rq, struct mlx5_cqe64
 	skb = INDIRECT_CALL_2(rq->mpwqe.skb_from_cqe_mpwrq,
 			      mlx5e_skb_from_cqe_mpwrq_linear,
 			      mlx5e_skb_from_cqe_mpwrq_nonlinear,
-			      rq, wi, cqe, cqe_bcnt, head_offset, page_idx);
+			      rq, wi, cqe, cqe_bcnt, head_offset, page_idx, &mxbuf);
 	if (!skb)
 		goto mpwrq_cqe_out;
@@ -1975,7 +1977,7 @@ mlx5e_shampo_fill_skb_data(struct sk_buff *skb, struct mlx5e_rq *rq,
 static struct sk_buff *
 mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
 				   struct mlx5_cqe64 *cqe, u16 cqe_bcnt, u32 head_offset,
-				   u32 page_idx)
+				   u32 page_idx, struct mlx5e_xdp_buff *mxbuf)
 {
 	struct mlx5e_frag_page *frag_page = &wi->alloc_units.frag_pages[page_idx];
 	u16 headlen = min_t(u16, MLX5E_RX_MAX_HEAD, cqe_bcnt);
@@ -1983,7 +1985,6 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
 	u32 frag_offset    = head_offset;
 	u32 byte_cnt       = cqe_bcnt;
 	struct skb_shared_info *sinfo;
-	struct mlx5e_xdp_buff mxbuf;
 	unsigned int truesize = 0;
 	struct bpf_prog *prog;
 	struct sk_buff *skb;
@@ -2029,9 +2030,9 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
 		}
 	}
 
-	mlx5e_fill_mxbuf(rq, cqe, va, linear_hr, linear_frame_sz, linear_data_len, &mxbuf);
+	mlx5e_fill_mxbuf(rq, cqe, va, linear_hr, linear_frame_sz, linear_data_len, mxbuf);
 
-	sinfo = xdp_get_shared_info_from_buff(&mxbuf.xdp);
+	sinfo = xdp_get_shared_info_from_buff(&mxbuf->xdp);
 
 	while (byte_cnt) {
 		/* Non-linear mode, hence non-XSK, which always uses PAGE_SIZE. */
@@ -2042,7 +2043,7 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
 		else
 			truesize += ALIGN(pg_consumed_bytes, BIT(rq->mpwqe.log_stride_sz));
 
-		mlx5e_add_skb_shared_info_frag(rq, sinfo, &mxbuf.xdp, frag_page, frag_offset,
+		mlx5e_add_skb_shared_info_frag(rq, sinfo, &mxbuf->xdp, frag_page, frag_offset,
 					       pg_consumed_bytes);
 		byte_cnt -= pg_consumed_bytes;
 		frag_offset = 0;
@@ -2050,7 +2051,7 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
 	}
 
 	if (prog) {
-		if (mlx5e_xdp_handle(rq, prog, &mxbuf)) {
+		if (mlx5e_xdp_handle(rq, prog, mxbuf)) {
 			if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT,
 						 rq->flags)) {
 				struct mlx5e_frag_page *pfp;
@@ -2063,10 +2064,10 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
 			return NULL; /* page/packet was consumed by XDP */
 		}
 
-		skb = mlx5e_build_linear_skb(rq, mxbuf.xdp.data_hard_start,
+		skb = mlx5e_build_linear_skb(rq, mxbuf->xdp.data_hard_start,
 					     linear_frame_sz,
-					     mxbuf.xdp.data - mxbuf.xdp.data_hard_start, 0,
-					     mxbuf.xdp.data - mxbuf.xdp.data_meta);
+					     mxbuf->xdp.data - mxbuf->xdp.data_hard_start, 0,
+					     mxbuf->xdp.data - mxbuf->xdp.data_meta);
 		if (unlikely(!skb)) {
 			mlx5e_page_release_fragmented(rq, &wi->linear_page);
 			return NULL;
@@ -2076,13 +2077,13 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
 		wi->linear_page.frags++;
 		mlx5e_page_release_fragmented(rq, &wi->linear_page);
 
-		if (xdp_buff_has_frags(&mxbuf.xdp)) {
+		if (xdp_buff_has_frags(&mxbuf->xdp)) {
 			struct mlx5e_frag_page *pagep;
 
 			/* sinfo->nr_frags is reset by build_skb, calculate again. */
 			xdp_update_skb_shared_info(skb, frag_page - head_page,
 						   sinfo->xdp_frags_size, truesize,
-						   xdp_buff_is_frag_pfmemalloc(&mxbuf.xdp));
+						   xdp_buff_is_frag_pfmemalloc(&mxbuf->xdp));
 
 			pagep = head_page;
 			do
@@ -2093,12 +2094,12 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
 	} else {
 		dma_addr_t addr;
 
-		if (xdp_buff_has_frags(&mxbuf.xdp)) {
+		if (xdp_buff_has_frags(&mxbuf->xdp)) {
 			struct mlx5e_frag_page *pagep;
 
 			xdp_update_skb_shared_info(skb, sinfo->nr_frags,
 						   sinfo->xdp_frags_size, truesize,
-						   xdp_buff_is_frag_pfmemalloc(&mxbuf.xdp));
+						   xdp_buff_is_frag_pfmemalloc(&mxbuf->xdp));
 
 			pagep = frag_page - sinfo->nr_frags;
 			do
@@ -2120,7 +2121,7 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
 static struct sk_buff *
 mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
 				struct mlx5_cqe64 *cqe, u16 cqe_bcnt, u32 head_offset,
-				u32 page_idx)
+				u32 page_idx, struct mlx5e_xdp_buff *mxbuf)
 {
 	struct mlx5e_frag_page *frag_page = &wi->alloc_units.frag_pages[page_idx];
 	u16 rx_headroom = rq->buff.headroom;
@@ -2148,20 +2149,19 @@ mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
 
 	prog = rcu_dereference(rq->xdp_prog);
 	if (prog) {
-		struct mlx5e_xdp_buff mxbuf;
-
 		net_prefetchw(va); /* xdp_frame data area */
 		mlx5e_fill_mxbuf(rq, cqe, va, rx_headroom, rq->buff.frame0_sz,
-				 cqe_bcnt, &mxbuf);
-		if (mlx5e_xdp_handle(rq, prog, &mxbuf)) {
+				 cqe_bcnt, mxbuf);
+		if (mlx5e_xdp_handle(rq, prog, mxbuf)) {
 			if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags))
 				frag_page->frags++;
 			return NULL; /* page/packet was consumed by XDP */
 		}
 
-		rx_headroom = mxbuf.xdp.data - mxbuf.xdp.data_hard_start;
-		metasize = mxbuf.xdp.data - mxbuf.xdp.data_meta;
-		cqe_bcnt = mxbuf.xdp.data_end - mxbuf.xdp.data;
+		rx_headroom = mxbuf->xdp.data - mxbuf->xdp.data_hard_start;
+		metasize = mxbuf->xdp.data - mxbuf->xdp.data_meta;
+		cqe_bcnt = mxbuf->xdp.data_end - mxbuf->xdp.data;
 	}
 	frag_size = MLX5_SKB_FRAG_SZ(rx_headroom + cqe_bcnt);
 	skb = mlx5e_build_linear_skb(rq, va, frag_size, rx_headroom, cqe_bcnt, metasize);
@@ -2283,12 +2283,14 @@ static void mlx5e_handle_rx_cqe_mpwrq_shampo(struct mlx5e_rq *rq, struct mlx5_cq
 	bool flush		= cqe->shampo.flush;
 	bool match		= cqe->shampo.match;
 	struct mlx5e_rq_stats *stats = rq->stats;
+	struct mlx5e_xdp_buff mxbuf;
 	struct mlx5e_rx_wqe_ll *wqe;
 	struct mlx5e_mpw_info *wi;
 	struct mlx5_wq_ll *wq;
 
 	wi = mlx5e_get_mpw_info(rq, wqe_id);
 	wi->consumed_strides += cstrides;
+	mxbuf.xdp.flags = 0;
 
 	if (unlikely(MLX5E_RX_ERR_CQE(cqe))) {
 		mlx5e_handle_rx_err_cqe(rq, cqe);
@@ -2311,7 +2313,7 @@ static void mlx5e_handle_rx_cqe_mpwrq_shampo(struct mlx5e_rq *rq, struct mlx5_cq
 			*skb = mlx5e_skb_from_cqe_shampo(rq, wi, cqe, header_index);
 		else
 			*skb = mlx5e_skb_from_cqe_mpwrq_nonlinear(rq, wi, cqe, cqe_bcnt,
-								  data_offset, page_idx);
+								  data_offset, page_idx, &mxbuf);
 		if (unlikely(!*skb))
 			goto free_hd_entry;
@@ -2369,6 +2371,7 @@ static void mlx5e_handle_rx_cqe_mpwrq(struct mlx5e_rq *rq, struct mlx5_cqe64 *cq
 	u32 wqe_offset = stride_ix << rq->mpwqe.log_stride_sz;
 	u32 head_offset = wqe_offset & ((1 << rq->mpwqe.page_shift) - 1);
 	u32 page_idx = wqe_offset >> rq->mpwqe.page_shift;
+	struct mlx5e_xdp_buff mxbuf;
 	struct mlx5e_rx_wqe_ll *wqe;
 	struct mlx5_wq_ll *wq;
 	struct sk_buff *skb;
@@ -2396,7 +2399,7 @@ static void mlx5e_handle_rx_cqe_mpwrq(struct mlx5e_rq *rq, struct mlx5_cqe64 *cq
 			      mlx5e_skb_from_cqe_mpwrq_nonlinear,
 			      mlx5e_xsk_skb_from_cqe_mpwrq_linear,
 			      rq, wi, cqe, cqe_bcnt, head_offset,
-			      page_idx);
+			      page_idx,
 	if (!skb)
 		goto mpwrq_cqe_out;
@@ -2624,6 +2627,7 @@ static void mlx5i_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
 {
 	struct mlx5_wq_cyc *wq = &rq->wqe.wq;
 	struct mlx5e_wqe_frag_info *wi;
+	struct mlx5e_xdp_buff mxbuf;
 	struct sk_buff *skb;
 	u32 cqe_bcnt;
 	u16 ci;
@@ -2640,7 +2644,7 @@ static void mlx5i_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
 	skb = INDIRECT_CALL_2(rq->wqe.skb_from_cqe,
 			      mlx5e_skb_from_cqe_linear,
 			      mlx5e_skb_from_cqe_nonlinear,
-			      rq, wi, cqe, cqe_bcnt);
+			      rq, wi, cqe, cqe_bcnt, &mxbuf);
 	if (!skb)
 		goto wq_cyc_pop;
@@ -2714,6 +2718,7 @@ static void mlx5e_trap_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe
 {
 	struct mlx5_wq_cyc *wq = &rq->wqe.wq;
 	struct mlx5e_wqe_frag_info *wi;
+	struct mlx5e_xdp_buff mxbuf;
 	struct sk_buff *skb;
 	u32 cqe_bcnt;
 	u16 trap_id;
@@ -2729,7 +2734,7 @@ static void mlx5e_trap_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe
 		goto wq_cyc_pop;
 	}
 
-	skb = mlx5e_skb_from_cqe_nonlinear(rq, wi, cqe, cqe_bcnt);
+	skb = mlx5e_skb_from_cqe_nonlinear(rq, wi, cqe, cqe_bcnt, &mxbuf);
 	if (!skb)
 		goto wq_cyc_pop;
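The net effect of the hunks above is that each RX CQE handler now owns the mlx5e_xdp_buff for the packet, instead of the skb-build helpers keeping it on their own stack. A condensed sketch of the resulting calling pattern (illustration only, not literal driver code; the wi/cqe_bcnt lookup is elided):

	static void handle_rx_cqe_sketch(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
	{
		struct mlx5e_xdp_buff mxbuf;	/* now lives in the handler frame */
		struct mlx5e_wqe_frag_info *wi;
		struct sk_buff *skb;
		u32 cqe_bcnt;

		/* ... wi/cqe_bcnt setup as in the real handlers ... */

		skb = rq->wqe.skb_from_cqe(rq, wi, cqe, cqe_bcnt, &mxbuf);
		if (!skb)
			return;

		/* any flags the XDP program set in mxbuf.xdp are still
		 * visible here; the next patch in the series consumes them
		 */
	}

This matters because any per-packet state the XDP program records in mxbuf.xdp.flags must outlive the helper call for the handler to act on it after the skb exists.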
From patchwork Thu Jun 20 22:19:31 2024
X-Patchwork-Submitter: Yan Zhai
X-Patchwork-Id: 13706442
X-Patchwork-Delegate: kuba@kernel.org
X-Patchwork-State: RFC
Date: Thu, 20 Jun 2024 15:19:31 -0700
From: Yan Zhai
To: netdev@vger.kernel.org
Cc: Saeed Mahameed, Leon Romanovsky, Tariq Toukan, "David S. Miller",
 Eric Dumazet, Jakub Kicinski, Paolo Abeni, Alexei Starovoitov,
 Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
 Alexander Lobakin, Yan Zhai, Dragos Tatulea, netdev@vger.kernel.org,
 linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org,
 bpf@vger.kernel.org
Subject: [RFC net-next 8/9] mlx5: apply XDP offloading fixup when building skb
Message-ID: <17595a278ee72964b83c0bd0b502152aa025f600.1718919473.git.yan@cloudflare.com>

Add a common point to transfer offloading info from the XDP context to
the skb: every mlx5 RX handler now calls xdp_buff_fixup_skb_offloading()
right after the skb is built, so flags that an XDP program set on the
packet (such as the GRO-disable bit) are carried over to the resulting
skb.
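The skb-build helpers call xdp_init_buff_minimal() before the XDP program runs, so a stale flag can never leak in from a previous packet. For the XSK paths the real xdp_buff lives in the UMEM buffer rather than on the handler's stack, which is why those paths copy mxbuf->xdp.flags back into the caller-provided mxbuf before returning.

For reference, a minimal sketch of what the fixup helper introduced earlier in this series could look like. This is an assumption, not the series' actual implementation: the flag name XDP_FLAGS_GRO_DISABLED is hypothetical here; only the skb gro_disabled bit and the bpf_xdp_disable_gro() kfunc name come from this series.

	/* sketch only: transfer offloading hints from the xdp_buff to the skb */
	static inline void xdp_buff_fixup_skb_offloading(struct xdp_buff *xdp,
							 struct sk_buff *skb)
	{
		/* XDP_FLAGS_GRO_DISABLED is an assumed flag name, set when a
		 * program calls the bpf_xdp_disable_gro() kfunc
		 */
		if (unlikely(xdp->flags & XDP_FLAGS_GRO_DISABLED))
			skb->gro_disabled = 1;
	}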
Signed-off-by: Yan Zhai
Signed-off-by: Jesper Dangaard Brouer
---
 .../net/ethernet/mellanox/mlx5/core/en/xsk/rx.c |  8 ++++++--
 drivers/net/ethernet/mellanox/mlx5/core/en_rx.c | 14 ++++++++++++++
 2 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
index 4dacaa61e106..9bf49ff2e0dd 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
@@ -250,7 +250,7 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
 						    u16 cqe_bcnt,
 						    u32 head_offset,
 						    u32 page_idx,
-						    struct mlx5e_xdp_buff *mxbuf_)
+						    struct mlx5e_xdp_buff *mxbuf_caller)
 {
 	struct mlx5e_xdp_buff *mxbuf = xsk_buff_to_mxbuf(wi->alloc_units.xsk_buffs[page_idx]);
 	struct bpf_prog *prog;
@@ -270,6 +270,7 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
 
 	/* mxbuf->rq is set on allocation, but cqe is per-packet so set it here */
 	mxbuf->cqe = cqe;
+	xdp_init_buff_minimal(&mxbuf->xdp);
 	xsk_buff_set_size(&mxbuf->xdp, cqe_bcnt);
 	xsk_buff_dma_sync_for_cpu(&mxbuf->xdp);
 	net_prefetch(mxbuf->xdp.data);
@@ -295,6 +296,7 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
 		__set_bit(page_idx, wi->skip_release_bitmap); /* non-atomic */
 		return NULL; /* page/packet was consumed by XDP */
 	}
+	mxbuf_caller->xdp.flags = mxbuf->xdp.flags;
 
 	/* XDP_PASS: copy the data from the UMEM to a new SKB and reuse the
 	 * frame. On SKB allocation failure, NULL is returned.
@@ -306,7 +308,7 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_linear(struct mlx5e_rq *rq,
 					      struct mlx5e_wqe_frag_info *wi,
 					      struct mlx5_cqe64 *cqe,
 					      u32 cqe_bcnt,
-					      struct mlx5e_xdp_buff *mxbuf_)
+					      struct mlx5e_xdp_buff *mxbuf_caller)
 {
 	struct mlx5e_xdp_buff *mxbuf = xsk_buff_to_mxbuf(*wi->xskp);
 	struct bpf_prog *prog;
@@ -320,6 +322,7 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_linear(struct mlx5e_rq *rq,
 
 	/* mxbuf->rq is set on allocation, but cqe is per-packet so set it here */
 	mxbuf->cqe = cqe;
+	xdp_init_buff_minimal(&mxbuf->xdp);
 	xsk_buff_set_size(&mxbuf->xdp, cqe_bcnt);
 	xsk_buff_dma_sync_for_cpu(&mxbuf->xdp);
 	net_prefetch(mxbuf->xdp.data);
@@ -330,6 +333,7 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_linear(struct mlx5e_rq *rq,
 		wi->flags |= BIT(MLX5E_WQE_FRAG_SKIP_RELEASE);
 		return NULL; /* page/packet was consumed by XDP */
 	}
+	mxbuf_caller->xdp.flags = mxbuf->xdp.flags;
 
 	/* XDP_PASS: copy the data from the UMEM to a new SKB. The frame reuse
 	 * will be handled by mlx5e_free_rx_wqe.
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index 1a592a1ab988..0a47889e281e 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -1670,6 +1670,8 @@ mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
 	dma_addr_t addr;
 	u32 frag_size;
 
+	xdp_init_buff_minimal(&mxbuf->xdp);
+
 	va = page_address(frag_page->page) + wi->offset;
 	data = va + rx_headroom;
 	frag_size = MLX5_SKB_FRAG_SZ(rx_headroom + cqe_bcnt);
@@ -1721,6 +1723,7 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi
 	void *va;
 
 	frag_page = wi->frag_page;
+	xdp_init_buff_minimal(&mxbuf->xdp);
 
 	va = page_address(frag_page->page) + wi->offset;
 	frag_consumed_bytes = min_t(u32, frag_info->frag_size, cqe_bcnt);
@@ -1837,6 +1840,7 @@ static void mlx5e_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
 	}
 
 	mlx5e_complete_rx_cqe(rq, cqe, cqe_bcnt, skb);
+	xdp_buff_fixup_skb_offloading(&mxbuf.xdp, skb);
 
 	if (mlx5e_cqe_regb_chain(cqe))
 		if (!mlx5e_tc_update_skb_nic(cqe, skb)) {
@@ -1885,6 +1889,7 @@ static void mlx5e_handle_rx_cqe_rep(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
 	}
 
 	mlx5e_complete_rx_cqe(rq, cqe, cqe_bcnt, skb);
+	xdp_buff_fixup_skb_offloading(&mxbuf.xdp, skb);
 
 	if (rep->vlan && skb_vlan_tag_present(skb))
 		skb_vlan_pop(skb);
@@ -1935,6 +1940,7 @@ static void mlx5e_handle_rx_cqe_mpwrq_rep(struct mlx5e_rq *rq, struct mlx5_cqe64
 		goto mpwrq_cqe_out;
 
 	mlx5e_complete_rx_cqe(rq, cqe, cqe_bcnt, skb);
+	xdp_buff_fixup_skb_offloading(&mxbuf.xdp, skb);
 
 	mlx5e_rep_tc_receive(cqe, rq, skb);
@@ -2138,6 +2144,8 @@ mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
 		return NULL;
 	}
 
+	xdp_init_buff_minimal(&mxbuf->xdp);
+
 	va = page_address(frag_page->page) + head_offset;
 	data = va + rx_headroom;
 	frag_size = MLX5_SKB_FRAG_SZ(rx_headroom + cqe_bcnt);
@@ -2345,6 +2353,8 @@ static void mlx5e_handle_rx_cqe_mpwrq_shampo(struct mlx5e_rq *rq, struct mlx5_cq
 	}
 
 	mlx5e_shampo_complete_rx_cqe(rq, cqe, cqe_bcnt, *skb);
+	xdp_buff_fixup_skb_offloading(&mxbuf.xdp, *skb);
+
 	if (flush && rq->hw_gro_data->skb)
 		mlx5e_shampo_flush_skb(rq, cqe, match);
 free_hd_entry:
@@ -2404,6 +2414,7 @@ static void mlx5e_handle_rx_cqe_mpwrq(struct mlx5e_rq *rq, struct mlx5_cqe64 *cq
 		goto mpwrq_cqe_out;
 
 	mlx5e_complete_rx_cqe(rq, cqe, cqe_bcnt, skb);
+	xdp_buff_fixup_skb_offloading(&mxbuf.xdp, skb);
 
 	if (mlx5e_cqe_regb_chain(cqe))
 		if (!mlx5e_tc_update_skb_nic(cqe, skb)) {
@@ -2649,6 +2660,8 @@ static void mlx5i_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
 		goto wq_cyc_pop;
 
 	mlx5i_complete_rx_cqe(rq, cqe, cqe_bcnt, skb);
+	xdp_buff_fixup_skb_offloading(&mxbuf.xdp, skb);
+
 	if (unlikely(!skb->dev)) {
 		dev_kfree_skb_any(skb);
 		goto wq_cyc_pop;
@@ -2740,6 +2753,7 @@ static void mlx5e_trap_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe
 	mlx5e_complete_rx_cqe(rq, cqe, cqe_bcnt, skb);
 	skb_push(skb, ETH_HLEN);
+	xdp_buff_fixup_skb_offloading(&mxbuf.xdp, skb);
 
 	mlx5_devlink_trap_report(rq->mdev, trap_id, skb,
 				 rq->netdev->devlink_port);

From patchwork Thu Jun 20 22:19:34 2024
X-Patchwork-Submitter: Yan Zhai
X-Patchwork-Id: 13706443
X-Patchwork-Delegate: kuba@kernel.org
X-Patchwork-State: RFC
Date: Thu, 20 Jun 2024 15:19:34 -0700
From: Yan Zhai
To: netdev@vger.kernel.org
Cc: Andrii Nakryiko, Eduard Zingerman, Mykola Lysenko,
 Alexei Starovoitov, Daniel Borkmann, Martin KaFai Lau, Song Liu,
 Yonghong Song, John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo,
 Jiri Olsa, Shuah Khan, "David S. Miller", Jakub Kicinski,
 Jesper Dangaard Brouer, linux-kernel@vger.kernel.org,
 bpf@vger.kernel.org, linux-kselftest@vger.kernel.org,
 netdev@vger.kernel.org
Subject: [RFC net-next 9/9] bpf: selftests: test disabling GRO by XDP
Message-ID: <04f25110b5f4c240b56dd9d449b6496096c74ab5.1718919473.git.yan@cloudflare.com>

Test that when XDP disables GRO for a packet, the effect is actually
reflected on the receiving side: the skb must arrive with the
gro_disabled bit set and with no GRO aggregation (gso_size stays 0).

Signed-off-by: Yan Zhai
---
 tools/testing/selftests/bpf/config            |   1 +
 .../selftests/bpf/prog_tests/xdp_offloading.c | 122 ++++++++++++++++++
 .../selftests/bpf/progs/xdp_offloading.c      |  50 +++++++
 3 files changed, 173 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/xdp_offloading.c
 create mode 100644 tools/testing/selftests/bpf/progs/xdp_offloading.c

diff --git a/tools/testing/selftests/bpf/config b/tools/testing/selftests/bpf/config
index 2fb16da78dce..e789392f44bd 100644
--- a/tools/testing/selftests/bpf/config
+++ b/tools/testing/selftests/bpf/config
@@ -96,3 +96,4 @@ CONFIG_XDP_SOCKETS=y
 CONFIG_XFRM_INTERFACE=y
 CONFIG_TCP_CONG_DCTCP=y
 CONFIG_TCP_CONG_BBR=y
+CONFIG_SKB_GRO_CONTROL=y
diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_offloading.c b/tools/testing/selftests/bpf/prog_tests/xdp_offloading.c
new file mode 100644
index 000000000000..462296d9689a
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/xdp_offloading.c
@@ -0,0 +1,122 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <test_progs.h>
+#include <network_helpers.h>
+#include "xdp_offloading.skel.h"
+
+/* run tcp server in ns1, client in ns2, and transmit 10MB data */
+static void run_tcp_test(const char *server_ip)
+{
+	struct nstoken *ns1 = NULL, *ns2 = NULL;
+	struct sockaddr_storage server_addr;
+	int total_bytes = 10 * 1024 * 1024;
+	int server_fd = -1, client_fd = -1;
+	int server_port = 5555;
+	socklen_t addrlen = sizeof(server_addr);
+
+	if (!ASSERT_OK(make_sockaddr(AF_INET, server_ip, server_port,
+				     &server_addr, &addrlen), "make_addr"))
+		goto err;
+
+	ns1 = open_netns("ns1");
+	if (!ASSERT_OK_PTR(ns1, "setns ns1"))
+		goto err;
+
+	server_fd = start_server_str(AF_INET, SOCK_STREAM, "0.0.0.0",
+				     server_port, NULL);
+	if (!ASSERT_NEQ(server_fd, -1, "start_server_str"))
+		goto err;
+
+	ns2 = open_netns("ns2");
+	if (!ASSERT_OK_PTR(ns2, "setns ns2"))
+		goto err;
+
+	client_fd = connect_to_addr(SOCK_STREAM, &server_addr, addrlen, NULL);
+	if (!ASSERT_NEQ(client_fd, -1, "connect_to_addr"))
+		goto err;
+
+	/* send 10MB data */
+	if (!ASSERT_OK(send_recv_data(server_fd, client_fd, total_bytes),
+		       "send_recv_data"))
+		goto err;
+
+err:
+	if (server_fd != -1)
+		close(server_fd);
+	if (client_fd != -1)
+		close(client_fd);
+	if (ns1)
+		close_netns(ns1);
+	if (ns2)
+		close_netns(ns2);
+}
+
+/* This test involves two netns:
+ *
+ *        NS1      |      NS2
+ *                 |
+ * ----> veth1 --> veth_offloading(xdp) --> (tp:netif_receive_skb)
+ * |               |        |
+ * |               |        v
+ * tcp-server      |    tcp-client
+ *
+ * A TCP server in NS1 sends data through veth1, and the XDP program on
+ * "veth_offloading" is what we test against. This XDP program applies
+ * offloading hints, and we check at the netif_receive_skb tracepoint
+ * whether the offloadings were propagated to the skbs.
+ */
+void test_xdp_offloading(void)
+{
+	const char *xdp_ifname = "veth_offloading";
+	struct nstoken *nstoken = NULL;
+	struct xdp_offloading *skel = NULL;
+	struct bpf_link *link_xdp, *link_tp;
+	const char *server_ip = "192.168.0.2";
+	const char *client_ip = "192.168.0.3";
+	int ifindex;
+
+	SYS(out, "ip netns add ns1");
+	SYS(out, "ip netns add ns2");
+	SYS(out, "ip -n ns1 link add veth1 type veth peer name %s netns ns2",
+	    xdp_ifname);
+	SYS(out, "ip -n ns1 link set veth1 up");
+	SYS(out, "ip -n ns2 link set veth_offloading up");
+	SYS(out, "ip -n ns1 addr add dev veth1 %s/31", server_ip);
+	SYS(out, "ip -n ns2 addr add dev %s %s/31", xdp_ifname, client_ip);
+
+	SYS(out, "ip netns exec ns2 ethtool -K %s gro on", xdp_ifname);
+
+	nstoken = open_netns("ns2");
+	if (!ASSERT_OK_PTR(nstoken, "setns"))
+		goto out;
+
+	skel = xdp_offloading__open();
+	if (!ASSERT_OK_PTR(skel, "skel"))
+		goto out;
+
+	ifindex = if_nametoindex(xdp_ifname);
+	if (!ASSERT_NEQ(ifindex, 0, "ifindex"))
+		goto out;
+
+	memcpy(skel->rodata->target_ifname, xdp_ifname, IFNAMSIZ);
+
+	if (!ASSERT_OK(xdp_offloading__load(skel), "load"))
+		goto out;
+
+	link_xdp = bpf_program__attach_xdp(skel->progs.xdp_disable_gro, ifindex);
+	if (!ASSERT_OK_PTR(link_xdp, "xdp_attach"))
+		goto out;
+
+	link_tp = bpf_program__attach(skel->progs.observe_skb_gro_disabled);
+	if (!ASSERT_OK_PTR(link_tp, "tp_attach"))
+		goto out;
+
+	run_tcp_test(server_ip);
+
+	/* the observer increments invalid_skb on any violation, so it must
+	 * still read 0 here
+	 */
+	ASSERT_EQ(__sync_fetch_and_add(&skel->bss->invalid_skb, 0), 0,
+		  "check invalid skbs");
+out:
+	if (nstoken)
+		close_netns(nstoken);
+	xdp_offloading__destroy(skel);
+	SYS_NOFAIL("ip netns del ns1");
+	SYS_NOFAIL("ip netns del ns2");
+}
diff --git a/tools/testing/selftests/bpf/progs/xdp_offloading.c b/tools/testing/selftests/bpf/progs/xdp_offloading.c
new file mode 100644
index 000000000000..5fd88d75b008
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/xdp_offloading.c
@@ -0,0 +1,50 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <vmlinux.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_core_read.h>
+
+#define IFNAMSIZ 16
+
+/* using a special ifname to filter unrelated traffic */
+const __u8 target_ifname[IFNAMSIZ];
+
+/* test outputs: these counters should be 0 to pass tests */
+int64_t invalid_skb = 0;
+
+extern int bpf_xdp_disable_gro(struct xdp_md *xdp) __ksym;
+
+/*
+ * Observing: after XDP disables GRO, gro_disabled bit should be set
+ * and gso_size should be 0.
+ */
+SEC("tp_btf/netif_receive_skb")
+int BPF_PROG(observe_skb_gro_disabled, struct sk_buff *skb)
+{
+	struct skb_shared_info *shinfo =
+		(struct skb_shared_info *)(skb->head + skb->end);
+	char devname[IFNAMSIZ];
+	int gso_size;
+
+	__builtin_memcpy(devname, skb->dev->name, IFNAMSIZ);
+	if (bpf_strncmp(devname, IFNAMSIZ, (const char *)target_ifname))
+		return 0;
+
+	if (!skb->gro_disabled)
+		__sync_fetch_and_add(&invalid_skb, 1);
+
+	gso_size = BPF_CORE_READ(shinfo, gso_size);
+	if (gso_size)
+		__sync_fetch_and_add(&invalid_skb, 1);
+
+	return 0;
+}
+
+SEC("xdp")
+int xdp_disable_gro(struct xdp_md *xdp)
+{
+	bpf_xdp_disable_gro(xdp);
+	return XDP_PASS;
+}
+
+char _license[] SEC("license") = "GPL";
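Assuming the series is applied and the selftests are built against the updated config (CONFIG_SKB_GRO_CONTROL=y plus the usual veth/XDP options), the new test should run with the standard selftest runner, e.g. "cd tools/testing/selftests/bpf && ./test_progs -t xdp_offloading". It sets up the two netns, pushes 10MB of TCP through the veth pair, and fails if any skb observed on veth_offloading still shows GRO aggregation or a cleared gro_disabled bit.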