From patchwork Fri Aug 13 11:47:46 2021
X-Patchwork-Id: 12435331
X-Patchwork-Delegate: bpf@iogearbox.net
From: Lorenzo Bianconi
To: bpf@vger.kernel.org, netdev@vger.kernel.org
Cc: lorenzo.bianconi@redhat.com, davem@davemloft.net, kuba@kernel.org,
    ast@kernel.org, daniel@iogearbox.net, shayagr@amazon.com,
    john.fastabend@gmail.com, dsahern@kernel.org, brouer@redhat.com,
    echaudro@redhat.com, jasowang@redhat.com, alexander.duyck@gmail.com,
    saeed@kernel.org, maciej.fijalkowski@intel.com, magnus.karlsson@intel.com,
    tirthendu.sarkar@intel.com, toke@redhat.com
Subject: [PATCH v11 bpf-next 05/18] net: xdp: add xdp_update_skb_shared_info utility routine
Date: Fri, 13 Aug 2021 13:47:46 +0200

Introduce the xdp_update_skb_shared_info routine to update the frags
array metadata in the skb_shared_info data structure when converting an
xdp_buff/xdp_frame into an skb. Given how skb_shared_info is laid out in
the xdp_frame/xdp_buff tailroom with xdp multi-buff support, there is no
need to run skb_add_rx_frag() and rebuild the frags array while
converting the buffer to an skb: the frags array sits at the same
location for the xdp_buff/xdp_frame and for the skb, so we just need to
update the memory metadata.

Introduce the XDP_FLAGS_FRAGS_PF_MEMALLOC flag in xdp_buff_flags to mark
the xdp_buff/xdp_frame as under memory pressure whenever pages in the
frags array are allocated from the pfmemalloc reserves. This allows
xdp_update_skb_shared_info to set skb->pfmemalloc without looping over
all fragments. The driver is expected to set the flag while constructing
the xdp_buff, using the xdp_buff_set_frag_pfmemalloc utility routine.
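For illustration only, a driver Rx path could propagate the pfmemalloc
state of each fragment page as sketched below. The helper name and its
surrounding logic are hypothetical and not part of this patch; only
xdp_buff_set_frag_pfmemalloc comes from this series.

#include <linux/mm.h>		/* page_is_pfmemalloc() */
#include <linux/skbuff.h>	/* skb_frag_* helpers */
#include <net/xdp.h>		/* xdp_buff, xdp_buff_set_frag_pfmemalloc() */

/* Hypothetical driver helper: append a page to the frags array of a
 * multi-buff xdp_buff and record whether the page was allocated from
 * pfmemalloc reserves, so the skb built later inherits skb->pfmemalloc
 * without walking every fragment. A real driver would also check
 * nr_frags against MAX_SKB_FRAGS before appending.
 */
static void example_xdp_add_frag(struct xdp_buff *xdp, struct page *page,
				 u32 offset, u32 len)
{
	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
	skb_frag_t *frag = &sinfo->frags[sinfo->nr_frags++];

	skb_frag_off_set(frag, offset);
	skb_frag_size_set(frag, len);
	__skb_frag_set_page(frag, page);

	if (page_is_pfmemalloc(page))
		xdp_buff_set_frag_pfmemalloc(xdp);
}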
Rely on xdp_update_skb_shared_info in the __xdp_build_skb_from_frame
routine to convert a multi-buff xdp_frame to an skb after performing a
XDP_REDIRECT.

Signed-off-by: Lorenzo Bianconi
---
 include/net/xdp.h | 33 ++++++++++++++++++++++++++++++++-
 net/core/xdp.c    | 17 +++++++++++++++++
 2 files changed, 49 insertions(+), 1 deletion(-)

diff --git a/include/net/xdp.h b/include/net/xdp.h
index ed5ea784fd45..53cccdc9528c 100644
--- a/include/net/xdp.h
+++ b/include/net/xdp.h
@@ -67,7 +67,10 @@ struct xdp_txq_info {
 };
 
 enum xdp_buff_flags {
-	XDP_FLAGS_MULTI_BUFF	= BIT(0), /* non-linear xdp buff */
+	XDP_FLAGS_MULTI_BUFF		= BIT(0), /* non-linear xdp buff */
+	XDP_FLAGS_FRAGS_PF_MEMALLOC	= BIT(1), /* xdp multi-buff paged memory
+						   * is under pressure
+						   */
 };
 
 struct xdp_buff {
@@ -96,6 +99,16 @@ static __always_inline void xdp_buff_clear_mb(struct xdp_buff *xdp)
 	xdp->flags &= ~XDP_FLAGS_MULTI_BUFF;
 }
 
+static __always_inline bool xdp_buff_is_frag_pfmemalloc(struct xdp_buff *xdp)
+{
+	return !!(xdp->flags & XDP_FLAGS_FRAGS_PF_MEMALLOC);
+}
+
+static __always_inline void xdp_buff_set_frag_pfmemalloc(struct xdp_buff *xdp)
+{
+	xdp->flags |= XDP_FLAGS_FRAGS_PF_MEMALLOC;
+}
+
 static __always_inline void
 xdp_init_buff(struct xdp_buff *xdp, u32 frame_sz, struct xdp_rxq_info *rxq)
 {
@@ -151,6 +164,11 @@ static __always_inline bool xdp_frame_is_mb(struct xdp_frame *frame)
 	return !!(frame->flags & XDP_FLAGS_MULTI_BUFF);
 }
 
+static __always_inline bool xdp_frame_is_frag_pfmemalloc(struct xdp_frame *frame)
+{
+	return !!(frame->flags & XDP_FLAGS_FRAGS_PF_MEMALLOC);
+}
+
 #define XDP_BULK_QUEUE_SIZE	16
 struct xdp_frame_bulk {
 	int count;
@@ -186,6 +204,19 @@ static inline void xdp_scrub_frame(struct xdp_frame *frame)
 	frame->dev_rx = NULL;
 }
 
+static inline void
+xdp_update_skb_shared_info(struct sk_buff *skb, u8 nr_frags,
+			   unsigned int size, unsigned int truesize,
+			   bool pfmemalloc)
+{
+	skb_shinfo(skb)->nr_frags = nr_frags;
+
+	skb->len += size;
+	skb->data_len += size;
+	skb->truesize += truesize;
+	skb->pfmemalloc |= pfmemalloc;
+}
+
 /* Avoids inlining WARN macro in fast-path */
 void xdp_warn(const char *msg, const char *func, const int line);
 #define XDP_WARN(msg) xdp_warn(msg, __func__, __LINE__)
diff --git a/net/core/xdp.c b/net/core/xdp.c
index cc92ccb38432..504be3ce3ca9 100644
--- a/net/core/xdp.c
+++ b/net/core/xdp.c
@@ -531,8 +531,20 @@ struct sk_buff *__xdp_build_skb_from_frame(struct xdp_frame *xdpf,
 					   struct sk_buff *skb,
 					   struct net_device *dev)
 {
+	unsigned int frag_size, frag_tsize;
 	unsigned int headroom, frame_size;
 	void *hard_start;
+	u8 nr_frags;
+
+	/* xdp multi-buff frame */
+	if (unlikely(xdp_frame_is_mb(xdpf))) {
+		struct skb_shared_info *sinfo;
+
+		sinfo = xdp_get_shared_info_from_frame(xdpf);
+		frag_tsize = sinfo->xdp_frags_tsize;
+		frag_size = sinfo->xdp_frags_size;
+		nr_frags = sinfo->nr_frags;
+	}
 
 	/* Part of headroom was reserved to xdpf */
 	headroom = sizeof(*xdpf) + xdpf->headroom;
@@ -552,6 +564,11 @@ struct sk_buff *__xdp_build_skb_from_frame(struct xdp_frame *xdpf,
 	if (xdpf->metasize)
 		skb_metadata_set(skb, xdpf->metasize);
 
+	if (unlikely(xdp_frame_is_mb(xdpf)))
+		xdp_update_skb_shared_info(skb, nr_frags,
+					   frag_size, frag_tsize,
+					   xdp_frame_is_frag_pfmemalloc(xdpf));
+
 	/* Essential SKB info: protocol and skb->dev */
 	skb->protocol = eth_type_trans(skb, dev);
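
As a usage note, here is a sketch (illustrative only, not part of this
patch) of how a driver building an skb directly from a multi-buff
xdp_buff on XDP_PASS could reuse the same helper. The function name
example_build_mb_skb is hypothetical; xdp_buff_is_mb() comes from an
earlier patch in this series, and the xdp_frags_size/xdp_frags_tsize
accesses mirror the usage in __xdp_build_skb_from_frame above. Note the
metadata snapshot must happen before build_skb(), since build_skb()
reinitializes the skb_shared_info area living in the buffer tailroom.

#include <linux/skbuff.h>	/* build_skb(), skb_reserve(), skb_put() */
#include <net/xdp.h>		/* xdp_buff helpers */

static struct sk_buff *example_build_mb_skb(struct xdp_buff *xdp)
{
	unsigned int size = 0, tsize = 0;
	u8 nr_frags = 0;
	struct sk_buff *skb;

	/* Snapshot the frags metadata before build_skb() clears the
	 * shared_info area shared with the xdp_buff tailroom.
	 */
	if (xdp_buff_is_mb(xdp)) {
		struct skb_shared_info *sinfo;

		sinfo = xdp_get_shared_info_from_buff(xdp);
		size = sinfo->xdp_frags_size;
		tsize = sinfo->xdp_frags_tsize;
		nr_frags = sinfo->nr_frags;
	}

	skb = build_skb(xdp->data_hard_start, xdp->frame_sz);
	if (!skb)
		return NULL;

	skb_reserve(skb, xdp->data - xdp->data_hard_start);
	skb_put(skb, xdp->data_end - xdp->data);

	/* Re-attach the frags metadata in a single step instead of
	 * calling skb_add_rx_frag() per fragment.
	 */
	if (nr_frags)
		xdp_update_skb_shared_info(skb, nr_frags, size, tsize,
					   xdp_buff_is_frag_pfmemalloc(xdp));

	return skb;
}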