From patchwork Fri Mar 19 21:47:15 2021
From: Lorenzo Bianconi <lorenzo@kernel.org>
To: bpf@vger.kernel.org, netdev@vger.kernel.org
Cc: lorenzo.bianconi@redhat.com, davem@davemloft.net, kuba@kernel.org, ast@kernel.org, daniel@iogearbox.net, shayagr@amazon.com, john.fastabend@gmail.com, dsahern@kernel.org, brouer@redhat.com, echaudro@redhat.com, jasowang@redhat.com, alexander.duyck@gmail.com, saeed@kernel.org, maciej.fijalkowski@intel.com, sameehj@amazon.com
Subject: [PATCH v7 bpf-next 01/14] xdp: introduce mb in xdp_buff/xdp_frame
Date: Fri, 19 Mar 2021 22:47:15 +0100

Introduce a multi-buffer bit (mb) in the xdp_frame/xdp_buff data structures
to specify whether this is a linear buffer (mb = 0) or a multi-buffer frame
(mb = 1). In the latter case the shared_info area at the end of the first
buffer is properly initialized to link together subsequent buffers.
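[Editor's note] A hedged sketch of the semantics this bit gives consumers, not part of the patch itself; process_linear() and process_frags() are hypothetical placeholders:

/* Sketch only: a consumer branches on xdp->mb before touching the
 * shared_info area; process_linear()/process_frags() are hypothetical.
 */
static void rx_consume(struct xdp_buff *xdp)
{
	if (!xdp->mb) {
		/* linear buffer: data..data_end is all there is */
		process_linear(xdp->data, xdp->data_end - xdp->data);
		return;
	}

	/* mb == 1: the shared_info area at data_hard_end is valid and
	 * links the remaining buffers
	 */
	process_frags(xdp);
}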
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 include/net/xdp.h | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/include/net/xdp.h b/include/net/xdp.h
index a5bc214a49d9..b57ff2c81e7c 100644
--- a/include/net/xdp.h
+++ b/include/net/xdp.h
@@ -73,7 +73,8 @@ struct xdp_buff {
 	void *data_hard_start;
 	struct xdp_rxq_info *rxq;
 	struct xdp_txq_info *txq;
-	u32 frame_sz; /* frame size to deduce data_hard_end/reserved tailroom*/
+	u32 frame_sz:31; /* frame size to deduce data_hard_end/reserved tailroom*/
+	u32 mb:1; /* xdp non-linear buffer */
 };
 
 static __always_inline void
@@ -81,6 +82,7 @@ xdp_init_buff(struct xdp_buff *xdp, u32 frame_sz, struct xdp_rxq_info *rxq)
 {
 	xdp->frame_sz = frame_sz;
 	xdp->rxq = rxq;
+	xdp->mb = 0;
 }
 
 static __always_inline void
@@ -116,7 +118,8 @@ struct xdp_frame {
 	u16 len;
 	u16 headroom;
 	u32 metasize:8;
-	u32 frame_sz:24;
+	u32 frame_sz:23;
+	u32 mb:1; /* xdp non-linear frame */
 	/* Lifetime of xdp_rxq_info is limited to NAPI/enqueue time,
 	 * while mem info is valid on remote CPU.
 	 */
@@ -179,6 +182,7 @@ void xdp_convert_frame_to_buff(struct xdp_frame *frame, struct xdp_buff *xdp)
 	xdp->data_end = frame->data + frame->len;
 	xdp->data_meta = frame->data - frame->metasize;
 	xdp->frame_sz = frame->frame_sz;
+	xdp->mb = frame->mb;
 }
 
 static inline
@@ -205,6 +209,7 @@ int xdp_update_frame_from_buff(struct xdp_buff *xdp,
 	xdp_frame->headroom = headroom - sizeof(*xdp_frame);
 	xdp_frame->metasize = metasize;
 	xdp_frame->frame_sz = xdp->frame_sz;
+	xdp_frame->mb = xdp->mb;
 
 	return 0;
 }

From patchwork Fri Mar 19 21:47:16 2021
From: Lorenzo Bianconi <lorenzo@kernel.org>
Subject: [PATCH v7 bpf-next 02/14] xdp: add xdp_shared_info data structure
Date: Fri, 19 Mar 2021 22:47:16 +0100

Introduce the xdp_shared_info data structure to hold information about
"non-linear" xdp frames. xdp_shared_info aliases skb_shared_info, allowing
most of the frags to stay in the same cache line.
Introduce some xdp_shared_info helpers aligned to the skb_frag* ones.

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 drivers/net/ethernet/marvell/mvneta.c | 62 +++++++++++++++------------
 include/net/xdp.h                     | 55 ++++++++++++++++++++++--
 2 files changed, 85 insertions(+), 32 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 20307eec8988..b21ba3e36264 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -2036,14 +2036,17 @@ int mvneta_rx_refill_queue(struct mvneta_port *pp, struct mvneta_rx_queue *rxq)
 
 static void
 mvneta_xdp_put_buff(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
-		    struct xdp_buff *xdp, struct skb_shared_info *sinfo,
+		    struct xdp_buff *xdp, struct xdp_shared_info *xdp_sinfo,
 		    int sync_len)
 {
 	int i;
 
-	for (i = 0; i < sinfo->nr_frags; i++)
+	for (i = 0; i < xdp_sinfo->nr_frags; i++) {
+		skb_frag_t *frag = &xdp_sinfo->frags[i];
+
 		page_pool_put_full_page(rxq->page_pool,
-					skb_frag_page(&sinfo->frags[i]), true);
+					xdp_get_frag_page(frag), true);
+	}
 	page_pool_put_page(rxq->page_pool, virt_to_head_page(xdp->data),
 			   sync_len, true);
 }
@@ -2181,7 +2184,7 @@ mvneta_run_xdp(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 	       struct bpf_prog *prog, struct xdp_buff *xdp, u32 frame_sz,
 	       struct mvneta_stats *stats)
 {
-	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
+	struct xdp_shared_info *xdp_sinfo = xdp_get_shared_info_from_buff(xdp);
 	unsigned int len, data_len, sync;
 	u32 ret, act;
 
@@ -2202,7 +2205,7 @@ mvneta_run_xdp(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 
 		err = xdp_do_redirect(pp->dev, xdp, prog);
 		if (unlikely(err)) {
-			mvneta_xdp_put_buff(pp, rxq, xdp, sinfo, sync);
+			mvneta_xdp_put_buff(pp, rxq, xdp, xdp_sinfo, sync);
 			ret = MVNETA_XDP_DROPPED;
 		} else {
 			ret = MVNETA_XDP_REDIR;
@@ -2213,7 +2216,7 @@ mvneta_run_xdp(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 	case XDP_TX:
 		ret = mvneta_xdp_xmit_back(pp, xdp);
 		if (ret != MVNETA_XDP_TX)
-			mvneta_xdp_put_buff(pp, rxq, xdp, sinfo, sync);
+			mvneta_xdp_put_buff(pp, rxq, xdp, xdp_sinfo, sync);
 		break;
 	default:
 		bpf_warn_invalid_xdp_action(act);
@@ -2222,7 +2225,7 @@ mvneta_run_xdp(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 		trace_xdp_exception(pp->dev, prog, act);
 		fallthrough;
 	case XDP_DROP:
-		mvneta_xdp_put_buff(pp, rxq, xdp, sinfo, sync);
+		mvneta_xdp_put_buff(pp, rxq, xdp, xdp_sinfo, sync);
 		ret = MVNETA_XDP_DROPPED;
 		stats->xdp_drop++;
 		break;
@@ -2243,9 +2246,9 @@ mvneta_swbm_rx_frame(struct mvneta_port *pp,
 {
 	unsigned char *data = page_address(page);
 	int data_len = -MVNETA_MH_SIZE, len;
+	struct xdp_shared_info *xdp_sinfo;
 	struct net_device *dev = pp->dev;
 	enum dma_data_direction dma_dir;
-	struct skb_shared_info *sinfo;
 
 	if (*size > MVNETA_MAX_RX_BUF_SIZE) {
 		len = MVNETA_MAX_RX_BUF_SIZE;
@@ -2268,8 +2271,8 @@ mvneta_swbm_rx_frame(struct mvneta_port *pp,
 	xdp_prepare_buff(xdp, data, pp->rx_offset_correction + MVNETA_MH_SIZE,
 			 data_len, false);
 
-	sinfo = xdp_get_shared_info_from_buff(xdp);
-	sinfo->nr_frags = 0;
+	xdp_sinfo = xdp_get_shared_info_from_buff(xdp);
+	xdp_sinfo->nr_frags = 0;
 }
 
 static void
@@ -2277,7 +2280,7 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,
 			    struct mvneta_rx_desc *rx_desc,
 			    struct mvneta_rx_queue *rxq,
 			    struct xdp_buff *xdp, int *size,
-			    struct skb_shared_info *xdp_sinfo,
+			    struct xdp_shared_info *xdp_sinfo,
 			    struct page *page)
 {
 	struct net_device *dev = pp->dev;
@@ -2300,13 +2303,13 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,
 	if (data_len > 0 && xdp_sinfo->nr_frags < MAX_SKB_FRAGS) {
 		skb_frag_t *frag = &xdp_sinfo->frags[xdp_sinfo->nr_frags++];
 
-		skb_frag_off_set(frag, pp->rx_offset_correction);
-		skb_frag_size_set(frag, data_len);
-		__skb_frag_set_page(frag, page);
+		xdp_set_frag_offset(frag, pp->rx_offset_correction);
+		xdp_set_frag_size(frag, data_len);
+		xdp_set_frag_page(frag, page);
 
 		/* last fragment */
 		if (len == *size) {
-			struct skb_shared_info *sinfo;
+			struct xdp_shared_info *sinfo;
 
 			sinfo = xdp_get_shared_info_from_buff(xdp);
 			sinfo->nr_frags = xdp_sinfo->nr_frags;
@@ -2323,10 +2326,13 @@ static struct sk_buff *
 mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 		      struct xdp_buff *xdp, u32 desc_status)
 {
-	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
-	int i, num_frags = sinfo->nr_frags;
+	struct xdp_shared_info *xdp_sinfo = xdp_get_shared_info_from_buff(xdp);
+	int i, num_frags = xdp_sinfo->nr_frags;
+	skb_frag_t frag_list[MAX_SKB_FRAGS];
 	struct sk_buff *skb;
 
+	memcpy(frag_list, xdp_sinfo->frags, sizeof(skb_frag_t) * num_frags);
+
 	skb = build_skb(xdp->data_hard_start, PAGE_SIZE);
 	if (!skb)
 		return ERR_PTR(-ENOMEM);
@@ -2338,12 +2344,12 @@ mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 	mvneta_rx_csum(pp, desc_status, skb);
 
 	for (i = 0; i < num_frags; i++) {
-		skb_frag_t *frag = &sinfo->frags[i];
+		struct page *page = xdp_get_frag_page(&frag_list[i]);
 
 		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
-				skb_frag_page(frag), skb_frag_off(frag),
-				skb_frag_size(frag), PAGE_SIZE);
-		page_pool_release_page(rxq->page_pool, skb_frag_page(frag));
+				page, xdp_get_frag_offset(&frag_list[i]),
+				xdp_get_frag_size(&frag_list[i]), PAGE_SIZE);
+		page_pool_release_page(rxq->page_pool, page);
 	}
 
 	return skb;
@@ -2356,7 +2362,7 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 {
 	int rx_proc = 0, rx_todo, refill, size = 0;
 	struct net_device *dev = pp->dev;
-	struct skb_shared_info sinfo;
+	struct xdp_shared_info xdp_sinfo;
 	struct mvneta_stats ps = {};
 	struct bpf_prog *xdp_prog;
 	u32 desc_status, frame_sz;
@@ -2365,7 +2371,7 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 	xdp_init_buff(&xdp_buf, PAGE_SIZE, &rxq->xdp_rxq);
 	xdp_buf.data_hard_start = NULL;
 
-	sinfo.nr_frags = 0;
+	xdp_sinfo.nr_frags = 0;
 
 	/* Get number of received packets */
 	rx_todo = mvneta_rxq_busy_desc_num_get(pp, rxq);
@@ -2409,7 +2415,7 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 			}
 
 			mvneta_swbm_add_rx_fragment(pp, rx_desc, rxq, &xdp_buf,
-						    &size, &sinfo, page);
+						    &size, &xdp_sinfo, page);
 		} /* Middle or Last descriptor */
 
 		if (!(rx_status & MVNETA_RXD_LAST_DESC))
			continue;
 
 		if (size) {
-			mvneta_xdp_put_buff(pp, rxq, &xdp_buf, &sinfo, -1);
+			mvneta_xdp_put_buff(pp, rxq, &xdp_buf, &xdp_sinfo, -1);
 			goto next;
 		}
@@ -2429,7 +2435,7 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 		if (IS_ERR(skb)) {
 			struct mvneta_pcpu_stats *stats = this_cpu_ptr(pp->stats);
 
-			mvneta_xdp_put_buff(pp, rxq, &xdp_buf, &sinfo, -1);
+			mvneta_xdp_put_buff(pp, rxq, &xdp_buf, &xdp_sinfo, -1);
 
 			u64_stats_update_begin(&stats->syncp);
 			stats->es.skb_alloc_error++;
@@ -2446,12 +2452,12 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 		napi_gro_receive(napi, skb);
 next:
 		xdp_buf.data_hard_start = NULL;
-		sinfo.nr_frags = 0;
+		xdp_sinfo.nr_frags = 0;
 	}
 	rcu_read_unlock();
 
 	if (xdp_buf.data_hard_start)
-		mvneta_xdp_put_buff(pp, rxq, &xdp_buf, &sinfo, -1);
+		mvneta_xdp_put_buff(pp, rxq, &xdp_buf, &xdp_sinfo, -1);
 
 	if (ps.xdp_redirect)
 		xdp_do_flush_map();
diff --git a/include/net/xdp.h b/include/net/xdp.h
index b57ff2c81e7c..5b3874b68f99 100644
--- a/include/net/xdp.h
+++ b/include/net/xdp.h
@@ -107,10 +107,54 @@ xdp_prepare_buff(struct xdp_buff *xdp, unsigned char *hard_start,
 	((xdp)->data_hard_start + (xdp)->frame_sz -	\
 	 SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))
 
-static inline struct skb_shared_info *
+struct xdp_shared_info {
+	u16 nr_frags;
+	u16 data_length; /* paged area length */
+	skb_frag_t frags[MAX_SKB_FRAGS];
+};
+
+static inline struct xdp_shared_info *
 xdp_get_shared_info_from_buff(struct xdp_buff *xdp)
 {
-	return (struct skb_shared_info *)xdp_data_hard_end(xdp);
+	BUILD_BUG_ON(sizeof(struct xdp_shared_info) >
+		     sizeof(struct skb_shared_info));
+	return (struct xdp_shared_info *)xdp_data_hard_end(xdp);
+}
+
+static inline struct page *xdp_get_frag_page(const skb_frag_t *frag)
+{
+	return frag->bv_page;
+}
+
+static inline unsigned int xdp_get_frag_offset(const skb_frag_t *frag)
+{
+	return frag->bv_offset;
+}
+
+static inline unsigned int xdp_get_frag_size(const skb_frag_t *frag)
+{
+	return frag->bv_len;
+}
+
+static inline void *xdp_get_frag_address(const skb_frag_t *frag)
+{
+	return page_address(xdp_get_frag_page(frag)) +
+	       xdp_get_frag_offset(frag);
+}
+
+static inline void xdp_set_frag_page(skb_frag_t *frag, struct page *page)
+{
+	frag->bv_page = page;
+}
+
+static inline void xdp_set_frag_offset(skb_frag_t *frag, u32 offset)
+{
+	frag->bv_offset = offset;
+}
+
+static inline void xdp_set_frag_size(skb_frag_t *frag, u32 size)
+{
+	frag->bv_len = size;
 }
 
 struct xdp_frame {
@@ -140,12 +184,15 @@ static __always_inline void xdp_frame_bulk_init(struct xdp_frame_bulk *bq)
 	bq->xa = NULL;
 }
 
-static inline struct skb_shared_info *
+static inline struct xdp_shared_info *
 xdp_get_shared_info_from_frame(struct xdp_frame *frame)
 {
 	void *data_hard_start = frame->data - frame->headroom - sizeof(*frame);
 
-	return (struct skb_shared_info *)(data_hard_start + frame->frame_sz -
+	/* xdp_shared_info struct must be aligned to skb_shared_info
+	 * area in buffer tailroom
+	 */
+	return (struct xdp_shared_info *)(data_hard_start + frame->frame_sz -
 				SKB_DATA_ALIGN(sizeof(struct skb_shared_info)));
 }
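[Editor's note] A hedged sketch of how the new helpers compose; total_len() is illustrative and not part of the series:

/* Sketch only: total packet length of a possibly multi-buffer xdp_buff,
 * walking the frags with the helpers this patch introduces.
 */
static unsigned int total_len(struct xdp_buff *xdp)
{
	unsigned int len = xdp->data_end - xdp->data; /* linear part */
	struct xdp_shared_info *xdp_sinfo;
	int i;

	if (!xdp->mb)
		return len;

	xdp_sinfo = xdp_get_shared_info_from_buff(xdp);
	for (i = 0; i < xdp_sinfo->nr_frags; i++)
		len += xdp_get_frag_size(&xdp_sinfo->frags[i]);

	/* equals len + xdp_sinfo->data_length once drivers fill it */
	return len;
}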
From patchwork Fri Mar 19 21:47:17 2021
From: Lorenzo Bianconi <lorenzo@kernel.org>
Subject: [PATCH v7 bpf-next 03/14] net: mvneta: update mb bit before passing the xdp buffer to eBPF layer
Date: Fri, 19 Mar 2021 22:47:17 +0100
Message-Id: <21b8359604d981412afc40f0a87a7ffd7c41eb84.1616179034.git.lorenzo@kernel.org>

Update the multi-buffer bit (mb) in xdp_buff to notify the XDP/eBPF layer
and XDP remote drivers that this is a "non-linear" XDP buffer. Access
xdp_shared_info only if the xdp_buff mb bit is set.
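[Editor's note] The pattern the driver follows, reduced to a hedged sketch with simplified names; the real code below accumulates frags in an on-stack copy and writes them back to the buffer tailroom only on the last descriptor:

/* Sketch only: flag the buffer non-linear once the first fragment is
 * queued, so consumers know the shared_info area is meaningful.
 */
static void rx_queue_frag(struct xdp_buff *xdp, struct page *page,
			  u32 offset, u32 len)
{
	struct xdp_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
	skb_frag_t *frag = &sinfo->frags[sinfo->nr_frags++];

	xdp_set_frag_page(frag, page);
	xdp_set_frag_offset(frag, offset);
	xdp_set_frag_size(frag, len);
	sinfo->data_length += len;	/* paged area length */

	xdp->mb = 1;	/* shared_info is now valid for consumers */
}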
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 drivers/net/ethernet/marvell/mvneta.c | 26 ++++++++++++++++++++------
 1 file changed, 20 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index b21ba3e36264..009b2c5a90b1 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -2041,12 +2041,16 @@ mvneta_xdp_put_buff(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 {
 	int i;
 
+	if (likely(!xdp->mb))
+		goto out;
+
 	for (i = 0; i < xdp_sinfo->nr_frags; i++) {
 		skb_frag_t *frag = &xdp_sinfo->frags[i];
 
 		page_pool_put_full_page(rxq->page_pool,
 					xdp_get_frag_page(frag), true);
 	}
+out:
 	page_pool_put_page(rxq->page_pool, virt_to_head_page(xdp->data),
 			   sync_len, true);
 }
@@ -2246,7 +2250,6 @@ mvneta_swbm_rx_frame(struct mvneta_port *pp,
 {
 	unsigned char *data = page_address(page);
 	int data_len = -MVNETA_MH_SIZE, len;
-	struct xdp_shared_info *xdp_sinfo;
 	struct net_device *dev = pp->dev;
 	enum dma_data_direction dma_dir;
 
@@ -2270,9 +2273,6 @@ mvneta_swbm_rx_frame(struct mvneta_port *pp,
 	prefetch(data);
 	xdp_prepare_buff(xdp, data, pp->rx_offset_correction + MVNETA_MH_SIZE,
 			 data_len, false);
-
-	xdp_sinfo = xdp_get_shared_info_from_buff(xdp);
-	xdp_sinfo->nr_frags = 0;
 }
 
 static void
@@ -2307,12 +2307,18 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,
 		xdp_set_frag_size(frag, data_len);
 		xdp_set_frag_page(frag, page);
 
+		if (!xdp->mb) {
+			xdp_sinfo->data_length = *size;
+			xdp->mb = 1;
+		}
 		/* last fragment */
 		if (len == *size) {
 			struct xdp_shared_info *sinfo;
 
 			sinfo = xdp_get_shared_info_from_buff(xdp);
 			sinfo->nr_frags = xdp_sinfo->nr_frags;
+			sinfo->data_length = xdp_sinfo->data_length;
+
 			memcpy(sinfo->frags, xdp_sinfo->frags,
 			       sinfo->nr_frags * sizeof(skb_frag_t));
 		}
@@ -2327,11 +2333,15 @@ mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 		      struct xdp_buff *xdp, u32 desc_status)
 {
 	struct xdp_shared_info *xdp_sinfo = xdp_get_shared_info_from_buff(xdp);
-	int i, num_frags = xdp_sinfo->nr_frags;
 	skb_frag_t frag_list[MAX_SKB_FRAGS];
+	int i, num_frags = 0;
 	struct sk_buff *skb;
 
-	memcpy(frag_list, xdp_sinfo->frags, sizeof(skb_frag_t) * num_frags);
+	if (unlikely(xdp->mb)) {
+		num_frags = xdp_sinfo->nr_frags;
+		memcpy(frag_list, xdp_sinfo->frags,
+		       sizeof(skb_frag_t) * num_frags);
+	}
 
 	skb = build_skb(xdp->data_hard_start, PAGE_SIZE);
 	if (!skb)
@@ -2343,6 +2353,9 @@ mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 	skb_put(skb, xdp->data_end - xdp->data);
 	mvneta_rx_csum(pp, desc_status, skb);
 
+	if (likely(!xdp->mb))
+		return skb;
+
 	for (i = 0; i < num_frags; i++) {
 		struct page *page = xdp_get_frag_page(&frag_list[i]);
 
@@ -2404,6 +2417,7 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 			frame_sz = size - ETH_FCS_LEN;
 			desc_status = rx_status;
 
+			xdp_buf.mb = 0;
 			mvneta_swbm_rx_frame(pp, rx_desc, rxq, &xdp_buf,
 					     &size, page);
 		} else {
From patchwork Fri Mar 19 21:47:18 2021
From: Lorenzo Bianconi <lorenzo@kernel.org>
Subject: [PATCH v7 bpf-next 04/14] xdp: add multi-buff support to xdp_return_{buff/frame}
Date: Fri, 19 Mar 2021 22:47:18 +0100

Take into account whether the received xdp_buff/xdp_frame is non-linear
when recycling/returning the frame memory to the allocator or into
xdp_frame_bulk. Introduce xdp_return_num_frags_from_buff to return a given
number of fragments of an xdp multi-buff, starting from the tail.

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 include/net/xdp.h | 19 ++++++++++--
 net/core/xdp.c    | 76 ++++++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 92 insertions(+), 3 deletions(-)

diff --git a/include/net/xdp.h b/include/net/xdp.h
index 5b3874b68f99..8be1b5e5a08a 100644
--- a/include/net/xdp.h
+++ b/include/net/xdp.h
@@ -287,6 +287,7 @@ void xdp_return_buff(struct xdp_buff *xdp);
 void xdp_flush_frame_bulk(struct xdp_frame_bulk *bq);
 void xdp_return_frame_bulk(struct xdp_frame *xdpf,
 			   struct xdp_frame_bulk *bq);
+void xdp_return_num_frags_from_buff(struct xdp_buff *xdp, u16 num_frags);
 
 /* When sending xdp_frame into the network stack, then there is no
  * return point callback, which is needed to release e.g. DMA-mapping
@@ -297,10 +298,24 @@ void __xdp_release_frame(void *data, struct xdp_mem_info *mem);
 static inline void xdp_release_frame(struct xdp_frame *xdpf)
 {
 	struct xdp_mem_info *mem = &xdpf->mem;
+	struct xdp_shared_info *xdp_sinfo;
+	int i;
 
 	/* Curr only page_pool needs this */
-	if (mem->type == MEM_TYPE_PAGE_POOL)
-		__xdp_release_frame(xdpf->data, mem);
+	if (mem->type != MEM_TYPE_PAGE_POOL)
+		return;
+
+	if (likely(!xdpf->mb))
+		goto out;
+
+	xdp_sinfo = xdp_get_shared_info_from_frame(xdpf);
+	for (i = 0; i < xdp_sinfo->nr_frags; i++) {
+		struct page *page = xdp_get_frag_page(&xdp_sinfo->frags[i]);
+
+		__xdp_release_frame(page_address(page), mem);
+	}
+out:
+	__xdp_release_frame(xdpf->data, mem);
 }
 
 int xdp_rxq_info_reg(struct xdp_rxq_info *xdp_rxq,
diff --git a/net/core/xdp.c b/net/core/xdp.c
index 05354976c1fc..430f516259d9 100644
--- a/net/core/xdp.c
+++ b/net/core/xdp.c
@@ -374,12 +374,38 @@ static void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct,
 
 void xdp_return_frame(struct xdp_frame *xdpf)
 {
+	struct xdp_shared_info *xdp_sinfo;
+	int i;
+
+	if (likely(!xdpf->mb))
+		goto out;
+
+	xdp_sinfo = xdp_get_shared_info_from_frame(xdpf);
+	for (i = 0; i < xdp_sinfo->nr_frags; i++) {
+		struct page *page = xdp_get_frag_page(&xdp_sinfo->frags[i]);
+
+		__xdp_return(page_address(page), &xdpf->mem, false, NULL);
+	}
+out:
 	__xdp_return(xdpf->data, &xdpf->mem, false, NULL);
 }
 EXPORT_SYMBOL_GPL(xdp_return_frame);
 
 void xdp_return_frame_rx_napi(struct xdp_frame *xdpf)
 {
+	struct xdp_shared_info *xdp_sinfo;
+	int i;
+
+	if (likely(!xdpf->mb))
+		goto out;
+
+	xdp_sinfo = xdp_get_shared_info_from_frame(xdpf);
+	for (i = 0; i < xdp_sinfo->nr_frags; i++) {
+		struct page *page = xdp_get_frag_page(&xdp_sinfo->frags[i]);
+
+		__xdp_return(page_address(page), &xdpf->mem, true, NULL);
+	}
+out:
 	__xdp_return(xdpf->data, &xdpf->mem, true, NULL);
 }
 EXPORT_SYMBOL_GPL(xdp_return_frame_rx_napi);
 
@@ -415,7 +441,7 @@ void xdp_return_frame_bulk(struct xdp_frame *xdpf,
 	struct xdp_mem_allocator *xa;
 
 	if (mem->type != MEM_TYPE_PAGE_POOL) {
-		__xdp_return(xdpf->data, &xdpf->mem, false, NULL);
+		xdp_return_frame(xdpf);
 		return;
 	}
 
@@ -434,15 +460,63 @@ void xdp_return_frame_bulk(struct xdp_frame *xdpf,
 		bq->xa = rhashtable_lookup(mem_id_ht, &mem->id, mem_id_rht_params);
 	}
 
+	if (unlikely(xdpf->mb)) {
+		struct xdp_shared_info *xdp_sinfo;
+		int i;
+
+		xdp_sinfo = xdp_get_shared_info_from_frame(xdpf);
+		for (i = 0; i < xdp_sinfo->nr_frags; i++) {
+			skb_frag_t *frag = &xdp_sinfo->frags[i];
+
+			bq->q[bq->count++] = xdp_get_frag_address(frag);
+			if (bq->count == XDP_BULK_QUEUE_SIZE)
+				xdp_flush_frame_bulk(bq);
+		}
+	}
 	bq->q[bq->count++] = xdpf->data;
 }
 EXPORT_SYMBOL_GPL(xdp_return_frame_bulk);
 
 void xdp_return_buff(struct xdp_buff *xdp)
 {
+	struct xdp_shared_info *xdp_sinfo;
+	int i;
+
+	if (likely(!xdp->mb))
+		goto out;
+
+	xdp_sinfo = xdp_get_shared_info_from_buff(xdp);
+	for (i = 0; i < xdp_sinfo->nr_frags; i++) {
+		struct page *page = xdp_get_frag_page(&xdp_sinfo->frags[i]);
+
+		__xdp_return(page_address(page), &xdp->rxq->mem, true, xdp);
+	}
+out:
 	__xdp_return(xdp->data, &xdp->rxq->mem, true, xdp);
 }
 
+void xdp_return_num_frags_from_buff(struct xdp_buff *xdp, u16 num_frags)
+{
+	struct xdp_shared_info *xdp_sinfo;
+	int i;
+
+	if (unlikely(!xdp->mb))
+		return;
+
+	xdp_sinfo = xdp_get_shared_info_from_buff(xdp);
+	num_frags = min_t(u16, num_frags, xdp_sinfo->nr_frags);
+	for (i = 1; i <= num_frags; i++) {
+		skb_frag_t *frag = &xdp_sinfo->frags[xdp_sinfo->nr_frags - i];
+		struct page *page = xdp_get_frag_page(frag);
+
+		xdp_sinfo->data_length -= xdp_get_frag_size(frag);
+		__xdp_return(page_address(page), &xdp->rxq->mem, false, NULL);
+	}
+	xdp_sinfo->nr_frags -= num_frags;
+	xdp->mb = !!xdp_sinfo->nr_frags;
+}
+EXPORT_SYMBOL_GPL(xdp_return_num_frags_from_buff);
+
 /* Only called for MEM_TYPE_PAGE_POOL see xdp.h */
 void __xdp_release_frame(void *data, struct xdp_mem_info *mem)
 {

From patchwork Fri Mar 19 21:47:19 2021
From: Lorenzo Bianconi <lorenzo@kernel.org>
Subject: [PATCH v7 bpf-next 05/14] net: mvneta: add multi buffer support to XDP_TX
Date: Fri, 19 Mar 2021 22:47:19 +0100
Message-Id: <27adb990b7e47e7b4f6fb1b590a37042f874f24d.1616179034.git.lorenzo@kernel.org>

Introduce the capability to map non-linear xdp buffers in
mvneta_xdp_submit_frame() for XDP_TX and XDP_REDIRECT.

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
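[Editor's note] The first/last descriptor handling in the diff below can be summarized by this hedged sketch; fill_desc() stands in for the hardware-specific descriptor setup and is not a real function:

/* Sketch only: one TX descriptor per buffer; the head carries the
 * "first" flag, the final fragment the "last" flag, middle ones neither.
 */
static int xmit_mb_frame(struct xdp_frame *xdpf)
{
	struct xdp_shared_info *sinfo = xdp_get_shared_info_from_frame(xdpf);
	int i, num_frames = xdpf->mb ? sinfo->nr_frags + 1 : 1;

	for (i = 0; i < num_frames; i++) {
		skb_frag_t *frag = i ? &sinfo->frags[i - 1] : NULL;
		void *data = frag ? xdp_get_frag_address(frag) : xdpf->data;
		int len = frag ? xdp_get_frag_size(frag) : xdpf->len;

		fill_desc(data, len, i == 0, i == num_frames - 1);
	}
	return num_frames;
}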
---
 drivers/net/ethernet/marvell/mvneta.c | 91 ++++++++++++++++-----------
 1 file changed, 55 insertions(+), 36 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 009b2c5a90b1..226d76e7ccc8 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -1860,8 +1860,8 @@ static void mvneta_txq_bufs_free(struct mvneta_port *pp,
 			bytes_compl += buf->skb->len;
 			pkts_compl++;
 			dev_kfree_skb_any(buf->skb);
-		} else if (buf->type == MVNETA_TYPE_XDP_TX ||
-			   buf->type == MVNETA_TYPE_XDP_NDO) {
+		} else if ((buf->type == MVNETA_TYPE_XDP_TX ||
+			    buf->type == MVNETA_TYPE_XDP_NDO) && buf->xdpf) {
 			if (napi && buf->type == MVNETA_TYPE_XDP_TX)
 				xdp_return_frame_rx_napi(buf->xdpf);
 			else
@@ -2057,45 +2057,64 @@ mvneta_xdp_put_buff(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 
 static int
 mvneta_xdp_submit_frame(struct mvneta_port *pp, struct mvneta_tx_queue *txq,
-			struct xdp_frame *xdpf, bool dma_map)
+			struct xdp_frame *xdpf, int *nxmit_byte, bool dma_map)
 {
-	struct mvneta_tx_desc *tx_desc;
-	struct mvneta_tx_buf *buf;
-	dma_addr_t dma_addr;
+	struct xdp_shared_info *xdp_sinfo = xdp_get_shared_info_from_frame(xdpf);
+	int i, num_frames = xdpf->mb ? xdp_sinfo->nr_frags + 1 : 1;
+	struct mvneta_tx_desc *tx_desc = NULL;
+	struct page *page;
 
-	if (txq->count >= txq->tx_stop_threshold)
+	if (txq->count + num_frames >= txq->size)
 		return MVNETA_XDP_DROPPED;
 
-	tx_desc = mvneta_txq_next_desc_get(txq);
+	for (i = 0; i < num_frames; i++) {
+		struct mvneta_tx_buf *buf = &txq->buf[txq->txq_put_index];
+		skb_frag_t *frag = i ? &xdp_sinfo->frags[i - 1] : NULL;
+		int len = i ? xdp_get_frag_size(frag) : xdpf->len;
+		dma_addr_t dma_addr;
 
-	buf = &txq->buf[txq->txq_put_index];
-	if (dma_map) {
-		/* ndo_xdp_xmit */
-		dma_addr = dma_map_single(pp->dev->dev.parent, xdpf->data,
-					  xdpf->len, DMA_TO_DEVICE);
-		if (dma_mapping_error(pp->dev->dev.parent, dma_addr)) {
-			mvneta_txq_desc_put(txq);
-			return MVNETA_XDP_DROPPED;
+		tx_desc = mvneta_txq_next_desc_get(txq);
+		if (dma_map) {
+			/* ndo_xdp_xmit */
+			void *data;
+
+			data = frag ? xdp_get_frag_address(frag) : xdpf->data;
+			dma_addr = dma_map_single(pp->dev->dev.parent, data,
+						  len, DMA_TO_DEVICE);
+			if (dma_mapping_error(pp->dev->dev.parent, dma_addr)) {
+				for (; i >= 0; i--)
+					mvneta_txq_desc_put(txq);
+				return MVNETA_XDP_DROPPED;
+			}
+			buf->type = MVNETA_TYPE_XDP_NDO;
+		} else {
+			page = frag ? xdp_get_frag_page(frag)
+				    : virt_to_page(xdpf->data);
+			dma_addr = page_pool_get_dma_addr(page);
+			if (frag)
+				dma_addr += xdp_get_frag_offset(frag);
+			else
+				dma_addr += sizeof(*xdpf) + xdpf->headroom;
+			dma_sync_single_for_device(pp->dev->dev.parent,
+						   dma_addr, len,
+						   DMA_BIDIRECTIONAL);
+			buf->type = MVNETA_TYPE_XDP_TX;
 		}
-		buf->type = MVNETA_TYPE_XDP_NDO;
-	} else {
-		struct page *page = virt_to_page(xdpf->data);
+		buf->xdpf = i ? NULL : xdpf;
 
-		dma_addr = page_pool_get_dma_addr(page) +
-			   sizeof(*xdpf) + xdpf->headroom;
-		dma_sync_single_for_device(pp->dev->dev.parent, dma_addr,
-					   xdpf->len, DMA_BIDIRECTIONAL);
-		buf->type = MVNETA_TYPE_XDP_TX;
+		tx_desc->command = !i ? MVNETA_TXD_F_DESC : 0;
+		tx_desc->buf_phys_addr = dma_addr;
+		tx_desc->data_size = len;
+		*nxmit_byte += len;
+
+		mvneta_txq_inc_put(txq);
 	}
-	buf->xdpf = xdpf;
 
-	tx_desc->command = MVNETA_TXD_FLZ_DESC;
-	tx_desc->buf_phys_addr = dma_addr;
-	tx_desc->data_size = xdpf->len;
+	/* last descriptor */
+	tx_desc->command |= MVNETA_TXD_L_DESC | MVNETA_TXD_Z_PAD;
 
-	mvneta_txq_inc_put(txq);
-	txq->pending++;
-	txq->count++;
+	txq->pending += num_frames;
+	txq->count += num_frames;
 
 	return MVNETA_XDP_TX;
 }
@@ -2106,8 +2125,8 @@ mvneta_xdp_xmit_back(struct mvneta_port *pp, struct xdp_buff *xdp)
 	struct mvneta_pcpu_stats *stats = this_cpu_ptr(pp->stats);
 	struct mvneta_tx_queue *txq;
 	struct netdev_queue *nq;
+	int cpu, nxmit_byte = 0;
 	struct xdp_frame *xdpf;
-	int cpu;
 	u32 ret;
 
 	xdpf = xdp_convert_buff_to_frame(xdp);
@@ -2119,10 +2138,10 @@ mvneta_xdp_xmit_back(struct mvneta_port *pp, struct xdp_buff *xdp)
 	nq = netdev_get_tx_queue(pp->dev, txq->id);
 
 	__netif_tx_lock(nq, cpu);
-	ret = mvneta_xdp_submit_frame(pp, txq, xdpf, false);
+	ret = mvneta_xdp_submit_frame(pp, txq, xdpf, &nxmit_byte, false);
 	if (ret == MVNETA_XDP_TX) {
 		u64_stats_update_begin(&stats->syncp);
-		stats->es.ps.tx_bytes += xdpf->len;
+		stats->es.ps.tx_bytes += nxmit_byte;
 		stats->es.ps.tx_packets++;
 		stats->es.ps.xdp_tx++;
 		u64_stats_update_end(&stats->syncp);
@@ -2161,11 +2180,11 @@ mvneta_xdp_xmit(struct net_device *dev, int num_frame,
 	__netif_tx_lock(nq, cpu);
 	for (i = 0; i < num_frame; i++) {
-		ret = mvneta_xdp_submit_frame(pp, txq, frames[i], true);
+		ret = mvneta_xdp_submit_frame(pp, txq, frames[i], &nxmit_byte,
+					      true);
 		if (ret != MVNETA_XDP_TX)
 			break;
 
-		nxmit_byte += frames[i]->len;
 		nxmit++;
 	}
gi0QGzPNy11I4ng0Ma5jsdSUG3HUxWe20F53lIrSuZ1EYo0J9h5BvwmIUJLT26h01g j2FW0KKZsYJSD+5W1CjCQOatbRN39CTxUp3FUAOVLWV8s+SkOevghilsxhQYFY28pd 0+SHfI5QBs43aT3v2eEbPbGW42DABQZ9ONZEvqalXUpIC0qnUk4f0tum07f7pgrU3k ywS1g2tOn8Ef6Y3ATHypROaZDDJi/1kW0S6lVnFdeA/j83UtEMCa72MNIr9P31PN/l dgFmhlvtw0pYA== From: Lorenzo Bianconi To: bpf@vger.kernel.org, netdev@vger.kernel.org Cc: lorenzo.bianconi@redhat.com, davem@davemloft.net, kuba@kernel.org, ast@kernel.org, daniel@iogearbox.net, shayagr@amazon.com, john.fastabend@gmail.com, dsahern@kernel.org, brouer@redhat.com, echaudro@redhat.com, jasowang@redhat.com, alexander.duyck@gmail.com, saeed@kernel.org, maciej.fijalkowski@intel.com, sameehj@amazon.com Subject: [PATCH v7 bpf-next 06/14] net: mvneta: enable jumbo frames for XDP Date: Fri, 19 Mar 2021 22:47:20 +0100 Message-Id: X-Mailer: git-send-email 2.30.2 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Enable the capability to receive jumbo frames even if the interface is running in XDP mode Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/marvell/mvneta.c | 10 ---------- 1 file changed, 10 deletions(-) diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c index 226d76e7ccc8..d725abced380 100644 --- a/drivers/net/ethernet/marvell/mvneta.c +++ b/drivers/net/ethernet/marvell/mvneta.c @@ -3768,11 +3768,6 @@ static int mvneta_change_mtu(struct net_device *dev, int mtu) mtu = ALIGN(MVNETA_RX_PKT_SIZE(mtu), 8); } - if (pp->xdp_prog && mtu > MVNETA_MAX_RX_BUF_SIZE) { - netdev_info(dev, "Illegal MTU value %d for XDP mode\n", mtu); - return -EINVAL; - } - dev->mtu = mtu; if (!netif_running(dev)) { @@ -4475,11 +4470,6 @@ static int mvneta_xdp_setup(struct net_device *dev, struct bpf_prog *prog, struct mvneta_port *pp = netdev_priv(dev); struct bpf_prog *old_prog; - if (prog && dev->mtu > MVNETA_MAX_RX_BUF_SIZE) { - NL_SET_ERR_MSG_MOD(extack, "MTU too large for XDP"); - return -EOPNOTSUPP; - } - if (pp->bm_priv) { NL_SET_ERR_MSG_MOD(extack, "Hardware Buffer Management not supported on XDP"); From patchwork Fri Mar 19 21:47:21 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "lorenzo@kernel.org" X-Patchwork-Id: 12151991 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.2 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D93C8C433E3 for ; Fri, 19 Mar 2021 21:49:16 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C21816198E for ; Fri, 19 Mar 2021 21:49:16 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230507AbhCSVsq (ORCPT ); Fri, 19 Mar 2021 17:48:46 -0400 Received: from mail.kernel.org ([198.145.29.99]:45598 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229942AbhCSVs0 (ORCPT ); Fri, 19 Mar 2021 17:48:26 -0400 Received: by mail.kernel.org (Postfix) with ESMTPSA id 62D3861986; Fri, 19 Mar 2021 21:48:23 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; 
From patchwork Fri Mar 19 21:47:21 2021
From: Lorenzo Bianconi <lorenzo@kernel.org>
Subject: [PATCH v7 bpf-next 07/14] net: xdp: add multi-buff support to xdp_build_skb_from_frame
Date: Fri, 19 Mar 2021 22:47:21 +0100

Introduce xdp multi-buff support to the
__xdp_build_skb_from_frame()/xdp_build_skb_from_frame() utility routines.

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 net/core/xdp.c | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/net/core/xdp.c b/net/core/xdp.c
index 430f516259d9..7388bc6d680b 100644
--- a/net/core/xdp.c
+++ b/net/core/xdp.c
@@ -603,9 +603,21 @@ struct sk_buff *__xdp_build_skb_from_frame(struct xdp_frame *xdpf,
 					   struct sk_buff *skb,
 					   struct net_device *dev)
 {
+	skb_frag_t frag_list[MAX_SKB_FRAGS];
 	unsigned int headroom, frame_size;
+	int i, num_frags = 0;
 	void *hard_start;
 
+	/* XDP multi-buff frame */
+	if (unlikely(xdpf->mb)) {
+		struct xdp_shared_info *xdp_sinfo;
+
+		xdp_sinfo = xdp_get_shared_info_from_frame(xdpf);
+		num_frags = xdp_sinfo->nr_frags;
+		memcpy(frag_list, xdp_sinfo->frags,
+		       sizeof(skb_frag_t) * num_frags);
+	}
+
 	/* Part of headroom was reserved to xdpf */
 	headroom = sizeof(*xdpf) + xdpf->headroom;
 
@@ -624,6 +636,20 @@ struct sk_buff *__xdp_build_skb_from_frame(struct xdp_frame *xdpf,
 	if (xdpf->metasize)
 		skb_metadata_set(skb, xdpf->metasize);
 
+	/* Single-buff XDP frame */
+	if (likely(!num_frags))
+		goto out;
+
+	for (i = 0; i < num_frags; i++) {
+		struct page *page = xdp_get_frag_page(&frag_list[i]);
+
+		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
+				page, xdp_get_frag_offset(&frag_list[i]),
+				xdp_get_frag_size(&frag_list[i]),
+				xdpf->frame_sz);
+	}
+
+out:
 	/* Essential SKB info: protocol and skb->dev */
 	skb->protocol = eth_type_trans(skb, dev);
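[Editor's note] A hedged sketch of a caller of the updated helper, e.g. a cpumap-like path delivering a redirected, possibly multi-buffer frame to the stack; error handling trimmed:

/* Sketch only: with this patch the resulting skb carries the frags too,
 * and xdp_return_frame() on the failure path frees them as well.
 */
static void deliver_frame(struct xdp_frame *xdpf, struct net_device *dev)
{
	struct sk_buff *skb = xdp_build_skb_from_frame(xdpf, dev);

	if (!skb) {
		xdp_return_frame(xdpf);	/* also returns the fragments */
		return;
	}
	netif_receive_skb(skb);
}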
From patchwork Fri Mar 19 21:47:22 2021
From: Lorenzo Bianconi <lorenzo@kernel.org>
Subject: [PATCH v7 bpf-next 08/14] bpf: add multi-buff support to the bpf_xdp_adjust_tail() API
Date: Fri, 19 Mar 2021 22:47:22 +0100
Message-Id: <6da4e8a314e7fbdeb0a6790a920a4ae554fb3742.1616179034.git.lorenzo@kernel.org>

From: Eelco Chaudron

This change adds support for tail growing and shrinking for XDP multi-buff.
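[Editor's note] From the program's point of view nothing changes; a minimal XDP program exercising the helper (a sketch, assuming a standard libbpf build):

/* Sketch only: trim 100 bytes off the tail; with this patch the same
 * call also works on multi-buffer packets, shrinking across fragments.
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int trim_tail(struct xdp_md *ctx)
{
	if (bpf_xdp_adjust_tail(ctx, -100))
		return XDP_DROP;	/* helper failed, e.g. not enough data */
	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";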
Signed-off-by: Eelco Chaudron
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 include/net/xdp.h |  5 ++++
 net/core/filter.c | 63 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 68 insertions(+)

diff --git a/include/net/xdp.h b/include/net/xdp.h
index 8be1b5e5a08a..19cd6642e087 100644
--- a/include/net/xdp.h
+++ b/include/net/xdp.h
@@ -157,6 +157,11 @@ static inline void xdp_set_frag_size(skb_frag_t *frag, u32 size)
 	frag->bv_len = size;
 }
 
+static inline unsigned int xdp_get_frag_tailroom(const skb_frag_t *frag)
+{
+	return PAGE_SIZE - xdp_get_frag_size(frag) - xdp_get_frag_offset(frag);
+}
+
 struct xdp_frame {
 	void *data;
 	u16 len;
diff --git a/net/core/filter.c b/net/core/filter.c
index 10dac9dd5086..18b2c9bacba1 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -3855,11 +3855,74 @@ static const struct bpf_func_proto bpf_xdp_adjust_head_proto = {
 	.arg2_type	= ARG_ANYTHING,
 };
 
+static int bpf_xdp_mb_adjust_tail(struct xdp_buff *xdp, int offset)
+{
+	struct xdp_shared_info *xdp_sinfo = xdp_get_shared_info_from_buff(xdp);
+
+	if (unlikely(xdp_sinfo->nr_frags == 0))
+		return -EINVAL;
+
+	if (offset >= 0) {
+		skb_frag_t *frag = &xdp_sinfo->frags[xdp_sinfo->nr_frags - 1];
+		int size;
+
+		if (unlikely(offset > xdp_get_frag_tailroom(frag)))
+			return -EINVAL;
+
+		size = xdp_get_frag_size(frag);
+		memset(xdp_get_frag_address(frag) + size, 0, offset);
+		xdp_set_frag_size(frag, size + offset);
+		xdp_sinfo->data_length += offset;
+	} else {
+		int i, frags_to_free = 0;
+
+		offset = abs(offset);
+
+		if (unlikely(offset > ((int)(xdp->data_end - xdp->data) +
+				       xdp_sinfo->data_length -
+				       ETH_HLEN)))
+			return -EINVAL;
+
+		for (i = xdp_sinfo->nr_frags - 1; i >= 0 && offset > 0; i--) {
+			skb_frag_t *frag = &xdp_sinfo->frags[i];
+			int size = xdp_get_frag_size(frag);
+			int shrink = min_t(int, offset, size);
+
+			offset -= shrink;
+			if (likely(size - shrink > 0)) {
+				/* When updating the final fragment we have
+				 * to adjust the data_length in line.
+				 */
+				xdp_sinfo->data_length -= shrink;
+				xdp_set_frag_size(frag, size - shrink);
+				break;
+			}
+
+			/* When we free the fragments,
+			 * xdp_return_frags_from_buff() will take care
+			 * of updating the xdp share info data_length.
+			 */
+			frags_to_free++;
+		}
+
+		if (unlikely(frags_to_free))
+			xdp_return_num_frags_from_buff(xdp, frags_to_free);
+
+		if (unlikely(offset > 0))
+			xdp->data_end -= offset;
+	}
+
+	return 0;
+}
+
 BPF_CALL_2(bpf_xdp_adjust_tail, struct xdp_buff *, xdp, int, offset)
 {
 	void *data_hard_end = xdp_data_hard_end(xdp); /* use xdp->frame_sz */
 	void *data_end = xdp->data_end + offset;
 
+	if (unlikely(xdp->mb))
+		return bpf_xdp_mb_adjust_tail(xdp, offset);
+
 	/* Notice that xdp_data_hard_end have reserved some tailroom */
 	if (unlikely(data_end > data_hard_end))
 		return -EINVAL;

From patchwork Fri Mar 19 21:47:23 2021
From: Lorenzo Bianconi <lorenzo@kernel.org>
Subject: [PATCH v7 bpf-next 09/14] bpf: add multi-buffer support to xdp copy helpers
Date: Fri, 19 Mar 2021 22:47:23 +0100

From: Eelco Chaudron

This patch adds multi-buffer support for the following helpers:
  - bpf_xdp_output()
  - bpf_perf_event_output()

Signed-off-by: Eelco Chaudron
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Reported-by: kernel test robot
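[Editor's note] A hedged sketch of a tracing program using the extended bpf_xdp_output(), mirroring the updated selftest below; the local struct xdp_buff mirror (with preserve_access_index) and the frame_length field follow the selftest's definitions and are elided here:

/* Sketch only: forward the whole packet, linear part plus fragments,
 * to a perf ring from a fentry hook.
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

struct {
	__uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
	__uint(key_size, sizeof(int));
	__uint(value_size, sizeof(int));
} perf_buf_map SEC(".maps");

struct meta {
	int ifindex;
	int pkt_len;
};

/* struct xdp_buff / xdp_rxq_info mirrors as in test_xdp_bpf2bpf.c */

SEC("fentry/_xdp_tx_iptunnel")
int BPF_PROG(trace_on_entry, struct xdp_buff *xdp)
{
	struct meta meta;

	meta.ifindex = xdp->rxq->dev->ifindex;
	meta.pkt_len = xdp->frame_length;	/* linear + paged length */
	bpf_xdp_output(xdp, &perf_buf_map,
		       ((__u64)meta.pkt_len << 32) | BPF_F_CURRENT_CPU,
		       &meta, sizeof(meta));
	return 0;
}

char _license[] SEC("license") = "GPL";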
---
 net/core/filter.c                                  |  60 ++++++++-
 .../selftests/bpf/prog_tests/xdp_bpf2bpf.c         | 127 ++++++++++++------
 .../selftests/bpf/progs/test_xdp_bpf2bpf.c         |   3 +-
 3 files changed, 146 insertions(+), 44 deletions(-)

diff --git a/net/core/filter.c b/net/core/filter.c
index 18b2c9bacba1..a607ea8321bd 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -4549,10 +4549,53 @@ static const struct bpf_func_proto bpf_sk_ancestor_cgroup_id_proto = {
 };
 #endif
 
-static unsigned long bpf_xdp_copy(void *dst_buff, const void *src_buff,
+static unsigned long bpf_xdp_copy(void *dst_buff, const void *ctx,
 				  unsigned long off, unsigned long len)
 {
-	memcpy(dst_buff, src_buff + off, len);
+	struct xdp_buff *xdp = (struct xdp_buff *)ctx;
+	struct xdp_shared_info *xdp_sinfo;
+	unsigned long base_len;
+	const void *src_buff;
+
+	if (likely(!xdp->mb)) {
+		src_buff = xdp->data;
+		memcpy(dst_buff, src_buff + off, len);
+
+		return 0;
+	}
+
+	base_len = xdp->data_end - xdp->data;
+	xdp_sinfo = xdp_get_shared_info_from_buff(xdp);
+	do {
+		unsigned long copy_len;
+
+		if (off < base_len) {
+			src_buff = xdp->data + off;
+			copy_len = min(len, base_len - off);
+		} else {
+			unsigned long frag_off_total = base_len;
+			int i;
+
+			for (i = 0; i < xdp_sinfo->nr_frags; i++) {
+				skb_frag_t *frag = &xdp_sinfo->frags[i];
+				unsigned long frag_len = xdp_get_frag_size(frag);
+				unsigned long frag_off = off - frag_off_total;
+
+				if (frag_off < frag_len) {
+					src_buff = xdp_get_frag_address(frag) +
+						   frag_off;
+					copy_len = min(len,
+						       frag_len - frag_off);
+					break;
+				}
+				frag_off_total += frag_len;
+			}
+		}
+		memcpy(dst_buff, src_buff, copy_len);
+		off += copy_len;
+		len -= copy_len;
+		dst_buff += copy_len;
+	} while (len);
+
 	return 0;
 }
 
@@ -4564,10 +4607,19 @@ BPF_CALL_5(bpf_xdp_event_output, struct xdp_buff *, xdp, struct bpf_map *, map,
 	if (unlikely(flags & ~(BPF_F_CTXLEN_MASK | BPF_F_INDEX_MASK)))
 		return -EINVAL;
 	if (unlikely(!xdp ||
-		     xdp_size > (unsigned long)(xdp->data_end - xdp->data)))
+		     (likely(!xdp->mb) &&
+		      xdp_size > (unsigned long)(xdp->data_end - xdp->data))))
 		return -EFAULT;
+	if (unlikely(xdp->mb)) {
+		struct xdp_shared_info *xdp_sinfo;
+
+		xdp_sinfo = xdp_get_shared_info_from_buff(xdp);
+		if (unlikely(xdp_size > ((int)(xdp->data_end - xdp->data) +
+					 xdp_sinfo->data_length)))
+			return -EFAULT;
+	}
 
-	return bpf_event_output(map, flags, meta, meta_size, xdp->data,
+	return bpf_event_output(map, flags, meta, meta_size, xdp,
 				xdp_size, bpf_xdp_copy);
 }
 
diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_bpf2bpf.c b/tools/testing/selftests/bpf/prog_tests/xdp_bpf2bpf.c
index 2c6c570b21f8..355e64526f3f 100644
--- a/tools/testing/selftests/bpf/prog_tests/xdp_bpf2bpf.c
+++ b/tools/testing/selftests/bpf/prog_tests/xdp_bpf2bpf.c
@@ -10,11 +10,20 @@ struct meta {
 	int pkt_len;
 };
 
+struct test_ctx_s {
+	bool passed;
+	int pkt_size;
+};
+
+struct test_ctx_s test_ctx;
+
 static void on_sample(void *ctx, int cpu, void *data, __u32 size)
 {
-	int duration = 0;
 	struct meta *meta = (struct meta *)data;
 	struct ipv4_packet *trace_pkt_v4 = data + sizeof(*meta);
+	unsigned char *raw_pkt = data + sizeof(*meta);
+	struct test_ctx_s *tst_ctx = ctx;
+	int duration = 0;
 
 	if (CHECK(size < sizeof(pkt_v4) + sizeof(*meta),
 		  "check_size", "size %u < %zu\n",
@@ -25,25 +34,90 @@ static void on_sample(void *ctx, int cpu, void *data, __u32 size)
 		  "meta->ifindex = %d\n", meta->ifindex))
 		return;
 
-	if (CHECK(meta->pkt_len != sizeof(pkt_v4), "check_meta_pkt_len",
-		  "meta->pkt_len = %zd\n", sizeof(pkt_v4)))
+	if (CHECK(meta->pkt_len != tst_ctx->pkt_size, "check_meta_pkt_len",
+		  "meta->pkt_len = %d\n", tst_ctx->pkt_size))
 		return;
 
 	if (CHECK(memcmp(trace_pkt_v4, &pkt_v4, sizeof(pkt_v4)),
 		  "check_packet_content", "content not the same\n"))
 		return;
 
-	*(bool *)ctx = true;
+	if (meta->pkt_len > sizeof(pkt_v4)) {
+		for (int i = 0; i < (meta->pkt_len - sizeof(pkt_v4)); i++) {
+			if (raw_pkt[i + sizeof(pkt_v4)] != (unsigned char)i) {
+				CHECK(true, "check_packet_content",
+				      "byte %zu does not match %u != %u\n",
+				      i + sizeof(pkt_v4),
+				      raw_pkt[i + sizeof(pkt_v4)],
+				      (unsigned char)i);
+				break;
+			}
+		}
+	}
+
+	tst_ctx->passed = true;
 }
 
-void test_xdp_bpf2bpf(void)
+static int run_xdp_bpf2bpf_pkt_size(int pkt_fd, struct perf_buffer *pb,
+				    struct test_xdp_bpf2bpf *ftrace_skel,
+				    int pkt_size)
 {
 	__u32 duration = 0, retval, size;
-	char buf[128];
+	unsigned char buf_in[9000];
+	unsigned char buf[9000];
+	int err;
+
+	if (pkt_size > sizeof(buf_in) || pkt_size < sizeof(pkt_v4))
+		return -EINVAL;
+
+	test_ctx.passed = false;
+	test_ctx.pkt_size = pkt_size;
+
+	memcpy(buf_in, &pkt_v4, sizeof(pkt_v4));
+	if (pkt_size > sizeof(pkt_v4)) {
+		for (int i = 0; i < (pkt_size - sizeof(pkt_v4)); i++)
+			buf_in[i + sizeof(pkt_v4)] = i;
+	}
+
+	/* Run test program */
+	err = bpf_prog_test_run(pkt_fd, 1, buf_in, pkt_size,
+				buf, &size, &retval, &duration);
+
+	if (CHECK(err || retval != XDP_PASS || size != pkt_size,
+		  "ipv4", "err %d errno %d retval %d size %d\n",
+		  err, errno, retval, size))
+		return -1;
+
+	/* Make sure bpf_xdp_output() was triggered and it sent the expected
+	 * data to the perf ring buffer.
+	 */
+	err = perf_buffer__poll(pb, 100);
+	if (CHECK(err <= 0, "perf_buffer__poll", "err %d\n", err))
+		return -1;
+
+	if (CHECK_FAIL(!test_ctx.passed))
+		return -1;
+
+	/* Verify test results */
+	if (CHECK(ftrace_skel->bss->test_result_fentry != if_nametoindex("lo"),
+		  "result", "fentry failed err %llu\n",
+		  ftrace_skel->bss->test_result_fentry))
+		return -1;
+
+	if (CHECK(ftrace_skel->bss->test_result_fexit != XDP_PASS, "result",
+		  "fexit failed err %llu\n",
+		  ftrace_skel->bss->test_result_fexit))
+		return -1;
+
+	return 0;
+}
+
+void test_xdp_bpf2bpf(void)
+{
 	int err, pkt_fd, map_fd;
-	bool passed = false;
-	struct iphdr *iph = (void *)buf + sizeof(struct ethhdr);
-	struct iptnl_info value4 = {.family = AF_INET};
+	__u32 duration = 0;
+	int pkt_sizes[] = {sizeof(pkt_v4), 1024, 4100, 8200};
+	struct iptnl_info value4 = {.family = AF_INET6};
 	struct test_xdp *pkt_skel = NULL;
 	struct test_xdp_bpf2bpf *ftrace_skel = NULL;
 	struct vip key4 = {.protocol = 6, .family = AF_INET};
@@ -87,40 +161,15 @@ void test_xdp_bpf2bpf(void)
 
 	/* Set up perf buffer */
 	pb_opts.sample_cb = on_sample;
-	pb_opts.ctx = &passed;
+	pb_opts.ctx = &test_ctx;
 	pb = perf_buffer__new(bpf_map__fd(ftrace_skel->maps.perf_buf_map),
-			      1, &pb_opts);
+			      8, &pb_opts);
 	if (CHECK(IS_ERR(pb), "perf_buf__new", "err %ld\n", PTR_ERR(pb)))
 		goto out;
 
-	/* Run test program */
-	err = bpf_prog_test_run(pkt_fd, 1, &pkt_v4, sizeof(pkt_v4),
-				buf, &size, &retval, &duration);
-
-	if (CHECK(err || retval != XDP_TX || size != 74 ||
-		  iph->protocol != IPPROTO_IPIP, "ipv4",
-		  "err %d errno %d retval %d size %d\n",
-		  err, errno, retval, size))
-		goto out;
-
-	/* Make sure bpf_xdp_output() was triggered and it sent the expected
-	 * data to the perf ring buffer.
-	 */
-	err = perf_buffer__poll(pb, 100);
-	if (CHECK(err < 0, "perf_buffer__poll", "err %d\n", err))
-		goto out;
-
-	CHECK_FAIL(!passed);
-
-	/* Verify test results */
-	if (CHECK(ftrace_skel->bss->test_result_fentry != if_nametoindex("lo"),
-		  "result", "fentry failed err %llu\n",
-		  ftrace_skel->bss->test_result_fentry))
-		goto out;
-
-	CHECK(ftrace_skel->bss->test_result_fexit != XDP_TX, "result",
-	      "fexit failed err %llu\n", ftrace_skel->bss->test_result_fexit);
-
+	for (int i = 0; i < ARRAY_SIZE(pkt_sizes); i++)
+		run_xdp_bpf2bpf_pkt_size(pkt_fd, pb, ftrace_skel,
+					 pkt_sizes[i]);
 out:
 	if (pb)
 		perf_buffer__free(pb);
diff --git a/tools/testing/selftests/bpf/progs/test_xdp_bpf2bpf.c b/tools/testing/selftests/bpf/progs/test_xdp_bpf2bpf.c
index a038e827f850..d5a5f603d252 100644
--- a/tools/testing/selftests/bpf/progs/test_xdp_bpf2bpf.c
+++ b/tools/testing/selftests/bpf/progs/test_xdp_bpf2bpf.c
@@ -27,6 +27,7 @@ struct xdp_buff {
 	void *data_hard_start;
 	unsigned long handle;
 	struct xdp_rxq_info *rxq;
+	__u32 frame_length;
 } __attribute__((preserve_access_index));
 
 struct meta {
@@ -49,7 +50,7 @@ int BPF_PROG(trace_on_entry, struct xdp_buff *xdp)
 	void *data = (void *)(long)xdp->data;
 
 	meta.ifindex = xdp->rxq->dev->ifindex;
-	meta.pkt_len = data_end - data;
+	meta.pkt_len = xdp->frame_length;
 	bpf_xdp_output(xdp, &perf_buf_map,
 		       ((__u64) meta.pkt_len << 32) | BPF_F_CURRENT_CPU,

From patchwork Fri Mar 19 21:47:24 2021
- */ err = perf_buffer__poll(pb, 100); - if (CHECK(err < 0, "perf_buffer__poll", "err %d\n", err)) - goto out; - - CHECK_FAIL(!passed); - - /* Verify test results */ - if (CHECK(ftrace_skel->bss->test_result_fentry != if_nametoindex("lo"), - "result", "fentry failed err %llu\n", - ftrace_skel->bss->test_result_fentry)) - goto out; - - CHECK(ftrace_skel->bss->test_result_fexit != XDP_TX, "result", - "fexit failed err %llu\n", ftrace_skel->bss->test_result_fexit); - + for (int i = 0; i < ARRAY_SIZE(pkt_sizes); i++) + run_xdp_bpf2bpf_pkt_size(pkt_fd, pb, ftrace_skel, + pkt_sizes[i]); out: if (pb) perf_buffer__free(pb);
diff --git a/tools/testing/selftests/bpf/progs/test_xdp_bpf2bpf.c b/tools/testing/selftests/bpf/progs/test_xdp_bpf2bpf.c index a038e827f850..d5a5f603d252 100644 --- a/tools/testing/selftests/bpf/progs/test_xdp_bpf2bpf.c +++ b/tools/testing/selftests/bpf/progs/test_xdp_bpf2bpf.c @@ -27,6 +27,7 @@ struct xdp_buff { void *data_hard_start; unsigned long handle; struct xdp_rxq_info *rxq; + __u32 frame_length; } __attribute__((preserve_access_index)); struct meta { @@ -49,7 +50,7 @@ int BPF_PROG(trace_on_entry, struct xdp_buff *xdp) void *data = (void *)(long)xdp->data; meta.ifindex = xdp->rxq->dev->ifindex; - meta.pkt_len = data_end - data; + meta.pkt_len = xdp->frame_length; bpf_xdp_output(xdp, &perf_buf_map, ((__u64) meta.pkt_len << 32) | BPF_F_CURRENT_CPU,

From patchwork Fri Mar 19 21:47:24 2021
From: Lorenzo Bianconi
Subject: [PATCH v7 bpf-next 10/14] bpf: add new frame_length field to the XDP ctx
Date: Fri, 19 Mar 2021 22:47:24 +0100

From: Eelco Chaudron

This patch adds a new field to the XDP context called frame_length, which holds the full length of the packet, including fragments if present. eBPF programs can determine whether fragments are present using something like: if (ctx->data_end - ctx->data < ctx->frame_length) { /* Fragments exist. */ }
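For illustration, a minimal XDP program sketch (not part of this patch; the section name and drop policy are illustrative only) that uses the new field to detect non-linear frames:

	#include <linux/bpf.h>
	#include <bpf/bpf_helpers.h>

	SEC("xdp")
	int detect_mb(struct xdp_md *ctx)
	{
		__u32 linear_len = ctx->data_end - ctx->data;

		/* the linear area is shorter than the full frame,
		 * so fragments follow in the paged area
		 */
		if (linear_len < ctx->frame_length)
			return XDP_DROP;

		return XDP_PASS;
	}

	char _license[] SEC("license") = "GPL";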
Signed-off-by: Eelco Chaudron Signed-off-by: Lorenzo Bianconi
--- include/linux/filter.h | 7 +++++++ include/net/xdp.h | 12 ++++++++++++ include/uapi/linux/bpf.h | 1 + net/core/filter.c | 8 ++++++++ net/core/xdp.c | 1 + tools/include/uapi/linux/bpf.h | 1 + 6 files changed, 30 insertions(+)
diff --git a/include/linux/filter.h b/include/linux/filter.h index b2b85b2cad8e..511d812fd18c 100644 --- a/include/linux/filter.h +++ b/include/linux/filter.h @@ -768,6 +768,13 @@ static __always_inline u32 bpf_prog_run_xdp(const struct bpf_prog *prog, * already takes rcu_read_lock() when fetching the program, so * it's not necessary here anymore. */ + xdp->frame_length = xdp->data_end - xdp->data; + if (unlikely(xdp->mb)) { + struct xdp_shared_info *xdp_sinfo; + + xdp_sinfo = xdp_get_shared_info_from_buff(xdp); + xdp->frame_length += xdp_sinfo->data_length; + } return __BPF_PROG_RUN(prog, xdp, BPF_DISPATCHER_FUNC(xdp)); }
diff --git a/include/net/xdp.h b/include/net/xdp.h index 19cd6642e087..e47d9e8da547 100644 --- a/include/net/xdp.h +++ b/include/net/xdp.h @@ -75,6 +75,10 @@ struct xdp_buff { struct xdp_txq_info *txq; u32 frame_sz:31; /* frame size to deduce data_hard_end/reserved tailroom*/ u32 mb:1; /* xdp non-linear buffer */ + u32 frame_length; /* Total frame length across all buffers. Only needs + * to be updated by helper functions, as it will be + * initialized at XDP program start. + */ }; static __always_inline void @@ -235,6 +239,14 @@ void xdp_convert_frame_to_buff(struct xdp_frame *frame, struct xdp_buff *xdp) xdp->data_meta = frame->data - frame->metasize; xdp->frame_sz = frame->frame_sz; xdp->mb = frame->mb; + xdp->frame_length = frame->len; + + if (unlikely(xdp->mb)) { + struct xdp_shared_info *xdp_sinfo; + + xdp_sinfo = xdp_get_shared_info_from_buff(xdp); + xdp->frame_length += xdp_sinfo->data_length; + } } static inline
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index 2d3036e292a9..4dcc5ad736b4 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -5213,6 +5213,7 @@ struct xdp_md { __u32 rx_queue_index; /* rxq->queue_index */ __u32 egress_ifindex; /* txq->dev->ifindex */ + __u32 frame_length; }; /* DEVMAP map-value layout
diff --git a/net/core/filter.c b/net/core/filter.c index a607ea8321bd..b047757bd839 100644 --- a/net/core/filter.c +++ b/net/core/filter.c @@ -3873,6 +3873,7 @@ static int bpf_xdp_mb_adjust_tail(struct xdp_buff *xdp, int offset) memset(xdp_get_frag_address(frag) + size, 0, offset); xdp_set_frag_size(frag, size + offset); xdp_sinfo->data_length += offset; + xdp->frame_length += offset; } else { int i, frags_to_free = 0; @@ -3894,6 +3895,7 @@ * to adjust the data_length in line. */ xdp_sinfo->data_length -= shrink; + xdp->frame_length -= shrink; xdp_set_frag_size(frag, size - shrink); break; } @@ -9126,6 +9128,12 @@ static u32 xdp_convert_ctx_access(enum bpf_access_type type, *insn++ = BPF_LDX_MEM(BPF_W, si->dst_reg, si->dst_reg, offsetof(struct net_device, ifindex)); break; + case offsetof(struct xdp_md, frame_length): + *insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(struct xdp_buff, + frame_length), + si->dst_reg, si->src_reg, + offsetof(struct xdp_buff, frame_length)); + break; } return insn - insn_buf;
diff --git a/net/core/xdp.c b/net/core/xdp.c index 7388bc6d680b..fb7d0724a5b6 100644 --- a/net/core/xdp.c +++ b/net/core/xdp.c @@ -510,6 +510,7 @@ void xdp_return_num_frags_from_buff(struct xdp_buff *xdp, u16 num_frags) struct page *page = xdp_get_frag_page(frag); xdp_sinfo->data_length -= xdp_get_frag_size(frag); + xdp->frame_length -= xdp_get_frag_size(frag); __xdp_return(page_address(page), &xdp->rxq->mem, false, NULL); } xdp_sinfo->nr_frags -= num_frags;
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h index 2d3036e292a9..4dcc5ad736b4 100644 --- a/tools/include/uapi/linux/bpf.h +++ b/tools/include/uapi/linux/bpf.h @@ -5213,6 +5213,7 @@ struct xdp_md { __u32 rx_queue_index; /* rxq->queue_index */ __u32 egress_ifindex; /* txq->dev->ifindex */ + __u32 frame_length; }; /* DEVMAP map-value layout

From patchwork Fri Mar 19 21:47:25 2021
From: Lorenzo Bianconi
Subject: [PATCH v7 bpf-next 11/14] bpf: move user_size out of bpf_test_init
Date: Fri, 19 Mar 2021 22:47:25 +0100
Message-Id: <4199200124683783456c6f94e663dae8a9d3799e.1616179034.git.lorenzo@kernel.org>

Rely on the user-supplied data_size_in in the bpf_test_init() routine signature. This is a preliminary patch to introduce the xdp multi-buff selftest.
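For context, the point of passing user_size explicitly is that a caller may hand bpf_test_init() a length smaller than the full user buffer. A short sketch of how the next patch in the series uses this (only the linear part is copied up front; fragments are filled in later):

	size = min_t(u32, kattr->test.data_size_in, max_data_sz);
	data = bpf_test_init(kattr, size, max_data_sz, headroom, tailroom);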
Signed-off-by: Lorenzo Bianconi
--- net/bpf/test_run.c | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-)
diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c index 0abdd67f44b1..6c3516555757 100644 --- a/net/bpf/test_run.c +++ b/net/bpf/test_run.c @@ -213,11 +213,10 @@ __diag_pop(); ALLOW_ERROR_INJECTION(bpf_modify_return_test, ERRNO); -static void *bpf_test_init(const union bpf_attr *kattr, u32 size, - u32 headroom, u32 tailroom) +static void *bpf_test_init(const union bpf_attr *kattr, u32 user_size, + u32 size, u32 headroom, u32 tailroom) { void __user *data_in = u64_to_user_ptr(kattr->test.data_in); - u32 user_size = kattr->test.data_size_in; void *data; if (size < ETH_HLEN || size > PAGE_SIZE - headroom - tailroom) @@ -538,7 +537,8 @@ int bpf_prog_test_run_skb(struct bpf_prog *prog, const union bpf_attr *kattr, if (kattr->test.flags || kattr->test.cpu) return -EINVAL; - data = bpf_test_init(kattr, size, NET_SKB_PAD + NET_IP_ALIGN, + data = bpf_test_init(kattr, kattr->test.data_size_in, + size, NET_SKB_PAD + NET_IP_ALIGN, SKB_DATA_ALIGN(sizeof(struct skb_shared_info))); if (IS_ERR(data)) return PTR_ERR(data); @@ -675,7 +675,8 @@ int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr, /* XDP have extra tailroom as (most) drivers use full page */ max_data_sz = 4096 - headroom - tailroom; - data = bpf_test_init(kattr, max_data_sz, headroom, tailroom); + data = bpf_test_init(kattr, kattr->test.data_size_in, + max_data_sz, headroom, tailroom); if (IS_ERR(data)) return PTR_ERR(data); @@ -737,7 +738,7 @@ int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog, if (size < ETH_HLEN) return -EINVAL; - data = bpf_test_init(kattr, size, 0, 0); + data = bpf_test_init(kattr, kattr->test.data_size_in, size, 0, 0); if (IS_ERR(data)) return PTR_ERR(data);

From patchwork Fri Mar 19 21:47:26 2021
From: Lorenzo Bianconi
Subject: [PATCH v7 bpf-next 12/14] bpf: introduce multibuff support to bpf_prog_test_run_xdp()
Date: Fri, 19 Mar 2021 22:47:26 +0100

Introduce the capability to allocate an xdp multi-buff in the bpf_prog_test_run_xdp() routine. This is a preliminary patch to introduce the selftests for the new xdp multi-buff eBPF helpers.
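As a usage sketch (illustrative only; includes and error handling elided, prog_fd is assumed to be the fd of a loaded XDP program), feeding more than one page of data through BPF_PROG_TEST_RUN now builds a multi-buff xdp_buff on the kernel side:

	unsigned char in[9000], out[9000];
	__u32 out_size = sizeof(out), retval, duration;
	int err;

	memset(in, 0xab, sizeof(in));
	/* bytes beyond the linear area are copied into page fragments
	 * and xdp.mb is set to 1 by bpf_prog_test_run_xdp()
	 */
	err = bpf_prog_test_run(prog_fd, 1, in, sizeof(in),
				out, &out_size, &retval, &duration);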
Signed-off-by: Lorenzo Bianconi
--- net/bpf/test_run.c | 52 +++++++++++++++++++++++++++++++++++++++------- 1 file changed, 44 insertions(+), 8 deletions(-)
diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c index 6c3516555757..8a4cc15b89b7 100644 --- a/net/bpf/test_run.c +++ b/net/bpf/test_run.c @@ -660,23 +660,22 @@ int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr, { u32 tailroom = SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); u32 headroom = XDP_PACKET_HEADROOM; - u32 size = kattr->test.data_size_in; + struct xdp_shared_info *xdp_sinfo; u32 repeat = kattr->test.repeat; struct netdev_rx_queue *rxqueue; struct xdp_buff xdp = {}; + u32 max_data_sz, size; u32 retval, duration; - u32 max_data_sz; + int i, ret; void *data; - int ret; if (kattr->test.ctx_in || kattr->test.ctx_out) return -EINVAL; - /* XDP have extra tailroom as (most) drivers use full page */ max_data_sz = 4096 - headroom - tailroom; + size = min_t(u32, kattr->test.data_size_in, max_data_sz); - data = bpf_test_init(kattr, kattr->test.data_size_in, - max_data_sz, headroom, tailroom); + data = bpf_test_init(kattr, size, max_data_sz, headroom, tailroom); if (IS_ERR(data)) return PTR_ERR(data); @@ -685,16 +684,53 @@ int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr, &rxqueue->xdp_rxq); xdp_prepare_buff(&xdp, data, headroom, size, true); + xdp_sinfo = xdp_get_shared_info_from_buff(&xdp); + if (unlikely(kattr->test.data_size_in > size)) { + void __user *data_in = u64_to_user_ptr(kattr->test.data_in); + + while (size < kattr->test.data_size_in) { + struct page *page; + skb_frag_t *frag; + int data_len; + + page = alloc_page(GFP_KERNEL); + if (!page) { + ret = -ENOMEM; + goto out; + } + + frag = &xdp_sinfo->frags[xdp_sinfo->nr_frags++]; + xdp_set_frag_page(frag, page); + + data_len = min_t(int, kattr->test.data_size_in - size, + PAGE_SIZE); + xdp_set_frag_size(frag, data_len); + + if (copy_from_user(page_address(page), data_in + size, + data_len)) { + ret = -EFAULT; + goto out; + } + xdp_sinfo->data_length += data_len; + size += data_len; + } + xdp.mb = 1; + } + bpf_prog_change_xdp(NULL, prog); ret = bpf_test_run(prog, &xdp, repeat, &retval, &duration, true); if (ret) goto out; - if (xdp.data != data + headroom || xdp.data_end != xdp.data + size) - size = xdp.data_end - xdp.data; + + size = xdp.data_end - xdp.data + xdp_sinfo->data_length; ret = bpf_test_finish(kattr, uattr, xdp.data, size, retval, duration); + out: bpf_prog_change_xdp(prog, NULL); + for (i = 0; i < xdp_sinfo->nr_frags; i++) + __free_page(xdp_get_frag_page(&xdp_sinfo->frags[i])); kfree(data); + return ret; }

From patchwork Fri Mar 19 21:47:27 2021
From: Lorenzo Bianconi
Subject: [PATCH v7 bpf-next 13/14] bpf: test_run: add xdp_shared_info pointer in bpf_test_finish signature
Date: Fri, 19 Mar 2021 22:47:27 +0100
Message-Id: <98ad8100e9219ab509adcfaa711cd876c747f21a.1616179034.git.lorenzo@kernel.org>

Introduce an xdp_shared_info pointer in the bpf_test_finish() signature in order to copy back paged data from an xdp multi-buff frame to the userspace buffer.
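Conceptually, the copy-back added here reduces to the following (simplified sketch using the names from the diff below; size clamping and error handling are elided):

	int off = copy_size - xdp_sinfo->data_length;	/* linear length */
	int i;

	copy_to_user(data_out, data, off);		/* linear part */
	for (i = 0; i < xdp_sinfo->nr_frags; i++) {	/* paged part */
		skb_frag_t *frag = &xdp_sinfo->frags[i];

		copy_to_user(data_out + off, xdp_get_frag_address(frag),
			     xdp_get_frag_size(frag));
		off += xdp_get_frag_size(frag);
	}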
Signed-off-by: Lorenzo Bianconi
--- net/bpf/test_run.c | 48 ++++++++++++++++++++++++++++++++++++++-------- 1 file changed, 40 insertions(+), 8 deletions(-)
diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c index 8a4cc15b89b7..bc575fd64e06 100644 --- a/net/bpf/test_run.c +++ b/net/bpf/test_run.c @@ -123,7 +123,8 @@ static int bpf_test_run(struct bpf_prog *prog, void *ctx, u32 repeat, static int bpf_test_finish(const union bpf_attr *kattr, union bpf_attr __user *uattr, const void *data, - u32 size, u32 retval, u32 duration) + struct xdp_shared_info *xdp_sinfo, u32 size, + u32 retval, u32 duration) { void __user *data_out = u64_to_user_ptr(kattr->test.data_out); int err = -EFAULT; @@ -138,8 +139,37 @@ static int bpf_test_finish(const union bpf_attr *kattr, err = -ENOSPC; } - if (data_out && copy_to_user(data_out, data, copy_size)) - goto out; + if (data_out) { + int len = xdp_sinfo ? copy_size - xdp_sinfo->data_length + : copy_size; + + if (copy_to_user(data_out, data, len)) + goto out; + + if (xdp_sinfo) { + int i, offset = len, data_len; + + for (i = 0; i < xdp_sinfo->nr_frags; i++) { + skb_frag_t *frag = &xdp_sinfo->frags[i]; + + if (offset >= copy_size) { + err = -ENOSPC; + break; + } + + data_len = min_t(int, copy_size - offset, + xdp_get_frag_size(frag)); + + if (copy_to_user(data_out + offset, + xdp_get_frag_address(frag), + data_len)) + goto out; + + offset += data_len; + } + } + } + if (copy_to_user(&uattr->test.data_size_out, &size, sizeof(size))) goto out; if (copy_to_user(&uattr->test.retval, &retval, sizeof(retval))) @@ -641,7 +671,8 @@ int bpf_prog_test_run_skb(struct bpf_prog *prog, const union bpf_attr *kattr, /* bpf program can never convert linear skb to non-linear */ if (WARN_ON_ONCE(skb_is_nonlinear(skb))) size = skb_headlen(skb); - ret = bpf_test_finish(kattr, uattr, skb->data, size, retval, duration); + ret = bpf_test_finish(kattr, uattr, skb->data, NULL, size, retval, + duration); if (!ret) ret = bpf_ctx_finish(kattr, uattr, ctx, sizeof(struct __sk_buff)); @@ -723,7 +754,8 @@ int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr, goto out; size = xdp.data_end - xdp.data + xdp_sinfo->data_length; - ret = bpf_test_finish(kattr, uattr, xdp.data, size, retval, duration); + ret = bpf_test_finish(kattr, uattr, xdp.data, xdp_sinfo, size, retval, + duration); out: bpf_prog_change_xdp(prog, NULL); @@ -809,8 +841,8 @@ int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog, if (ret < 0) goto out; - ret = bpf_test_finish(kattr, uattr, &flow_keys, sizeof(flow_keys), - retval, duration); + ret = bpf_test_finish(kattr, uattr, &flow_keys, NULL, + sizeof(flow_keys), retval, duration); if (!ret) ret = bpf_ctx_finish(kattr, uattr, user_ctx, sizeof(struct bpf_flow_keys)); @@ -914,7 +946,7 @@ int bpf_prog_test_run_sk_lookup(struct bpf_prog *prog, const union bpf_attr *kat user_ctx->cookie = sock_gen_cookie(ctx.selected_sk); } - ret = bpf_test_finish(kattr, uattr, NULL, 0, retval, duration); + ret = bpf_test_finish(kattr, uattr, NULL, NULL, 0, retval, duration); if (!ret) ret = bpf_ctx_finish(kattr, uattr, user_ctx, sizeof(*user_ctx));

From patchwork Fri Mar 19 21:47:28 2021
From: Lorenzo Bianconi
Subject: [PATCH v7 bpf-next 14/14] bpf: update xdp_adjust_tail selftest to include multi-buffer
Date: Fri, 19 Mar 2021 22:47:28 +0100

From: Eelco Chaudron

This change adds test cases for the multi-buffer scenarios when shrinking and growing.

Signed-off-by: Eelco Chaudron Signed-off-by: Lorenzo Bianconi
--- .../bpf/prog_tests/xdp_adjust_tail.c | 105 ++++++++++++++++++ .../bpf/progs/test_xdp_adjust_tail_grow.c | 16 +-- .../bpf/progs/test_xdp_adjust_tail_shrink.c | 32 +++++- 3 files changed, 142 insertions(+), 11 deletions(-)
diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_adjust_tail.c b/tools/testing/selftests/bpf/prog_tests/xdp_adjust_tail.c index d5c98f2cb12f..b936beaba797 100644 --- a/tools/testing/selftests/bpf/prog_tests/xdp_adjust_tail.c +++ b/tools/testing/selftests/bpf/prog_tests/xdp_adjust_tail.c @@ -130,6 +130,107 @@ void test_xdp_adjust_tail_grow2(void) bpf_object__close(obj); } +void test_xdp_adjust_mb_tail_shrink(void) +{ + const char *file = "./test_xdp_adjust_tail_shrink.o"; + __u32 duration, retval, size, exp_size; + struct bpf_object *obj; + static char buf[9000]; + int err, prog_fd; + + /* For the individual test cases, the first byte in the packet + * indicates which test will be run.
+ */ + + err = bpf_prog_load(file, BPF_PROG_TYPE_XDP, &obj, &prog_fd); + if (CHECK_FAIL(err)) + return; + + /* Test case removing 10 bytes from last frag, NOT freeing it */ + buf[0] = 0; + exp_size = sizeof(buf) - 10; + err = bpf_prog_test_run(prog_fd, 1, buf, sizeof(buf), + buf, &size, &retval, &duration); + + CHECK(err || retval != XDP_TX || size != exp_size, + "9k-10b", "err %d errno %d retval %d[%d] size %d[%u]\n", + err, errno, retval, XDP_TX, size, exp_size); + + /* Test case removing one of two pages, assuming 4K pages */ + buf[0] = 1; + exp_size = sizeof(buf) - 4100; + err = bpf_prog_test_run(prog_fd, 1, buf, sizeof(buf), + buf, &size, &retval, &duration); + + CHECK(err || retval != XDP_TX || size != exp_size, + "9k-1p", "err %d errno %d retval %d[%d] size %d[%u]\n", + err, errno, retval, XDP_TX, size, exp_size); + + /* Test case removing two pages resulting in a non mb xdp_buff */ + buf[0] = 2; + exp_size = sizeof(buf) - 8200; + err = bpf_prog_test_run(prog_fd, 1, buf, sizeof(buf), + buf, &size, &retval, &duration); + + CHECK(err || retval != XDP_TX || size != exp_size, + "9k-2p", "err %d errno %d retval %d[%d] size %d[%u]\n", + err, errno, retval, XDP_TX, size, exp_size); + + bpf_object__close(obj); +} + +void test_xdp_adjust_mb_tail_grow(void) +{ + const char *file = "./test_xdp_adjust_tail_grow.o"; + __u32 duration, retval, size, exp_size; + static char buf[16384]; + struct bpf_object *obj; + int err, i, prog_fd; + + err = bpf_prog_load(file, BPF_PROG_TYPE_XDP, &obj, &prog_fd); + if (CHECK_FAIL(err)) + return; + + /* Test case add 10 bytes to last frag */ + memset(buf, 1, sizeof(buf)); + size = 9000; + exp_size = size + 10; + err = bpf_prog_test_run(prog_fd, 1, buf, size, + buf, &size, &retval, &duration); + + CHECK(err || retval != XDP_TX || size != exp_size, + "9k+10b", "err %d retval %d[%d] size %d[%u]\n", + err, retval, XDP_TX, size, exp_size); + + for (i = 0; i < 9000; i++) + CHECK(buf[i] != 1, "9k+10b-old", + "Old data not all ok, offset %i is failing [%u]!\n", + i, buf[i]); + + for (i = 9000; i < 9010; i++) + CHECK(buf[i] != 0, "9k+10b-new", + "New data not all ok, offset %i is failing [%u]!\n", + i, buf[i]); + + for (i = 9010; i < sizeof(buf); i++) + CHECK(buf[i] != 1, "9k+10b-untouched", + "Unused data not all ok, offset %i is failing [%u]!\n", + i, buf[i]); + + /* Test a too large grow */ + memset(buf, 1, sizeof(buf)); + size = 9001; + exp_size = size; + err = bpf_prog_test_run(prog_fd, 1, buf, size, + buf, &size, &retval, &duration); + + CHECK(err || retval != XDP_DROP || size != exp_size, + "9k+10b", "err %d retval %d[%d] size %d[%u]\n", + err, retval, XDP_DROP, size, exp_size); + + bpf_object__close(obj); +} + void test_xdp_adjust_tail(void) { if (test__start_subtest("xdp_adjust_tail_shrink")) @@ -138,4 +239,8 @@ void test_xdp_adjust_tail(void) test_xdp_adjust_tail_grow(); if (test__start_subtest("xdp_adjust_tail_grow2")) test_xdp_adjust_tail_grow2(); + if (test__start_subtest("xdp_adjust_mb_tail_shrink")) + test_xdp_adjust_mb_tail_shrink(); + if (test__start_subtest("xdp_adjust_mb_tail_grow")) + test_xdp_adjust_mb_tail_grow(); }
diff --git a/tools/testing/selftests/bpf/progs/test_xdp_adjust_tail_grow.c b/tools/testing/selftests/bpf/progs/test_xdp_adjust_tail_grow.c index 3d66599eee2e..25ac7108a53f 100644 --- a/tools/testing/selftests/bpf/progs/test_xdp_adjust_tail_grow.c +++ b/tools/testing/selftests/bpf/progs/test_xdp_adjust_tail_grow.c @@ -7,20 +7,22 @@ int _xdp_adjust_tail_grow(struct xdp_md *xdp) { void *data_end = (void *)(long)xdp->data_end; void *data = (void
*)(long)xdp->data; - unsigned int data_len; int offset = 0; /* Data length determine test case */ - data_len = data_end - data; - if (data_len == 54) { /* sizeof(pkt_v4) */ + if (xdp->frame_length == 54) { /* sizeof(pkt_v4) */ offset = 4096; /* test too large offset */ - } else if (data_len == 74) { /* sizeof(pkt_v6) */ + } else if (xdp->frame_length == 74) { /* sizeof(pkt_v6) */ offset = 40; - } else if (data_len == 64) { + } else if (xdp->frame_length == 64) { offset = 128; - } else if (data_len == 128) { - offset = 4096 - 256 - 320 - data_len; /* Max tail grow 3520 */ + } else if (xdp->frame_length == 128) { + offset = 4096 - 256 - 320 - xdp->frame_length; /* Max tail grow 3520 */ + } else if (xdp->frame_length == 9000) { + offset = 10; + } else if (xdp->frame_length == 9001) { + offset = 4096; } else { return XDP_ABORTED; /* No matching test */ } diff --git a/tools/testing/selftests/bpf/progs/test_xdp_adjust_tail_shrink.c b/tools/testing/selftests/bpf/progs/test_xdp_adjust_tail_shrink.c index 22065a9cfb25..689450414d29 100644 --- a/tools/testing/selftests/bpf/progs/test_xdp_adjust_tail_shrink.c +++ b/tools/testing/selftests/bpf/progs/test_xdp_adjust_tail_shrink.c @@ -14,14 +14,38 @@ int _version SEC("version") = 1; SEC("xdp_adjust_tail_shrink") int _xdp_adjust_tail_shrink(struct xdp_md *xdp) { - void *data_end = (void *)(long)xdp->data_end; - void *data = (void *)(long)xdp->data; + __u8 *data_end = (void *)(long)xdp->data_end; + __u8 *data = (void *)(long)xdp->data; int offset = 0; - if (data_end - data == 54) /* sizeof(pkt_v4) */ + switch (xdp->frame_length) { + case 54: + /* sizeof(pkt_v4) */ offset = 256; /* shrink too much */ - else + break; + case 9000: + /* Multi-buffer test cases */ + if (data + 1 > data_end) + return XDP_DROP; + + switch (data[0]) { + case 0: + offset = 10; + break; + case 1: + offset = 4100; + break; + case 2: + offset = 8200; + break; + default: + return XDP_DROP; + } + break; + default: offset = 20; + break; + } if (bpf_xdp_adjust_tail(xdp, 0 - offset)) return XDP_DROP; return XDP_TX;