From patchwork Tue Jan 19 20:20:07 2021
X-Patchwork-Submitter: Lorenzo Bianconi
X-Patchwork-Id: 12030743
X-Patchwork-Delegate: bpf@iogearbox.net
From: Lorenzo Bianconi
To: bpf@vger.kernel.org,
    netdev@vger.kernel.org
Cc: lorenzo.bianconi@redhat.com, davem@davemloft.net, kuba@kernel.org,
    ast@kernel.org, daniel@iogearbox.net, shayagr@amazon.com,
    john.fastabend@gmail.com, dsahern@kernel.org, brouer@redhat.com,
    echaudro@redhat.com, jasowang@redhat.com, alexander.duyck@gmail.com,
    saeed@kernel.org, maciej.fijalkowski@intel.com, sameehj@amazon.com
Subject: [PATCH v6 bpf-next 1/8] xdp: introduce mb in xdp_buff/xdp_frame
Date: Tue, 19 Jan 2021 21:20:07 +0100
Message-Id: <6e92b9194558d7251bba5405b3138daaa8a1a950.1611086134.git.lorenzo@kernel.org>

Introduce a multi-buffer bit (mb) in the xdp_frame/xdp_buff data
structures to specify whether this is a linear buffer (mb = 0) or a
multi-buffer frame (mb = 1). In the latter case the shared_info area at
the end of the first buffer is properly initialized to link together
subsequent buffers.
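The layout change described above can be sketched in plain userspace C
(the struct and function names here are illustrative stand-ins, not the
kernel's definitions): one bit is carved out of the 32-bit frame_sz
field to flag a multi-buffer frame, as the patch does for xdp_buff
(31+1 bits) and for xdp_frame (23+1 bits).

```c
#include <assert.h>
#include <stdint.h>

/* Userspace sketch of the bitfield split; not the kernel struct. */
struct xdp_buff_sketch {
	uint32_t frame_sz:31; /* frame size to deduce reserved tailroom */
	uint32_t mb:1;        /* set once frags follow the first buffer */
};

/* Mirrors the patch's xdp_init_buff() change: every buffer starts out
 * linear, and a driver sets mb only when it actually adds a fragment. */
static void xdp_init_buff_sketch(struct xdp_buff_sketch *xdp,
				 uint32_t frame_sz)
{
	xdp->frame_sz = frame_sz;
	xdp->mb = 0;
}
```

Both fields still fit in the original 32-bit word, so the struct size
does not grow; the trade-off is one fewer bit of frame_sz range.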
Signed-off-by: Lorenzo Bianconi
---
 include/net/xdp.h | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/include/net/xdp.h b/include/net/xdp.h
index c4bfdc9a8b79..945767813fdd 100644
--- a/include/net/xdp.h
+++ b/include/net/xdp.h
@@ -73,7 +73,8 @@ struct xdp_buff {
 	void *data_hard_start;
 	struct xdp_rxq_info *rxq;
 	struct xdp_txq_info *txq;
-	u32 frame_sz; /* frame size to deduce data_hard_end/reserved tailroom*/
+	u32 frame_sz:31; /* frame size to deduce data_hard_end/reserved tailroom*/
+	u32 mb:1; /* xdp non-linear buffer */
 };
 
 static __always_inline void
@@ -81,6 +82,7 @@ xdp_init_buff(struct xdp_buff *xdp, u32 frame_sz, struct xdp_rxq_info *rxq)
 {
 	xdp->frame_sz = frame_sz;
 	xdp->rxq = rxq;
+	xdp->mb = 0;
 }
 
 static __always_inline void
@@ -116,7 +118,8 @@ struct xdp_frame {
 	u16 len;
 	u16 headroom;
 	u32 metasize:8;
-	u32 frame_sz:24;
+	u32 frame_sz:23;
+	u32 mb:1; /* xdp non-linear frame */
 	/* Lifetime of xdp_rxq_info is limited to NAPI/enqueue time,
 	 * while mem info is valid on remote CPU.
 	 */
@@ -178,6 +181,7 @@ void xdp_convert_frame_to_buff(struct xdp_frame *frame, struct xdp_buff *xdp)
 	xdp->data_end = frame->data + frame->len;
 	xdp->data_meta = frame->data - frame->metasize;
 	xdp->frame_sz = frame->frame_sz;
+	xdp->mb = frame->mb;
 }
 
 static inline
@@ -204,6 +208,7 @@ int xdp_update_frame_from_buff(struct xdp_buff *xdp,
 	xdp_frame->headroom = headroom - sizeof(*xdp_frame);
 	xdp_frame->metasize = metasize;
 	xdp_frame->frame_sz = xdp->frame_sz;
+	xdp_frame->mb = xdp->mb;
 
 	return 0;
 }

From patchwork Tue Jan 19 20:20:08 2021
X-Patchwork-Submitter: Lorenzo Bianconi
X-Patchwork-Id: 12030697
X-Patchwork-Delegate: bpf@iogearbox.net
From: Lorenzo Bianconi
To: bpf@vger.kernel.org, netdev@vger.kernel.org
Subject: [PATCH v6 bpf-next 2/8] xdp: add xdp_shared_info data structure
Date: Tue, 19 Jan 2021 21:20:08 +0100
Message-Id: <8dbb5349886e839ab3db60214b25ecd36c63a571.1611086134.git.lorenzo@kernel.org>

Introduce the xdp_shared_info data structure to carry information about
a "non-linear" xdp frame. xdp_shared_info aliases skb_shared_info, which
keeps most of the frags in the same cache line.
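The idea above can be sketched in userspace C. All names and sizes here
are illustrative (the real struct wraps skb_frag_t and lives in the
skb_shared_info tailroom); the sketch only shows the layout the patch
introduces, with nr_frags and data_length placed ahead of the frag array
so the hot fields share the first cache line.

```c
#include <assert.h>
#include <stdint.h>

#define MAX_FRAGS_SKETCH 17 /* stands in for MAX_SKB_FRAGS; illustrative */

/* Stand-in for skb_frag_t's bv_page/bv_offset/bv_len triple. */
struct frag_sketch {
	void *page;
	uint32_t offset;
	uint32_t len;
};

/* Userspace sketch of the xdp_shared_info layout from the patch. */
struct xdp_shared_info_sketch {
	uint16_t nr_frags;
	uint16_t data_length; /* paged area length */
	struct frag_sketch frags[MAX_FRAGS_SKETCH];
};

/* Append one fragment and keep the paged-area accounting in sync,
 * mirroring what a driver does with the xdp_set_frag_* helpers. */
static void xdp_sinfo_add_frag(struct xdp_shared_info_sketch *sinfo,
			       void *page, uint32_t offset, uint32_t len)
{
	struct frag_sketch *frag = &sinfo->frags[sinfo->nr_frags++];

	frag->page = page;
	frag->offset = offset;
	frag->len = len;
	sinfo->data_length += len;
}
```

The aliasing constraint is what makes this safe in the kernel: the patch
adds a BUILD_BUG_ON so xdp_shared_info can never outgrow the
skb_shared_info area it overlays.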
Introduce some xdp_shared_info helpers aligned to the skb_frag* ones.

Signed-off-by: Lorenzo Bianconi
---
 drivers/net/ethernet/marvell/mvneta.c | 62 +++++++++++++++------------
 include/net/xdp.h                     | 55 ++++++++++++++++++++++--
 2 files changed, 85 insertions(+), 32 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 6290bfb6494e..ed986ffff96b 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -2033,14 +2033,17 @@ int mvneta_rx_refill_queue(struct mvneta_port *pp, struct mvneta_rx_queue *rxq)
 static void
 mvneta_xdp_put_buff(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
-		    struct xdp_buff *xdp, struct skb_shared_info *sinfo,
+		    struct xdp_buff *xdp, struct xdp_shared_info *xdp_sinfo,
 		    int sync_len)
 {
 	int i;
 
-	for (i = 0; i < sinfo->nr_frags; i++)
+	for (i = 0; i < xdp_sinfo->nr_frags; i++) {
+		skb_frag_t *frag = &xdp_sinfo->frags[i];
+
 		page_pool_put_full_page(rxq->page_pool,
-					skb_frag_page(&sinfo->frags[i]), true);
+					xdp_get_frag_page(frag), true);
+	}
 	page_pool_put_page(rxq->page_pool, virt_to_head_page(xdp->data),
 			   sync_len, true);
 }
@@ -2179,7 +2182,7 @@ mvneta_run_xdp(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 	       struct bpf_prog *prog, struct xdp_buff *xdp, u32 frame_sz,
 	       struct mvneta_stats *stats)
 {
-	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
+	struct xdp_shared_info *xdp_sinfo = xdp_get_shared_info_from_buff(xdp);
 	unsigned int len, data_len, sync;
 	u32 ret, act;
 
@@ -2200,7 +2203,7 @@ mvneta_run_xdp(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 
 		err = xdp_do_redirect(pp->dev, xdp, prog);
 		if (unlikely(err)) {
-			mvneta_xdp_put_buff(pp, rxq, xdp, sinfo, sync);
+			mvneta_xdp_put_buff(pp, rxq, xdp, xdp_sinfo, sync);
 			ret = MVNETA_XDP_DROPPED;
 		} else {
 			ret = MVNETA_XDP_REDIR;
@@ -2211,7 +2214,7 @@ mvneta_run_xdp(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 	case XDP_TX:
 		ret = mvneta_xdp_xmit_back(pp, xdp);
 		if (ret != MVNETA_XDP_TX)
-			mvneta_xdp_put_buff(pp, rxq, xdp, sinfo, sync);
+			mvneta_xdp_put_buff(pp, rxq, xdp, xdp_sinfo, sync);
 		break;
 	default:
 		bpf_warn_invalid_xdp_action(act);
@@ -2220,7 +2223,7 @@ mvneta_run_xdp(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 		trace_xdp_exception(pp->dev, prog, act);
 		fallthrough;
 	case XDP_DROP:
-		mvneta_xdp_put_buff(pp, rxq, xdp, sinfo, sync);
+		mvneta_xdp_put_buff(pp, rxq, xdp, xdp_sinfo, sync);
 		ret = MVNETA_XDP_DROPPED;
 		stats->xdp_drop++;
 		break;
@@ -2241,9 +2244,9 @@ mvneta_swbm_rx_frame(struct mvneta_port *pp,
 {
 	unsigned char *data = page_address(page);
 	int data_len = -MVNETA_MH_SIZE, len;
+	struct xdp_shared_info *xdp_sinfo;
 	struct net_device *dev = pp->dev;
 	enum dma_data_direction dma_dir;
-	struct skb_shared_info *sinfo;
 
 	if (*size > MVNETA_MAX_RX_BUF_SIZE) {
 		len = MVNETA_MAX_RX_BUF_SIZE;
@@ -2266,8 +2269,8 @@ mvneta_swbm_rx_frame(struct mvneta_port *pp,
 	xdp_prepare_buff(xdp, data, pp->rx_offset_correction + MVNETA_MH_SIZE,
 			 data_len, false);
 
-	sinfo = xdp_get_shared_info_from_buff(xdp);
-	sinfo->nr_frags = 0;
+	xdp_sinfo = xdp_get_shared_info_from_buff(xdp);
+	xdp_sinfo->nr_frags = 0;
 }
 
 static void
@@ -2275,7 +2278,7 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,
 			    struct mvneta_rx_desc *rx_desc,
 			    struct mvneta_rx_queue *rxq,
 			    struct xdp_buff *xdp, int *size,
-			    struct skb_shared_info *xdp_sinfo,
+			    struct xdp_shared_info *xdp_sinfo,
 			    struct page *page)
 {
 	struct net_device *dev = pp->dev;
@@ -2298,13 +2301,13 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,
 	if (data_len > 0 && xdp_sinfo->nr_frags < MAX_SKB_FRAGS) {
 		skb_frag_t *frag = &xdp_sinfo->frags[xdp_sinfo->nr_frags++];
 
-		skb_frag_off_set(frag, pp->rx_offset_correction);
-		skb_frag_size_set(frag, data_len);
-		__skb_frag_set_page(frag, page);
+		xdp_set_frag_offset(frag, pp->rx_offset_correction);
+		xdp_set_frag_size(frag, data_len);
+		xdp_set_frag_page(frag, page);
 
 		/* last fragment */
 		if (len == *size) {
-			struct skb_shared_info *sinfo;
+			struct xdp_shared_info *sinfo;
 
 			sinfo = xdp_get_shared_info_from_buff(xdp);
 			sinfo->nr_frags = xdp_sinfo->nr_frags;
@@ -2321,10 +2324,13 @@ static struct sk_buff *
 mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 		      struct xdp_buff *xdp, u32 desc_status)
 {
-	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
-	int i, num_frags = sinfo->nr_frags;
+	struct xdp_shared_info *xdp_sinfo = xdp_get_shared_info_from_buff(xdp);
+	int i, num_frags = xdp_sinfo->nr_frags;
+	skb_frag_t frag_list[MAX_SKB_FRAGS];
 	struct sk_buff *skb;
 
+	memcpy(frag_list, xdp_sinfo->frags, sizeof(skb_frag_t) * num_frags);
+
 	skb = build_skb(xdp->data_hard_start, PAGE_SIZE);
 	if (!skb)
 		return ERR_PTR(-ENOMEM);
@@ -2336,12 +2342,12 @@ mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 	mvneta_rx_csum(pp, desc_status, skb);
 
 	for (i = 0; i < num_frags; i++) {
-		skb_frag_t *frag = &sinfo->frags[i];
+		struct page *page = xdp_get_frag_page(&frag_list[i]);
 
 		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
-				skb_frag_page(frag), skb_frag_off(frag),
-				skb_frag_size(frag), PAGE_SIZE);
-		page_pool_release_page(rxq->page_pool, skb_frag_page(frag));
+				page, xdp_get_frag_offset(&frag_list[i]),
+				xdp_get_frag_size(&frag_list[i]), PAGE_SIZE);
+		page_pool_release_page(rxq->page_pool, page);
 	}
 
 	return skb;
@@ -2354,7 +2360,7 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 {
 	int rx_proc = 0, rx_todo, refill, size = 0;
 	struct net_device *dev = pp->dev;
-	struct skb_shared_info sinfo;
+	struct xdp_shared_info xdp_sinfo;
 	struct mvneta_stats ps = {};
 	struct bpf_prog *xdp_prog;
 	u32 desc_status, frame_sz;
@@ -2363,7 +2369,7 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 	xdp_init_buff(&xdp_buf, PAGE_SIZE, &rxq->xdp_rxq);
 	xdp_buf.data_hard_start = NULL;
-	sinfo.nr_frags = 0;
+	xdp_sinfo.nr_frags = 0;
 
 	/* Get number of received packets */
 	rx_todo = mvneta_rxq_busy_desc_num_get(pp, rxq);
@@ -2407,7 +2413,7 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 			}
 			mvneta_swbm_add_rx_fragment(pp, rx_desc, rxq, &xdp_buf,
-						    &size, &sinfo, page);
+						    &size, &xdp_sinfo, page);
 		}
 
 		/* Middle or Last descriptor */
 		if (!(rx_status & MVNETA_RXD_LAST_DESC))
@@ -2415,7 +2421,7 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 			continue;
 
 		if (size) {
-			mvneta_xdp_put_buff(pp, rxq, &xdp_buf, &sinfo, -1);
+			mvneta_xdp_put_buff(pp, rxq, &xdp_buf, &xdp_sinfo, -1);
 			goto next;
 		}
 
@@ -2427,7 +2433,7 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 		if (IS_ERR(skb)) {
 			struct mvneta_pcpu_stats *stats = this_cpu_ptr(pp->stats);
 
-			mvneta_xdp_put_buff(pp, rxq, &xdp_buf, &sinfo, -1);
+			mvneta_xdp_put_buff(pp, rxq, &xdp_buf, &xdp_sinfo, -1);
 
 			u64_stats_update_begin(&stats->syncp);
 			stats->es.skb_alloc_error++;
@@ -2444,12 +2450,12 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 		napi_gro_receive(napi, skb);
 next:
 		xdp_buf.data_hard_start = NULL;
-		sinfo.nr_frags = 0;
+		xdp_sinfo.nr_frags = 0;
 	}
 	rcu_read_unlock();
 
 	if (xdp_buf.data_hard_start)
-		mvneta_xdp_put_buff(pp, rxq, &xdp_buf, &sinfo, -1);
+		mvneta_xdp_put_buff(pp, rxq, &xdp_buf, &xdp_sinfo, -1);
 
 	if (ps.xdp_redirect)
 		xdp_do_flush_map();
diff --git a/include/net/xdp.h b/include/net/xdp.h
index 945767813fdd..ca7c4fb30b4e 100644
--- a/include/net/xdp.h
+++ b/include/net/xdp.h
@@ -107,10 +107,54 @@ xdp_prepare_buff(struct xdp_buff *xdp, unsigned char *hard_start,
 	((xdp)->data_hard_start + (xdp)->frame_sz -	\
 	 SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))
 
-static inline struct skb_shared_info *
+struct xdp_shared_info {
+	u16 nr_frags;
+	u16 data_length; /* paged area length */
+	skb_frag_t frags[MAX_SKB_FRAGS];
+};
+
+static inline struct xdp_shared_info *
 xdp_get_shared_info_from_buff(struct xdp_buff *xdp)
 {
-	return (struct skb_shared_info *)xdp_data_hard_end(xdp);
+	BUILD_BUG_ON(sizeof(struct xdp_shared_info) >
+		     sizeof(struct skb_shared_info));
+	return (struct xdp_shared_info *)xdp_data_hard_end(xdp);
+}
+
+static inline struct page *xdp_get_frag_page(const skb_frag_t *frag)
+{
+	return frag->bv_page;
+}
+
+static inline unsigned int xdp_get_frag_offset(const skb_frag_t *frag)
+{
+	return frag->bv_offset;
+}
+
+static inline unsigned int xdp_get_frag_size(const skb_frag_t *frag)
+{
+	return frag->bv_len;
+}
+
+static inline void *xdp_get_frag_address(const skb_frag_t *frag)
+{
+	return page_address(xdp_get_frag_page(frag)) +
+	       xdp_get_frag_offset(frag);
+}
+
+static inline void xdp_set_frag_page(skb_frag_t *frag, struct page *page)
+{
+	frag->bv_page = page;
+}
+
+static inline void xdp_set_frag_offset(skb_frag_t *frag, u32 offset)
+{
+	frag->bv_offset = offset;
+}
+
+static inline void xdp_set_frag_size(skb_frag_t *frag, u32 size)
+{
+	frag->bv_len = size;
 }
 
 struct xdp_frame {
@@ -140,12 +184,15 @@ static __always_inline void xdp_frame_bulk_init(struct xdp_frame_bulk *bq)
 	bq->xa = NULL;
 }
 
-static inline struct skb_shared_info *
+static inline struct xdp_shared_info *
 xdp_get_shared_info_from_frame(struct xdp_frame *frame)
 {
 	void *data_hard_start = frame->data - frame->headroom - sizeof(*frame);
 
-	return (struct skb_shared_info *)(data_hard_start + frame->frame_sz -
+	/* xdp_shared_info struct must be aligned to skb_shared_info
+	 * area in buffer tailroom
+	 */
+	return (struct xdp_shared_info *)(data_hard_start + frame->frame_sz -
 					  SKB_DATA_ALIGN(sizeof(struct skb_shared_info)));
 }

From patchwork Tue Jan 19 20:20:09 2021
X-Patchwork-Submitter: Lorenzo Bianconi
X-Patchwork-Id: 12030701
X-Patchwork-Delegate: bpf@iogearbox.net
From: Lorenzo Bianconi
To: bpf@vger.kernel.org, netdev@vger.kernel.org
Subject: [PATCH v6 bpf-next 3/8] net: mvneta: update mb bit before passing
 the xdp buffer to eBPF layer
Date: Tue, 19 Jan 2021 21:20:09 +0100

Update the multi-buffer bit (mb) in xdp_buff to notify the XDP/eBPF
layer and XDP remote drivers if this is a "non-linear" XDP buffer.
Access xdp_shared_info only if the xdp_buff mb bit is set.

Signed-off-by: Lorenzo Bianconi
---
 drivers/net/ethernet/marvell/mvneta.c | 26 ++++++++++++++++++++------
 1 file changed, 20 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index ed986ffff96b..3df0602239ad 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -2038,12 +2038,16 @@ mvneta_xdp_put_buff(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 {
 	int i;
 
+	if (likely(!xdp->mb))
+		goto out;
+
 	for (i = 0; i < xdp_sinfo->nr_frags; i++) {
 		skb_frag_t *frag = &xdp_sinfo->frags[i];
 
 		page_pool_put_full_page(rxq->page_pool,
 					xdp_get_frag_page(frag), true);
 	}
+out:
 	page_pool_put_page(rxq->page_pool, virt_to_head_page(xdp->data),
 			   sync_len, true);
 }
@@ -2244,7 +2248,6 @@ mvneta_swbm_rx_frame(struct mvneta_port *pp,
 {
 	unsigned char *data = page_address(page);
 	int data_len = -MVNETA_MH_SIZE, len;
-	struct xdp_shared_info *xdp_sinfo;
 	struct net_device *dev = pp->dev;
 	enum dma_data_direction dma_dir;
 
@@ -2268,9 +2271,6 @@ mvneta_swbm_rx_frame(struct mvneta_port *pp,
 	prefetch(data);
 	xdp_prepare_buff(xdp, data, pp->rx_offset_correction + MVNETA_MH_SIZE,
 			 data_len, false);
-
-	xdp_sinfo = xdp_get_shared_info_from_buff(xdp);
-	xdp_sinfo->nr_frags = 0;
 }
 
 static void
@@ -2305,12 +2305,18 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,
 		xdp_set_frag_size(frag, data_len);
 		xdp_set_frag_page(frag, page);
 
+		if (!xdp->mb) {
+			xdp_sinfo->data_length = *size;
+			xdp->mb = 1;
+		}
 		/* last fragment */
 		if (len == *size) {
 			struct xdp_shared_info *sinfo;
 
 			sinfo = xdp_get_shared_info_from_buff(xdp);
 			sinfo->nr_frags = xdp_sinfo->nr_frags;
+			sinfo->data_length = xdp_sinfo->data_length;
+
 			memcpy(sinfo->frags, xdp_sinfo->frags,
 			       sinfo->nr_frags * sizeof(skb_frag_t));
 		}
@@ -2325,11 +2331,15 @@ mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 		      struct xdp_buff *xdp, u32 desc_status)
 {
 	struct xdp_shared_info *xdp_sinfo = xdp_get_shared_info_from_buff(xdp);
-	int i, num_frags = xdp_sinfo->nr_frags;
 	skb_frag_t frag_list[MAX_SKB_FRAGS];
+	int i, num_frags = 0;
 	struct sk_buff *skb;
 
-	memcpy(frag_list, xdp_sinfo->frags, sizeof(skb_frag_t) * num_frags);
+	if (unlikely(xdp->mb)) {
+		num_frags = xdp_sinfo->nr_frags;
+		memcpy(frag_list, xdp_sinfo->frags,
+		       sizeof(skb_frag_t) * num_frags);
+	}
 
 	skb = build_skb(xdp->data_hard_start, PAGE_SIZE);
 	if (!skb)
@@ -2341,6 +2351,9 @@ mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 	skb_put(skb, xdp->data_end - xdp->data);
 	mvneta_rx_csum(pp, desc_status, skb);
 
+	if (likely(!xdp->mb))
+		return skb;
+
 	for (i = 0; i < num_frags; i++) {
 		struct page *page = xdp_get_frag_page(&frag_list[i]);
 
@@ -2402,6 +2415,7 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 			frame_sz = size - ETH_FCS_LEN;
 			desc_status = rx_status;
 
+			xdp_buf.mb = 0;
 			mvneta_swbm_rx_frame(pp, rx_desc, rxq, &xdp_buf,
 					     &size, page);
 		} else {

From patchwork Tue Jan 19 20:20:10 2021
X-Patchwork-Submitter: Lorenzo Bianconi
X-Patchwork-Id: 12030707
X-Patchwork-Delegate: bpf@iogearbox.net
From: Lorenzo Bianconi
To: bpf@vger.kernel.org, netdev@vger.kernel.org
Subject: [PATCH v6 bpf-next 4/8] xdp: add multi-buff support to
 xdp_return_{buff/frame}
Date: Tue, 19 Jan 2021 21:20:10 +0100
Message-Id: <1542d03761990429276d72fcb94f7e91d63e7e1f.1611086134.git.lorenzo@kernel.org>

Take into account whether the received xdp_buff/xdp_frame is non-linear
when recycling/returning the frame memory to the allocator or into
xdp_frame_bulk.
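The tail-release logic this patch introduces can be sketched in
userspace C. Everything here is an illustrative stand-in (the real code
walks skb_frag_t entries and hands pages back via __xdp_return); the
sketch only shows the bookkeeping: drop the last num_frags fragments,
shrink the paged length, and clear the mb flag once no fragments remain.

```c
#include <assert.h>
#include <stdint.h>

/* Userspace stand-in for an xdp multi-buff; 17 mirrors MAX_SKB_FRAGS
 * on a 4K-page build, purely for illustration. */
struct mb_buff_sketch {
	unsigned int mb;        /* non-linear flag */
	uint16_t nr_frags;
	uint32_t data_length;   /* paged area length */
	uint32_t frag_len[17];
};

static void return_num_frags_sketch(struct mb_buff_sketch *xdp,
				    uint16_t num_frags)
{
	uint16_t i;

	if (!xdp->mb)
		return;

	if (num_frags > xdp->nr_frags)
		num_frags = xdp->nr_frags; /* clamp, like min_t() */
	for (i = 1; i <= num_frags; i++) {
		/* the real code returns the page to its allocator here */
		xdp->data_length -= xdp->frag_len[xdp->nr_frags - i];
	}
	xdp->nr_frags -= num_frags;
	xdp->mb = !!xdp->nr_frags; /* back to linear once frags are gone */
}
```

Walking from the tail keeps the remaining frags contiguous at the front
of the array, so no compaction pass is needed.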
Introduce xdp_return_num_frags_from_buff() to return a given number of
fragments from an xdp multi-buff, starting from the tail.

Signed-off-by: Lorenzo Bianconi
---
 include/net/xdp.h | 19 ++++++++++--
 net/core/xdp.c    | 76 ++++++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 92 insertions(+), 3 deletions(-)

diff --git a/include/net/xdp.h b/include/net/xdp.h
index ca7c4fb30b4e..a2e09031b346 100644
--- a/include/net/xdp.h
+++ b/include/net/xdp.h
@@ -286,6 +286,7 @@ void xdp_return_buff(struct xdp_buff *xdp);
 void xdp_flush_frame_bulk(struct xdp_frame_bulk *bq);
 void xdp_return_frame_bulk(struct xdp_frame *xdpf,
 			   struct xdp_frame_bulk *bq);
+void xdp_return_num_frags_from_buff(struct xdp_buff *xdp, u16 num_frags);
 
 /* When sending xdp_frame into the network stack, then there is no
  * return point callback, which is needed to release e.g. DMA-mapping
@@ -296,10 +297,24 @@ void __xdp_release_frame(void *data, struct xdp_mem_info *mem);
 static inline void xdp_release_frame(struct xdp_frame *xdpf)
 {
 	struct xdp_mem_info *mem = &xdpf->mem;
+	struct xdp_shared_info *xdp_sinfo;
+	int i;
 
 	/* Curr only page_pool needs this */
-	if (mem->type == MEM_TYPE_PAGE_POOL)
-		__xdp_release_frame(xdpf->data, mem);
+	if (mem->type != MEM_TYPE_PAGE_POOL)
+		return;
+
+	if (likely(!xdpf->mb))
+		goto out;
+
+	xdp_sinfo = xdp_get_shared_info_from_frame(xdpf);
+	for (i = 0; i < xdp_sinfo->nr_frags; i++) {
+		struct page *page = xdp_get_frag_page(&xdp_sinfo->frags[i]);
+
+		__xdp_release_frame(page_address(page), mem);
+	}
+out:
+	__xdp_release_frame(xdpf->data, mem);
 }
 
 int xdp_rxq_info_reg(struct xdp_rxq_info *xdp_rxq,
diff --git a/net/core/xdp.c b/net/core/xdp.c
index 0d2630a35c3e..af9b9c1194fc 100644
--- a/net/core/xdp.c
+++ b/net/core/xdp.c
@@ -374,12 +374,38 @@ static void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct,
 
 void xdp_return_frame(struct xdp_frame *xdpf)
 {
+	struct xdp_shared_info *xdp_sinfo;
+	int i;
+
+	if (likely(!xdpf->mb))
+		goto out;
+
+	xdp_sinfo = xdp_get_shared_info_from_frame(xdpf);
+	for (i = 0; i < xdp_sinfo->nr_frags; i++) {
+		struct page *page = xdp_get_frag_page(&xdp_sinfo->frags[i]);
+
+		__xdp_return(page_address(page), &xdpf->mem, false, NULL);
+	}
+out:
 	__xdp_return(xdpf->data, &xdpf->mem, false, NULL);
 }
 EXPORT_SYMBOL_GPL(xdp_return_frame);
 
 void xdp_return_frame_rx_napi(struct xdp_frame *xdpf)
 {
+	struct xdp_shared_info *xdp_sinfo;
+	int i;
+
+	if (likely(!xdpf->mb))
+		goto out;
+
+	xdp_sinfo = xdp_get_shared_info_from_frame(xdpf);
+	for (i = 0; i < xdp_sinfo->nr_frags; i++) {
+		struct page *page = xdp_get_frag_page(&xdp_sinfo->frags[i]);
+
+		__xdp_return(page_address(page), &xdpf->mem, true, NULL);
+	}
+out:
 	__xdp_return(xdpf->data, &xdpf->mem, true, NULL);
 }
 EXPORT_SYMBOL_GPL(xdp_return_frame_rx_napi);
 
@@ -415,7 +441,7 @@ void xdp_return_frame_bulk(struct xdp_frame *xdpf,
 	struct xdp_mem_allocator *xa;
 
 	if (mem->type != MEM_TYPE_PAGE_POOL) {
-		__xdp_return(xdpf->data, &xdpf->mem, false, NULL);
+		xdp_return_frame(xdpf);
 		return;
 	}
 
@@ -434,15 +460,63 @@ void xdp_return_frame_bulk(struct xdp_frame *xdpf,
 		bq->xa = rhashtable_lookup(mem_id_ht, &mem->id, mem_id_rht_params);
 	}
 
+	if (unlikely(xdpf->mb)) {
+		struct xdp_shared_info *xdp_sinfo;
+		int i;
+
+		xdp_sinfo = xdp_get_shared_info_from_frame(xdpf);
+		for (i = 0; i < xdp_sinfo->nr_frags; i++) {
+			skb_frag_t *frag = &xdp_sinfo->frags[i];
+
+			bq->q[bq->count++] = xdp_get_frag_address(frag);
+			if (bq->count == XDP_BULK_QUEUE_SIZE)
+				xdp_flush_frame_bulk(bq);
+		}
+	}
 	bq->q[bq->count++] = xdpf->data;
 }
 EXPORT_SYMBOL_GPL(xdp_return_frame_bulk);
 
 void xdp_return_buff(struct xdp_buff *xdp)
 {
+	struct xdp_shared_info *xdp_sinfo;
+	int i;
+
+	if (likely(!xdp->mb))
+		goto out;
+
+	xdp_sinfo = xdp_get_shared_info_from_buff(xdp);
+	for (i = 0; i < xdp_sinfo->nr_frags; i++) {
+		struct page *page = xdp_get_frag_page(&xdp_sinfo->frags[i]);
+
+		__xdp_return(page_address(page), &xdp->rxq->mem, true, xdp);
+	}
+out:
 	__xdp_return(xdp->data, &xdp->rxq->mem, true, xdp);
 }
+void xdp_return_num_frags_from_buff(struct xdp_buff *xdp, u16 num_frags)
+{
+	struct xdp_shared_info *xdp_sinfo;
+	int i;
+
+	if (unlikely(!xdp->mb))
+		return;
+
+	xdp_sinfo = xdp_get_shared_info_from_buff(xdp);
+	num_frags = min_t(u16, num_frags, xdp_sinfo->nr_frags);
+	for (i = 1; i <= num_frags; i++) {
+		skb_frag_t *frag = &xdp_sinfo->frags[xdp_sinfo->nr_frags - i];
+		struct page *page = xdp_get_frag_page(frag);
+
+		xdp_sinfo->data_length -= xdp_get_frag_size(frag);
+		__xdp_return(page_address(page), &xdp->rxq->mem, false, NULL);
+	}
+	xdp_sinfo->nr_frags -= num_frags;
+	xdp->mb = !!xdp_sinfo->nr_frags;
+}
+EXPORT_SYMBOL_GPL(xdp_return_num_frags_from_buff);
+
 /* Only called for MEM_TYPE_PAGE_POOL see xdp.h */
 void __xdp_release_frame(void *data, struct xdp_mem_info *mem)
 {

From patchwork Tue Jan 19 20:20:11 2021
X-Patchwork-Submitter: Lorenzo Bianconi
X-Patchwork-Id: 12030699
X-Patchwork-Delegate: bpf@iogearbox.net
From: Lorenzo Bianconi
To: bpf@vger.kernel.org, netdev@vger.kernel.org
Subject: [PATCH v6 bpf-next 5/8] net: mvneta: add multi buffer support to
 XDP_TX
Date: Tue, 19 Jan 2021 21:20:11 +0100

Introduce the capability to map a non-linear xdp buffer when running
mvneta_xdp_submit_frame() for XDP_TX and XDP_REDIRECT.

Signed-off-by: Lorenzo Bianconi
---
 drivers/net/ethernet/marvell/mvneta.c | 94 ++++++++++++++++-----------
 1 file changed, 56 insertions(+), 38 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 3df0602239ad..c273e674e3de 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -1857,8 +1857,8 @@ static void mvneta_txq_bufs_free(struct mvneta_port *pp,
 			bytes_compl += buf->skb->len;
 			pkts_compl++;
 			dev_kfree_skb_any(buf->skb);
-		} else if (buf->type == MVNETA_TYPE_XDP_TX ||
-			   buf->type == MVNETA_TYPE_XDP_NDO) {
+		} else if ((buf->type == MVNETA_TYPE_XDP_TX ||
+			    buf->type == MVNETA_TYPE_XDP_NDO) && buf->xdpf) {
 			if (napi && buf->type == MVNETA_TYPE_XDP_TX)
 				xdp_return_frame_rx_napi(buf->xdpf);
 			else
@@ -2054,45 +2054,64 @@ mvneta_xdp_put_buff(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 
 static int
 mvneta_xdp_submit_frame(struct mvneta_port *pp, struct mvneta_tx_queue *txq,
-			struct xdp_frame *xdpf, bool dma_map)
+			struct xdp_frame *xdpf, int *nxmit_byte, bool dma_map)
 {
-	struct mvneta_tx_desc *tx_desc;
-	struct mvneta_tx_buf *buf;
-	dma_addr_t dma_addr;
+	struct xdp_shared_info *xdp_sinfo = xdp_get_shared_info_from_frame(xdpf);
+	int i, num_frames = xdpf->mb ? xdp_sinfo->nr_frags + 1 : 1;
+	struct mvneta_tx_desc *tx_desc = NULL;
+	struct page *page;
 
-	if (txq->count >= txq->tx_stop_threshold)
+	if (txq->count + num_frames >= txq->size)
 		return MVNETA_XDP_DROPPED;
 
-	tx_desc = mvneta_txq_next_desc_get(txq);
+	for (i = 0; i < num_frames; i++) {
+		struct mvneta_tx_buf *buf = &txq->buf[txq->txq_put_index];
+		skb_frag_t *frag = i ? &xdp_sinfo->frags[i - 1] : NULL;
+		int len = i ? xdp_get_frag_size(frag) : xdpf->len;
+		dma_addr_t dma_addr;
 
-	buf = &txq->buf[txq->txq_put_index];
-	if (dma_map) {
-		/* ndo_xdp_xmit */
-		dma_addr = dma_map_single(pp->dev->dev.parent, xdpf->data,
-					  xdpf->len, DMA_TO_DEVICE);
-		if (dma_mapping_error(pp->dev->dev.parent, dma_addr)) {
-			mvneta_txq_desc_put(txq);
-			return MVNETA_XDP_DROPPED;
+		tx_desc = mvneta_txq_next_desc_get(txq);
+		if (dma_map) {
+			/* ndo_xdp_xmit */
+			void *data;
+
+			data = frag ? xdp_get_frag_address(frag) : xdpf->data;
+			dma_addr = dma_map_single(pp->dev->dev.parent, data,
+						  len, DMA_TO_DEVICE);
+			if (dma_mapping_error(pp->dev->dev.parent, dma_addr)) {
+				for (; i >= 0; i--)
+					mvneta_txq_desc_put(txq);
+				return MVNETA_XDP_DROPPED;
+			}
+			buf->type = MVNETA_TYPE_XDP_NDO;
+		} else {
+			page = frag ? xdp_get_frag_page(frag)
+				    : virt_to_page(xdpf->data);
+			dma_addr = page_pool_get_dma_addr(page);
+			if (frag)
+				dma_addr += xdp_get_frag_offset(frag);
+			else
+				dma_addr += sizeof(*xdpf) + xdpf->headroom;
+			dma_sync_single_for_device(pp->dev->dev.parent,
+						   dma_addr, len,
+						   DMA_BIDIRECTIONAL);
+			buf->type = MVNETA_TYPE_XDP_TX;
 		}
-		buf->type = MVNETA_TYPE_XDP_NDO;
-	} else {
-		struct page *page = virt_to_page(xdpf->data);
+		buf->xdpf = i ? NULL : xdpf;
+
+		tx_desc->command = !i ? MVNETA_TXD_F_DESC : 0;
+		tx_desc->buf_phys_addr = dma_addr;
+		tx_desc->data_size = len;
+		*nxmit_byte += len;
 
-		dma_addr = page_pool_get_dma_addr(page) +
-			   sizeof(*xdpf) + xdpf->headroom;
-		dma_sync_single_for_device(pp->dev->dev.parent, dma_addr,
-					   xdpf->len, DMA_BIDIRECTIONAL);
-		buf->type = MVNETA_TYPE_XDP_TX;
+		mvneta_txq_inc_put(txq);
 	}
-	buf->xdpf = xdpf;
 
-	tx_desc->command = MVNETA_TXD_FLZ_DESC;
-	tx_desc->buf_phys_addr = dma_addr;
-	tx_desc->data_size = xdpf->len;
+	/* last descriptor */
+	tx_desc->command |= MVNETA_TXD_L_DESC | MVNETA_TXD_Z_PAD;
 
-	mvneta_txq_inc_put(txq);
-	txq->pending++;
-	txq->count++;
+	txq->pending += num_frames;
+	txq->count += num_frames;
 
 	return MVNETA_XDP_TX;
 }
@@ -2103,8 +2122,8 @@ mvneta_xdp_xmit_back(struct mvneta_port *pp, struct xdp_buff *xdp)
 	struct mvneta_pcpu_stats *stats = this_cpu_ptr(pp->stats);
 	struct mvneta_tx_queue *txq;
 	struct netdev_queue *nq;
+	int cpu, nxmit_byte = 0;
 	struct xdp_frame *xdpf;
-	int cpu;
 	u32 ret;
 
 	xdpf = xdp_convert_buff_to_frame(xdp);
@@ -2116,10 +2135,10 @@ mvneta_xdp_xmit_back(struct mvneta_port *pp, struct xdp_buff *xdp)
 	nq = netdev_get_tx_queue(pp->dev, txq->id);
 
 	__netif_tx_lock(nq, cpu);
-	ret =
mvneta_xdp_submit_frame(pp, txq, xdpf, false); + ret = mvneta_xdp_submit_frame(pp, txq, xdpf, &nxmit_byte, false); if (ret == MVNETA_XDP_TX) { u64_stats_update_begin(&stats->syncp); - stats->es.ps.tx_bytes += xdpf->len; + stats->es.ps.tx_bytes += nxmit_byte; stats->es.ps.tx_packets++; stats->es.ps.xdp_tx++; u64_stats_update_end(&stats->syncp); @@ -2158,10 +2177,9 @@ mvneta_xdp_xmit(struct net_device *dev, int num_frame, __netif_tx_lock(nq, cpu); for (i = 0; i < num_frame; i++) { - ret = mvneta_xdp_submit_frame(pp, txq, frames[i], true); - if (ret == MVNETA_XDP_TX) { - nxmit_byte += frames[i]->len; - } else { + ret = mvneta_xdp_submit_frame(pp, txq, frames[i], &nxmit_byte, + true); + if (ret != MVNETA_XDP_TX) { xdp_return_frame_rx_napi(frames[i]); nxmit--; } From patchwork Tue Jan 19 20:20:12 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 12030709 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.2 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E5F72C433DB for ; Tue, 19 Jan 2021 20:24:46 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 7F3452313F for ; Tue, 19 Jan 2021 20:24:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2389698AbhASUYL (ORCPT ); Tue, 19 Jan 2021 15:24:11 -0500 Received: from mail.kernel.org ([198.145.29.99]:56202 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id 
S1729336AbhASUWQ (ORCPT ); Tue, 19 Jan 2021 15:22:16 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 531252312D; Tue, 19 Jan 2021 20:20:52 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1611087655; bh=DNabHv0vbOmpYQ+NoLBQkgO3fFAh+v/6/BvP+O4xu7E=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=QvKO34Ef0iy2dI2QjK0rRtq2pXSFpAX0eP+HwdTPgqsu21Qw77KIiOFRgaJR8Yec5 vzTqKOwvMzfbx2DCTHsd/TfaF8K6Ory+rhU6xe7a1z0BP3eSjrWJiN6swP6f3EdvZb Oa6UdSnHjzMxuuCgLivR4+xxW9S8rpJCUNkf46lOO1GGG1rmjQHduHZJfehapFKZ+g 9AzgAAgfcXAJhVJnT75ANlDhSOtbQjE5v7Y7TLqYPflfaS2a8UlJVa7icAln9FVHIY NVaWd4mSP/2/lxBCvlJY95kOr8fh1z+4HyA9beZIyS5eRrPoqc50OevXROn3RmwbEA TqAGSdcSlgAEw== From: Lorenzo Bianconi To: bpf@vger.kernel.org, netdev@vger.kernel.org Cc: lorenzo.bianconi@redhat.com, davem@davemloft.net, kuba@kernel.org, ast@kernel.org, daniel@iogearbox.net, shayagr@amazon.com, john.fastabend@gmail.com, dsahern@kernel.org, brouer@redhat.com, echaudro@redhat.com, jasowang@redhat.com, alexander.duyck@gmail.com, saeed@kernel.org, maciej.fijalkowski@intel.com, sameehj@amazon.com Subject: [PATCH v6 bpf-next 6/8] net: mvneta: enable jumbo frames for XDP Date: Tue, 19 Jan 2021 21:20:12 +0100 Message-Id: <6e8d1cc2fb0af6e1b54e97980501ebf6a3e50833.1611086134.git.lorenzo@kernel.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Enable the capability to receive jumbo frames even if the interface is running in XDP mode Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/marvell/mvneta.c | 10 ---------- 1 file changed, 10 deletions(-) diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c index c273e674e3de..45a3a8cb12fa 100644 --- a/drivers/net/ethernet/marvell/mvneta.c +++ b/drivers/net/ethernet/marvell/mvneta.c @@ -3763,11 +3763,6 @@ static int mvneta_change_mtu(struct net_device 
*dev, int mtu) mtu = ALIGN(MVNETA_RX_PKT_SIZE(mtu), 8); } - if (pp->xdp_prog && mtu > MVNETA_MAX_RX_BUF_SIZE) { - netdev_info(dev, "Illegal MTU value %d for XDP mode\n", mtu); - return -EINVAL; - } - dev->mtu = mtu; if (!netif_running(dev)) { @@ -4465,11 +4460,6 @@ static int mvneta_xdp_setup(struct net_device *dev, struct bpf_prog *prog, struct mvneta_port *pp = netdev_priv(dev); struct bpf_prog *old_prog; - if (prog && dev->mtu > MVNETA_MAX_RX_BUF_SIZE) { - NL_SET_ERR_MSG_MOD(extack, "MTU too large for XDP"); - return -EOPNOTSUPP; - } - if (pp->bm_priv) { NL_SET_ERR_MSG_MOD(extack, "Hardware Buffer Management not supported on XDP"); From patchwork Tue Jan 19 20:20:13 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 12030729 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.2 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 259FDC433DB for ; Tue, 19 Jan 2021 20:37:10 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id BDFED23108 for ; Tue, 19 Jan 2021 20:37:09 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729703AbhASUhG (ORCPT ); Tue, 19 Jan 2021 15:37:06 -0500 Received: from mail.kernel.org ([198.145.29.99]:56204 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729933AbhASUWQ (ORCPT ); Tue, 19 Jan 2021 15:22:16 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 9E4DD23137; Tue, 19 
Jan 2021 20:20:56 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1611087659; bh=qxAMkTcvy7a03HNUrSCHfTmOJmHHhmfl7L9icGtloug=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Z5letdSC5mgmBS65iL+A0bmd8CrgN5P5NZp3pS01KNnsJUBqCi914T6aOrYnKAO8B jibj99qd31oZQQanPdMz9kcLhQzGn9/ZArxA3fmLCDkXS2AGqbMRF789P3glCJERVK oROPxRAUQ+ILZo+JRzy7SCktxiT9DRynxvo7fb88JHU4jO1pfxGLuthHoE6t956yca Qp0xN3NPdFb7xiK4KWHUyT7t4nMO6HLPIKGnwlmBjanQtsfe/oRMLKm9jKmVNdClf2 3Z4jzi+dUYrEjdtK7di/iRQmjl7lirghP/590FPhMbkm6T2xr2+P1HrkA9G16ochhn nigNg2CkHcTVg== From: Lorenzo Bianconi To: bpf@vger.kernel.org, netdev@vger.kernel.org Cc: lorenzo.bianconi@redhat.com, davem@davemloft.net, kuba@kernel.org, ast@kernel.org, daniel@iogearbox.net, shayagr@amazon.com, john.fastabend@gmail.com, dsahern@kernel.org, brouer@redhat.com, echaudro@redhat.com, jasowang@redhat.com, alexander.duyck@gmail.com, saeed@kernel.org, maciej.fijalkowski@intel.com, sameehj@amazon.com Subject: [PATCH v6 bpf-next 7/8] net: xdp: add multi-buff support to xdp_build_skb_from_fram Date: Tue, 19 Jan 2021 21:20:13 +0100 Message-Id: X-Mailer: git-send-email 2.29.2 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Introduce xdp multi-buff support to __xdp_build_skb_from_frame/xdp_build_skb_from_fram utility routines. 
Signed-off-by: Lorenzo Bianconi
---
 net/core/xdp.c | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/net/core/xdp.c b/net/core/xdp.c
index af9b9c1194fc..a7cd1ffbab6b 100644
--- a/net/core/xdp.c
+++ b/net/core/xdp.c
@@ -592,9 +592,21 @@ struct sk_buff *__xdp_build_skb_from_frame(struct xdp_frame *xdpf,
 					   struct sk_buff *skb,
 					   struct net_device *dev)
 {
+	skb_frag_t frag_list[MAX_SKB_FRAGS];
 	unsigned int headroom, frame_size;
+	int i, num_frags = 0;
 	void *hard_start;
 
+	/* XDP multi-buff frame */
+	if (unlikely(xdpf->mb)) {
+		struct xdp_shared_info *xdp_sinfo;
+
+		xdp_sinfo = xdp_get_shared_info_from_frame(xdpf);
+		num_frags = xdp_sinfo->nr_frags;
+		memcpy(frag_list, xdp_sinfo->frags,
+		       sizeof(skb_frag_t) * num_frags);
+	}
+
 	/* Part of headroom was reserved to xdpf */
 	headroom = sizeof(*xdpf) + xdpf->headroom;
 
@@ -613,6 +625,20 @@ struct sk_buff *__xdp_build_skb_from_frame(struct xdp_frame *xdpf,
 	if (xdpf->metasize)
 		skb_metadata_set(skb, xdpf->metasize);
 
+	/* Single-buff XDP frame */
+	if (likely(!num_frags))
+		goto out;
+
+	for (i = 0; i < num_frags; i++) {
+		struct page *page = xdp_get_frag_page(&frag_list[i]);
+
+		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
+				page, xdp_get_frag_offset(&frag_list[i]),
+				xdp_get_frag_size(&frag_list[i]),
+				xdpf->frame_sz);
+	}
+
+out:
 	/* Essential SKB info: protocol and skb->dev */
 	skb->protocol = eth_type_trans(skb, dev);

From patchwork Tue Jan 19 20:20:14 2021
From: Lorenzo Bianconi
X-Patchwork-Id: 12030703
Subject: [PATCH v6 bpf-next 8/8] bpf: add multi-buff support to the bpf_xdp_adjust_tail() API
Date: Tue, 19 Jan 2021 21:20:14 +0100
Message-Id: <9248f1347d67587010274cdd488fe5a0008e3f9c.1611086134.git.lorenzo@kernel.org>

From: Eelco Chaudron

This change adds support for tail growing and shrinking for XDP multi-buff.

Signed-off-by: Eelco Chaudron
Signed-off-by: Lorenzo Bianconi
---
 include/net/xdp.h |  5 ++++
 net/core/filter.c | 63 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 68 insertions(+)

diff --git a/include/net/xdp.h b/include/net/xdp.h
index a2e09031b346..4bc86538e052 100644
--- a/include/net/xdp.h
+++ b/include/net/xdp.h
@@ -157,6 +157,11 @@ static inline void xdp_set_frag_size(skb_frag_t *frag, u32 size)
 	frag->bv_len = size;
 }
 
+static inline unsigned int xdp_get_frag_tailroom(const skb_frag_t *frag)
+{
+	return PAGE_SIZE - xdp_get_frag_size(frag) - xdp_get_frag_offset(frag);
+}
+
 struct xdp_frame {
 	void *data;
 	u16 len;
diff --git a/net/core/filter.c b/net/core/filter.c
index 9ab94e90d660..86d0adc6cecd 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -3860,11 +3860,74 @@ static const struct bpf_func_proto bpf_xdp_adjust_head_proto = {
 	.arg2_type	= ARG_ANYTHING,
 };
 
+static int bpf_xdp_mb_adjust_tail(struct xdp_buff *xdp, int offset)
+{
+	struct xdp_shared_info *xdp_sinfo = xdp_get_shared_info_from_buff(xdp);
+
+	if (unlikely(xdp_sinfo->nr_frags == 0))
+		return -EINVAL;
+
+	if (offset >= 0) {
+		skb_frag_t *frag = &xdp_sinfo->frags[xdp_sinfo->nr_frags - 1];
+		int size;
+
+		if (unlikely(offset > xdp_get_frag_tailroom(frag)))
+			return -EINVAL;
+
+		size = xdp_get_frag_size(frag);
+		memset(xdp_get_frag_address(frag) + size, 0, offset);
+		xdp_set_frag_size(frag, size + offset);
+		xdp_sinfo->data_length += offset;
+	} else {
+		int i, frags_to_free = 0;
+
+		offset = abs(offset);
+
+		if (unlikely(offset > ((int)(xdp->data_end - xdp->data) +
+				       xdp_sinfo->data_length -
+				       ETH_HLEN)))
+			return -EINVAL;
+
+		for (i = xdp_sinfo->nr_frags - 1; i >= 0 && offset > 0; i--) {
+			skb_frag_t *frag = &xdp_sinfo->frags[i];
+			int size = xdp_get_frag_size(frag);
+			int shrink = min_t(int, offset, size);
+
+			offset -= shrink;
+			if (likely(size - shrink > 0)) {
+				/* When updating the final fragment we have
+				 * to adjust the data_length in line.
+				 */
+				xdp_sinfo->data_length -= shrink;
+				xdp_set_frag_size(frag, size - shrink);
+				break;
+			}
+
+			/* When we free the fragments,
+			 * xdp_return_frags_from_buff() will take care
+			 * of updating the xdp share info data_length.
+			 */
+			frags_to_free++;
+		}
+
+		if (unlikely(frags_to_free))
+			xdp_return_num_frags_from_buff(xdp, frags_to_free);
+
+		if (unlikely(offset > 0))
+			xdp->data_end -= offset;
+	}
+
+	return 0;
+}
+
 BPF_CALL_2(bpf_xdp_adjust_tail, struct xdp_buff *, xdp, int, offset)
 {
 	void *data_hard_end = xdp_data_hard_end(xdp); /* use xdp->frame_sz */
 	void *data_end = xdp->data_end + offset;
 
+	if (unlikely(xdp->mb))
+		return bpf_xdp_mb_adjust_tail(xdp, offset);
+
 	/* Notice that xdp_data_hard_end have reserved some tailroom */
 	if (unlikely(data_end > data_hard_end))
 		return -EINVAL;