From patchwork Wed Mar 15 04:10:36 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Xuan Zhuo
X-Patchwork-Id: 13175281
X-Patchwork-Delegate: kuba@kernel.org
X-Patchwork-State: RFC
From: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
To: netdev@vger.kernel.org
Cc: "Michael S. Tsirkin", Jason Wang, "David S. Miller", Eric Dumazet,
    Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
    Jesper Dangaard Brouer, John Fastabend,
    virtualization@lists.linux-foundation.org, bpf@vger.kernel.org
Subject: [RFC net-next 2/8] virtio_net: mergeable xdp: introduce mergeable_xdp_prepare
Date: Wed, 15 Mar 2023 12:10:36 +0800
Message-Id: <20230315041042.88138-3-xuanzhuo@linux.alibaba.com>
X-Mailer: git-send-email 2.32.0.3.g01195cf9f
In-Reply-To: <20230315041042.88138-1-xuanzhuo@linux.alibaba.com>
References: <20230315041042.88138-1-xuanzhuo@linux.alibaba.com>
X-Git-Hash: a046238c058f
X-Mailing-List: bpf@vger.kernel.org

Separate the XDP preparation logic out of receive_mergeable() into a new
helper, mergeable_xdp_prepare(), to simplify the XDP execution path.

The main logic here is that when the headroom is insufficient, we need to
allocate a new page and recalculate the offset. Note that if a new page is
allocated, the variable page is updated to refer to that new page.
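For clarity, the intended calling convention can be condensed as follows
(this sketch simply mirrors the hunk added to receive_mergeable() below;
the comment is a summary of the helper's behaviour as implemented in this
patch, not additional code):

	/* On success, the returned pointer is the start of the packet data
	 * and *page may now refer to a newly allocated page (the old page
	 * has already been released with put_page()). On failure, NULL is
	 * returned and the caller bails out via err_xdp.
	 */
	data = mergeable_xdp_prepare(vi, rq, xdp_prog, ctx, &frame_sz,
				     &num_buf, &page, offset, &len, hdr);
	if (!data)
		goto err_xdp;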
Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio_net.c | 135 ++++++++++++++++++++++-----------------
 1 file changed, 77 insertions(+), 58 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 36be30319660..f4d01693e568 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -1092,6 +1092,79 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev,
 	return 0;
 }
 
+static void *mergeable_xdp_prepare(struct virtnet_info *vi,
+				   struct receive_queue *rq,
+				   struct bpf_prog *xdp_prog,
+				   void *ctx,
+				   unsigned int *frame_sz,
+				   int *num_buf,
+				   struct page **page,
+				   int offset,
+				   unsigned int *len,
+				   struct virtio_net_hdr_mrg_rxbuf *hdr)
+{
+	unsigned int truesize = mergeable_ctx_to_truesize(ctx);
+	unsigned int headroom = mergeable_ctx_to_headroom(ctx);
+	struct page *xdp_page;
+	unsigned int xdp_room;
+
+	/* Transient failure which in theory could occur if
+	 * in-flight packets from before XDP was enabled reach
+	 * the receive path after XDP is loaded.
+	 */
+	if (unlikely(hdr->hdr.gso_type))
+		return NULL;
+
+	/* Now XDP core assumes frag size is PAGE_SIZE, but buffers
+	 * with headroom may add hole in truesize, which
+	 * make their length exceed PAGE_SIZE. So we disabled the
+	 * hole mechanism for xdp. See add_recvbuf_mergeable().
+	 */
+	*frame_sz = truesize;
+
+	/* This happens when headroom is not enough because
+	 * of the buffer was prefilled before XDP is set.
+	 * This should only happen for the first several packets.
+	 * In fact, vq reset can be used here to help us clean up
+	 * the prefilled buffers, but many existing devices do not
+	 * support it, and we don't want to bother users who are
+	 * using xdp normally.
+	 */
+	if (!xdp_prog->aux->xdp_has_frags &&
+	    (*num_buf > 1 || headroom < virtnet_get_headroom(vi))) {
+		/* linearize data for XDP */
+		xdp_page = xdp_linearize_page(rq, num_buf,
+					      *page, offset,
+					      VIRTIO_XDP_HEADROOM,
+					      len);
+
+		if (!xdp_page)
+			return NULL;
+	} else if (unlikely(headroom < virtnet_get_headroom(vi))) {
+		xdp_room = SKB_DATA_ALIGN(VIRTIO_XDP_HEADROOM +
+					  sizeof(struct skb_shared_info));
+		if (*len + xdp_room > PAGE_SIZE)
+			return NULL;
+
+		xdp_page = alloc_page(GFP_ATOMIC);
+		if (!xdp_page)
+			return NULL;
+
+		memcpy(page_address(xdp_page) + VIRTIO_XDP_HEADROOM,
+		       page_address(*page) + offset, *len);
+	} else {
+		return page_address(*page) + offset;
+	}
+
+	*frame_sz = PAGE_SIZE;
+
+	put_page(*page);
+
+	*page = xdp_page;
+
+	return page_address(xdp_page) + VIRTIO_XDP_HEADROOM;
+}
+
 static struct sk_buff *receive_mergeable(struct net_device *dev,
 					 struct virtnet_info *vi,
 					 struct receive_queue *rq,
@@ -1111,7 +1184,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 	unsigned int headroom = mergeable_ctx_to_headroom(ctx);
 	unsigned int tailroom = headroom ? sizeof(struct skb_shared_info) : 0;
 	unsigned int room = SKB_DATA_ALIGN(headroom + tailroom);
-	unsigned int frame_sz, xdp_room;
+	unsigned int frame_sz;
 	int err;
 
 	head_skb = NULL;
@@ -1141,65 +1214,11 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 		u32 act;
 		int i;
 
-		/* Transient failure which in theory could occur if
-		 * in-flight packets from before XDP was enabled reach
-		 * the receive path after XDP is loaded.
-		 */
-		if (unlikely(hdr->hdr.gso_type))
+		data = mergeable_xdp_prepare(vi, rq, xdp_prog, ctx, &frame_sz, &num_buf, &page,
+					     offset, &len, hdr);
+		if (!data)
 			goto err_xdp;
 
-		/* Now XDP core assumes frag size is PAGE_SIZE, but buffers
-		 * with headroom may add hole in truesize, which
-		 * make their length exceed PAGE_SIZE. So we disabled the
-		 * hole mechanism for xdp. See add_recvbuf_mergeable().
-		 */
-		frame_sz = truesize;
-
-		/* This happens when headroom is not enough because
-		 * of the buffer was prefilled before XDP is set.
-		 * This should only happen for the first several packets.
-		 * In fact, vq reset can be used here to help us clean up
-		 * the prefilled buffers, but many existing devices do not
-		 * support it, and we don't want to bother users who are
-		 * using xdp normally.
-		 */
-		if (!xdp_prog->aux->xdp_has_frags &&
-		    (num_buf > 1 || headroom < virtnet_get_headroom(vi))) {
-			/* linearize data for XDP */
-			xdp_page = xdp_linearize_page(rq, &num_buf,
-						      page, offset,
-						      VIRTIO_XDP_HEADROOM,
-						      &len);
-			frame_sz = PAGE_SIZE;
-
-			if (!xdp_page)
-				goto err_xdp;
-			offset = VIRTIO_XDP_HEADROOM;
-
-			put_page(page);
-			page = xdp_page;
-		} else if (unlikely(headroom < virtnet_get_headroom(vi))) {
-			xdp_room = SKB_DATA_ALIGN(VIRTIO_XDP_HEADROOM +
-						  sizeof(struct skb_shared_info));
-			if (len + xdp_room > PAGE_SIZE)
-				goto err_xdp;
-
-			xdp_page = alloc_page(GFP_ATOMIC);
-			if (!xdp_page)
-				goto err_xdp;
-
-			memcpy(page_address(xdp_page) + VIRTIO_XDP_HEADROOM,
-			       page_address(page) + offset, len);
-			frame_sz = PAGE_SIZE;
-			offset = VIRTIO_XDP_HEADROOM;
-
-			put_page(page);
-			page = xdp_page;
-		} else {
-			xdp_page = page;
-		}
-
-		data = page_address(xdp_page) + offset;
 		err = virtnet_build_xdp_buff_mrg(dev, vi, rq, &xdp, data, len, frame_sz,
 						 &num_buf, &xdp_frags_truesz, stats);
 		if (unlikely(err))