From patchwork Wed Mar 31 07:11:36 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Xuan Zhuo
X-Patchwork-Id: 12174499
X-Patchwork-Delegate: kuba@kernel.org
From: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
To: netdev@vger.kernel.org
Cc: "Michael S. Tsirkin", Jason Wang, "David S. Miller", Jakub Kicinski,
 Björn Töpel, Magnus Karlsson, Jonathan Lemon, Alexei Starovoitov,
 Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
 Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song, KP Singh,
 virtualization@lists.linux-foundation.org, bpf@vger.kernel.org, Dust Li
Subject: [PATCH net-next v3 5/8] virtio-net: xsk zero copy xmit support xsk
 unaligned mode
Date: Wed, 31 Mar 2021 15:11:36 +0800
Message-Id: <20210331071139.15473-6-xuanzhuo@linux.alibaba.com>
X-Mailer: git-send-email 2.31.0
In-Reply-To: <20210331071139.15473-1-xuanzhuo@linux.alibaba.com>
References: <20210331071139.15473-1-xuanzhuo@linux.alibaba.com>
X-Mailing-List: bpf@vger.kernel.org

In xsk unaligned mode, the frame pointed to by desc may span two
consecutive pages, but never more than two.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Reviewed-by: Dust Li
---
 drivers/net/virtio_net.c | 30 ++++++++++++++++++++++++------
 1 file changed, 24 insertions(+), 6 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index c8a317a93ef7..259fafcf6028 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -2562,24 +2562,42 @@ static void virtnet_xsk_check_space(struct send_queue *sq)
 static int virtnet_xsk_xmit(struct send_queue *sq, struct xsk_buff_pool *pool,
 			    struct xdp_desc *desc)
 {
+	u32 offset, n, i, copy, copied;
 	struct virtnet_info *vi;
 	struct page *page;
 	void *data;
-	u32 offset;
+	int err, m;
 	u64 addr;
-	int err;
 
 	vi = sq->vq->vdev->priv;
 	addr = desc->addr;
 
+	data = xsk_buff_raw_get_data(pool, addr);
+	offset = offset_in_page(data);
+
+	m = desc->len - (PAGE_SIZE - offset);
+	/* xsk unaligned mode, desc may span two pages */
+	if (m > 0)
+		n = 3;
+	else
+		n = 2;
 
-	sg_init_table(sq->sg, 2);
+	sg_init_table(sq->sg, n);
 	sg_set_buf(sq->sg, &xsk_hdr, vi->hdr_len);
 
-	page = xsk_buff_xdp_get_page(pool, addr);
-	sg_set_page(sq->sg + 1, page, desc->len, offset);
+	copied = 0;
+	for (i = 1; i < n; ++i) {
+		copy = min_t(int, desc->len - copied, PAGE_SIZE - offset);
+
+		page = xsk_buff_xdp_get_page(pool, addr + copied);
+
+		sg_set_page(sq->sg + i, page, copy, offset);
+		copied += copy;
+		if (offset)
+			offset = 0;
+	}
+
+	err = virtqueue_add_outbuf(sq->vq, sq->sg, n, NULL, GFP_ATOMIC);
 	if (unlikely(err))
 		sq->xsk.last_desc = *desc;