From patchwork Mon Apr 22 01:53:57 2024
X-Patchwork-Submitter: Xuan Zhuo
X-Patchwork-Id: 13637591
From: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
To: virtualization@lists.linux.dev
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, "David S. Miller",
    Eric Dumazet, Jakub Kicinski, Paolo Abeni, netdev@vger.kernel.org
Subject: [PATCH vhost v1 1/7] virtio_ring: introduce dma map api for page
Date: Mon, 22 Apr 2024 09:53:57 +0800
Message-Id: <20240422015403.72526-2-xuanzhuo@linux.alibaba.com>
In-Reply-To: <20240422015403.72526-1-xuanzhuo@linux.alibaba.com>

The virtio-net big mode sq will use these APIs to map the pages:
dma_addr_t virtqueue_dma_map_page_attrs(struct virtqueue *_vq, struct page *page,
					size_t offset, size_t size,
					enum dma_data_direction dir,
					unsigned long attrs);
void virtqueue_dma_unmap_page_attrs(struct virtqueue *_vq, dma_addr_t addr,
				    size_t size, enum dma_data_direction dir,
				    unsigned long attrs);

Signed-off-by: Xuan Zhuo
Acked-by: Jason Wang
---
 drivers/virtio/virtio_ring.c | 52 ++++++++++++++++++++++++++++++++++++
 include/linux/virtio.h       |  7 +++++
 2 files changed, 59 insertions(+)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 70de1a9a81a3..1b9fb680cff3 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -3100,6 +3100,58 @@ void virtqueue_dma_unmap_single_attrs(struct virtqueue *_vq, dma_addr_t addr,
 }
 EXPORT_SYMBOL_GPL(virtqueue_dma_unmap_single_attrs);
 
+/**
+ * virtqueue_dma_map_page_attrs - map DMA for _vq
+ * @_vq: the struct virtqueue we're talking about.
+ * @page: the page to map
+ * @offset: the offset inside the page
+ * @size: the size of the region to map
+ * @dir: DMA direction
+ * @attrs: DMA attrs
+ *
+ * The caller calls this to do dma mapping in advance. The DMA address can be
+ * passed to this _vq when it is in premapped mode.
+ *
+ * Returns the DMA address. The caller should check it with
+ * virtqueue_dma_mapping_error().
+ */
+dma_addr_t virtqueue_dma_map_page_attrs(struct virtqueue *_vq, struct page *page,
+					size_t offset, size_t size,
+					enum dma_data_direction dir,
+					unsigned long attrs)
+{
+	struct vring_virtqueue *vq = to_vvq(_vq);
+
+	if (!vq->use_dma_api)
+		return page_to_phys(page) + offset;
+
+	return dma_map_page_attrs(vring_dma_dev(vq), page, offset, size, dir, attrs);
+}
+EXPORT_SYMBOL_GPL(virtqueue_dma_map_page_attrs);
+
+/**
+ * virtqueue_dma_unmap_page_attrs - unmap DMA for _vq
+ * @_vq: the struct virtqueue we're talking about.
+ * @addr: the dma address to unmap
+ * @size: the size of the buffer
+ * @dir: DMA direction
+ * @attrs: DMA attrs
+ *
+ * Unmap the address that was mapped by the virtqueue_dma_map_* APIs.
+ */
+void virtqueue_dma_unmap_page_attrs(struct virtqueue *_vq, dma_addr_t addr,
+				    size_t size, enum dma_data_direction dir,
+				    unsigned long attrs)
+{
+	struct vring_virtqueue *vq = to_vvq(_vq);
+
+	if (!vq->use_dma_api)
+		return;
+
+	dma_unmap_page_attrs(vring_dma_dev(vq), addr, size, dir, attrs);
+}
+EXPORT_SYMBOL_GPL(virtqueue_dma_unmap_page_attrs);
+
 /**
  * virtqueue_dma_mapping_error - check dma address
  * @_vq: the struct virtqueue we're talking about.
diff --git a/include/linux/virtio.h b/include/linux/virtio.h
index 26c4325aa373..d6c699553979 100644
--- a/include/linux/virtio.h
+++ b/include/linux/virtio.h
@@ -228,6 +228,13 @@ dma_addr_t virtqueue_dma_map_single_attrs(struct virtqueue *_vq, void *ptr, size
 void virtqueue_dma_unmap_single_attrs(struct virtqueue *_vq, dma_addr_t addr,
 				      size_t size, enum dma_data_direction dir,
 				      unsigned long attrs);
+dma_addr_t virtqueue_dma_map_page_attrs(struct virtqueue *_vq, struct page *page,
+					size_t offset, size_t size,
+					enum dma_data_direction dir,
+					unsigned long attrs);
+void virtqueue_dma_unmap_page_attrs(struct virtqueue *_vq, dma_addr_t addr,
+				    size_t size, enum dma_data_direction dir,
+				    unsigned long attrs);
 int virtqueue_dma_mapping_error(struct virtqueue *_vq, dma_addr_t addr);
 bool virtqueue_dma_need_sync(struct virtqueue *_vq, dma_addr_t addr);
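For reference, the intended calling pattern for this pair, once a queue is in
premapped mode, is roughly the following (a minimal sketch; the helper names
and error handling are illustrative, not part of this patch):

/* Illustrative sketch: map a page for a premapped vq, verify the mapping,
 * and unmap it once the buffer comes back. PAGE_SIZE and DMA_FROM_DEVICE
 * match the receive-path usage later in this series.
 */
static int example_map_page(struct virtqueue *vq, struct page *page,
			    dma_addr_t *addr)
{
	*addr = virtqueue_dma_map_page_attrs(vq, page, 0, PAGE_SIZE,
					     DMA_FROM_DEVICE, 0);
	if (virtqueue_dma_mapping_error(vq, *addr))
		return -ENOMEM;

	return 0;
}

static void example_unmap_page(struct virtqueue *vq, dma_addr_t addr)
{
	virtqueue_dma_unmap_page_attrs(vq, addr, PAGE_SIZE,
				       DMA_FROM_DEVICE, 0);
}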
From patchwork Mon Apr 22 01:53:58 2024
X-Patchwork-Submitter: Xuan Zhuo
X-Patchwork-Id: 13637584
From: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
To: virtualization@lists.linux.dev
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, "David S. Miller",
    Eric Dumazet, Jakub Kicinski, Paolo Abeni, netdev@vger.kernel.org
Subject: [PATCH vhost v1 2/7] virtio_ring: enable premapped mode whatever use_dma_api
Date: Mon, 22 Apr 2024 09:53:58 +0800
Message-Id: <20240422015403.72526-3-xuanzhuo@linux.alibaba.com>
In-Reply-To: <20240422015403.72526-1-xuanzhuo@linux.alibaba.com>

Now that we have the virtio DMA APIs, a driver can use premapped mode
regardless of whether the virtio core itself uses the DMA API. So remove
the use_dma_api check from virtqueue_set_dma_premapped().

Signed-off-by: Xuan Zhuo
Acked-by: Jason Wang
---
 drivers/virtio/virtio_ring.c | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 1b9fb680cff3..1924402e0d84 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -2730,7 +2730,7 @@ EXPORT_SYMBOL_GPL(virtqueue_resize);
  *
  * Returns zero or a negative error.
  * 0: success.
- * -EINVAL: vring does not use the dma api, so we can not enable premapped mode.
+ * -EINVAL: the vq is in use.
  */
 int virtqueue_set_dma_premapped(struct virtqueue *_vq)
 {
@@ -2746,11 +2746,6 @@ int virtqueue_set_dma_premapped(struct virtqueue *_vq)
 		return -EINVAL;
 	}
 
-	if (!vq->use_dma_api) {
-		END_USE(vq);
-		return -EINVAL;
-	}
-
 	vq->premapped = true;
 	vq->do_unmap = false;
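The expected driver-side pattern after this change is to enable premapped
mode right after the virtqueues are created, before any buffer is added (a
probe-time sketch; the rq name is illustrative, not from this patch):

/* Sketch: switch a freshly created rx vq to premapped mode. Called
 * before any buffer is added, the only remaining failure mode is a
 * vq that is already in use (-EINVAL).
 */
err = virtqueue_set_dma_premapped(rq->vq);
if (err)
	return err;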
From patchwork Mon Apr 22 01:53:59 2024
X-Patchwork-Submitter: Xuan Zhuo
X-Patchwork-Id: 13637585
X-Patchwork-Delegate: kuba@kernel.org
From: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
To: virtualization@lists.linux.dev
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, "David S. Miller",
    Eric Dumazet, Jakub Kicinski, Paolo Abeni, netdev@vger.kernel.org
Subject: [PATCH vhost v1 3/7] virtio_net: replace private by pp struct inside page
Date: Mon, 22 Apr 2024 09:53:59 +0800
Message-Id: <20240422015403.72526-4-xuanzhuo@linux.alibaba.com>
In-Reply-To: <20240422015403.72526-1-xuanzhuo@linux.alibaba.com>

Now, we chain the pages of big mode by the page's private field. But a
subsequent patch aims to make big mode support premapped mode, which
requires additional space to store the DMA address. Within the struct
page sub-struct that contains 'private', there is no suitable field for
storing a DMA address:

	struct {	/* Page cache and anonymous pages */
		/**
		 * @lru: Pageout list, eg. active_list protected by
		 * lruvec->lru_lock.  Sometimes used as a generic list
		 * by the page owner.
		 */
		union {
			struct list_head lru;

			/* Or, for the Unevictable "LRU list" slot */
			struct {
				/* Always even, to negate PageTail */
				void *__filler;
				/* Count page's or folio's mlocks */
				unsigned int mlock_count;
			};

			/* Or, free page */
			struct list_head buddy_list;
			struct list_head pcp_list;
		};
		/* See page-flags.h for PAGE_MAPPING_FLAGS */
		struct address_space *mapping;
		union {
			pgoff_t index;		/* Our offset within mapping. */
			unsigned long share;	/* share count for fsdax */
		};
		/**
		 * @private: Mapping-private opaque data.
		 * Usually used for buffer_heads if PagePrivate.
		 * Used for swp_entry_t if PageSwapCache.
		 * Indicates order in the buddy system if PageBuddy.
		 */
		unsigned long private;
	};

But the page_pool sub-struct has a field called dma_addr that is
appropriate for storing a DMA address, and that sub-struct is the one
used by the netstack. That works to our advantage:

	struct {	/* page_pool used by netstack */
		/**
		 * @pp_magic: magic value to avoid recycling non
		 * page_pool allocated pages.
		 */
		unsigned long pp_magic;
		struct page_pool *pp;
		unsigned long _pp_mapping_pad;
		unsigned long dma_addr;
		atomic_long_t pp_ref_count;
	};

Since we should only use fields from one and the same sub-struct, this
patch replaces the "private" chaining with "pp".
Signed-off-by: Xuan Zhuo
Acked-by: Jason Wang
---
 drivers/net/virtio_net.c | 32 ++++++++++++++++++++------------
 1 file changed, 20 insertions(+), 12 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index c22d1118a133..2c7a67ad4789 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -48,6 +48,14 @@ module_param(napi_tx, bool, 0644);
 
 #define VIRTIO_XDP_FLAG	BIT(0)
 
+/* In big mode, we use a page chain to manage multiple pages submitted to the
+ * ring. These pages are connected using page.pp. The following two macros are
+ * used to obtain the next page in a page chain and to set the next page in
+ * the page chain.
+ */
+#define page_chain_next(p)	((struct page *)((p)->pp))
+#define page_chain_add(p, n)	((p)->pp = (void *)n)
+
 /* RX packet size EWMA. The average packet size is used to determine the packet
  * buffer size when refilling RX rings. As the entire RX ring may be refilled
  * at once, the weight is chosen so that the EWMA will be insensitive to short-
@@ -191,7 +199,7 @@ struct receive_queue {
 
 	struct virtnet_interrupt_coalesce intr_coal;
 
-	/* Chain pages by the private ptr. */
+	/* Chain pages by the page's pp struct. */
 	struct page *pages;
 
 	/* Average packet length for mergeable receive buffers. */
@@ -432,16 +440,16 @@ skb_vnet_common_hdr(struct sk_buff *skb)
 }
 
 /*
- * private is used to chain pages for big packets, put the whole
- * most recent used list in the beginning for reuse
+ * put the whole most recent used list in the beginning for reuse
  */
 static void give_pages(struct receive_queue *rq, struct page *page)
 {
 	struct page *end;
 
 	/* Find end of list, sew whole thing into vi->rq.pages. */
-	for (end = page; end->private; end = (struct page *)end->private);
-	end->private = (unsigned long)rq->pages;
+	for (end = page; page_chain_next(end); end = page_chain_next(end));
+
+	page_chain_add(end, rq->pages);
 	rq->pages = page;
 }
 
@@ -450,9 +458,9 @@ static struct page *get_a_page(struct receive_queue *rq, gfp_t gfp_mask)
 	struct page *p = rq->pages;
 
 	if (p) {
-		rq->pages = (struct page *)p->private;
-		/* clear private here, it is used to chain pages */
-		p->private = 0;
+		rq->pages = page_chain_next(p);
+		/* clear chain here, it is used to chain pages */
+		page_chain_add(p, NULL);
 	} else
 		p = alloc_page(gfp_mask);
 	return p;
@@ -609,7 +617,7 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
 		if (unlikely(!skb))
 			return NULL;
 
-		page = (struct page *)page->private;
+		page = page_chain_next(page);
 		if (page)
 			give_pages(rq, page);
 		goto ok;
@@ -657,7 +665,7 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
 		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page, offset,
 				frag_size, truesize);
 		len -= frag_size;
-		page = (struct page *)page->private;
+		page = page_chain_next(page);
 		offset = 0;
 	}
 
@@ -1909,7 +1917,7 @@ static int add_recvbuf_big(struct virtnet_info *vi, struct receive_queue *rq,
 		sg_set_buf(&rq->sg[i], page_address(first), PAGE_SIZE);
 
 		/* chain new page in list head to match sg */
-		first->private = (unsigned long)list;
+		page_chain_add(first, list);
 		list = first;
 	}
 
@@ -1929,7 +1937,7 @@ static int add_recvbuf_big(struct virtnet_info *vi, struct receive_queue *rq,
 	sg_set_buf(&rq->sg[1], p + offset, PAGE_SIZE - offset);
 
 	/* chain first in list head */
-	first->private = (unsigned long)list;
+	page_chain_add(first, list);
 
 	err = virtqueue_add_inbuf(rq->vq, rq->sg, vi->big_packets_num_skbfrags + 2,
 				  first, gfp);
 	if (err < 0)
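As a reading aid, the push/pop discipline that give_pages()/get_a_page()
implement on top of these macros can be sketched in isolation (illustrative
only, not part of the patch):

/* Illustrative sketch of the page-chain discipline: a singly linked
 * stack threaded through page.pp.
 */
static void chain_push(struct page **head, struct page *p)
{
	page_chain_add(p, *head);	/* p->pp = old head */
	*head = p;
}

static struct page *chain_pop(struct page **head)
{
	struct page *p = *head;

	if (p) {
		*head = page_chain_next(p);
		page_chain_add(p, NULL);	/* detach popped page */
	}

	return p;
}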
From patchwork Mon Apr 22 01:54:00 2024
X-Patchwork-Submitter: Xuan Zhuo
X-Patchwork-Id: 13637590
X-Patchwork-Delegate: kuba@kernel.org
From: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
To: virtualization@lists.linux.dev
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, "David S. Miller",
    Eric Dumazet, Jakub Kicinski, Paolo Abeni, netdev@vger.kernel.org
Subject: [PATCH vhost v1 4/7] virtio_net: big mode support premapped
Date: Mon, 22 Apr 2024 09:54:00 +0800
Message-Id: <20240422015403.72526-5-xuanzhuo@linux.alibaba.com>
In-Reply-To: <20240422015403.72526-1-xuanzhuo@linux.alibaba.com>

In big mode, pre-mapping DMA is beneficial because if the pages are not
used, we can reuse them without needing to unmap and remap. We require
space to store the DMA address.
We use the page.dma_addr field, from the pp sub-struct inside struct
page, to store the DMA address. Every page retrieved from get_a_page()
is mapped, and its DMA address is stored in page.dma_addr. When a page
is returned to the chain, we check its DMA status; if it is not mapped
(it may have been unmapped), we remap it before returning it to the
chain.

Based on the following points, we do not use page pool to manage these
pages:

1. virtio-net uses the DMA APIs wrapped by the virtio core. Therefore,
   we can only prevent the page pool from performing DMA operations, and
   let the driver perform DMA operations on the allocated pages.
2. But when the page pool releases the page, we get no chance to do the
   dma unmap.
3. A solution to #2 is to do the dma unmap every time before putting the
   page back to the page pool. (This is actually wasteful; we do not
   need to unmap that frequently.)
4. But there is another problem: we still need to use page.dma_addr to
   save the dma address. Using page.dma_addr while using page pool is
   unsafe behavior.

More:
https://lore.kernel.org/all/CACGkMEu=Aok9z2imB_c5qVuujSh=vjj1kx12fy9N7hqyi+M5Ow@mail.gmail.com/

Signed-off-by: Xuan Zhuo
---
 drivers/net/virtio_net.c | 123 ++++++++++++++++++++++++++++++++++-----
 1 file changed, 108 insertions(+), 15 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 2c7a67ad4789..75f33bbfd5fa 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -439,6 +439,81 @@ skb_vnet_common_hdr(struct sk_buff *skb)
 	return (struct virtio_net_common_hdr *)skb->cb;
 }
 
+static void sg_fill_dma(struct scatterlist *sg, dma_addr_t addr, u32 len)
+{
+	sg->dma_address = addr;
+	sg->length = len;
+}
+
+/* For pages submitted to the ring, we need to record their dma for unmap.
+ * Here, we use the page.dma_addr and page.pp_magic fields to store the dma
+ * address.
+ */
+static void page_chain_set_dma(struct page *p, dma_addr_t addr)
+{
+	if (sizeof(dma_addr_t) > sizeof(unsigned long)) {
+		p->dma_addr = lower_32_bits(addr);
+		p->pp_magic = upper_32_bits(addr);
+	} else {
+		p->dma_addr = addr;
+	}
+}
+
+static dma_addr_t page_chain_get_dma(struct page *p)
+{
+	dma_addr_t addr;
+
+	if (sizeof(dma_addr_t) > sizeof(unsigned long)) {
+		addr = p->pp_magic;
+		return (addr << 32) + p->dma_addr;
+	} else {
+		return p->dma_addr;
+	}
+}
+
+static void page_chain_sync_for_cpu(struct receive_queue *rq, struct page *p)
+{
+	virtqueue_dma_sync_single_range_for_cpu(rq->vq, page_chain_get_dma(p),
+						0, PAGE_SIZE, DMA_FROM_DEVICE);
+}
+
+static void page_chain_unmap(struct receive_queue *rq, struct page *p, bool sync)
+{
+	int attr = 0;
+
+	if (!sync)
+		attr = DMA_ATTR_SKIP_CPU_SYNC;
+
+	virtqueue_dma_unmap_page_attrs(rq->vq, page_chain_get_dma(p), PAGE_SIZE,
+				       DMA_FROM_DEVICE, attr);
+}
+
+static int page_chain_map(struct receive_queue *rq, struct page *p)
+{
+	dma_addr_t addr;
+
+	addr = virtqueue_dma_map_page_attrs(rq->vq, p, 0, PAGE_SIZE, DMA_FROM_DEVICE, 0);
+	if (virtqueue_dma_mapping_error(rq->vq, addr))
+		return -ENOMEM;
+
+	page_chain_set_dma(p, addr);
+	return 0;
+}
+
+static void page_chain_release(struct receive_queue *rq)
+{
+	struct page *p, *n;
+
+	for (p = rq->pages; p; p = n) {
+		n = page_chain_next(p);
+
+		page_chain_unmap(rq, p, true);
+		__free_pages(p, 0);
+	}
+
+	rq->pages = NULL;
+}
+
 /*
  * put the whole most recent used list in the beginning for reuse
  */
@@ -461,8 +536,15 @@ static struct page *get_a_page(struct receive_queue *rq, gfp_t gfp_mask)
 		rq->pages = page_chain_next(p);
 		/* clear chain here, it is used to chain pages */
 		page_chain_add(p, NULL);
-	} else
+	} else {
 		p = alloc_page(gfp_mask);
+
+		if (page_chain_map(rq, p)) {
+			__free_pages(p, 0);
+			return NULL;
+		}
+	}
+
 	return p;
 }
 
@@ -617,9 +699,12 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
 		if (unlikely(!skb))
 			return NULL;
 
-		page = page_chain_next(page);
-		if (page)
-			give_pages(rq, page);
+		if (!vi->mergeable_rx_bufs) {
+			page_chain_unmap(rq, page, false);
+			page = page_chain_next(page);
+			if (page)
+				give_pages(rq, page);
+		}
 		goto ok;
 	}
 
@@ -662,6 +747,9 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
 	BUG_ON(offset >= PAGE_SIZE);
 	while (len) {
 		unsigned int frag_size = min((unsigned)PAGE_SIZE - offset, len);
+
+		page_chain_unmap(rq, page, !offset);
+
 		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page, offset,
 				frag_size, truesize);
 		len -= frag_size;
@@ -828,7 +916,8 @@ static void virtnet_rq_unmap_free_buf(struct virtqueue *vq, void *buf)
 
 	rq = &vi->rq[i];
 
-	if (rq->do_dma)
+	/* Skip the unmap for big mode. */
+	if (!vi->big_packets || vi->mergeable_rx_bufs)
 		virtnet_rq_unmap(rq, buf, 0);
 
 	virtnet_rq_free_buf(vi, rq, buf);
@@ -1351,8 +1440,12 @@ static struct sk_buff *receive_big(struct net_device *dev,
 				   struct virtnet_rq_stats *stats)
 {
 	struct page *page = buf;
-	struct sk_buff *skb =
-		page_to_skb(vi, rq, page, 0, len, PAGE_SIZE, 0);
+	struct sk_buff *skb;
+
+	/* sync the first page; the following code may read this page. */
+	page_chain_sync_for_cpu(rq, page);
+
+	skb = page_to_skb(vi, rq, page, 0, len, PAGE_SIZE, 0);
 
 	u64_stats_add(&stats->bytes, len - vi->hdr_len);
 	if (unlikely(!skb))
@@ -1901,7 +1994,7 @@ static int add_recvbuf_big(struct virtnet_info *vi, struct receive_queue *rq,
 			   gfp_t gfp)
 {
 	struct page *first, *list = NULL;
-	char *p;
+	dma_addr_t p;
 	int i, err, offset;
 
 	sg_init_table(rq->sg, vi->big_packets_num_skbfrags + 2);
@@ -1914,7 +2007,7 @@ static int add_recvbuf_big(struct virtnet_info *vi, struct receive_queue *rq,
 			give_pages(rq, list);
 			return -ENOMEM;
 		}
-		sg_set_buf(&rq->sg[i], page_address(first), PAGE_SIZE);
+		sg_fill_dma(&rq->sg[i], page_chain_get_dma(first), PAGE_SIZE);
 
 		/* chain new page in list head to match sg */
 		page_chain_add(first, list);
@@ -1926,15 +2019,16 @@ static int add_recvbuf_big(struct virtnet_info *vi, struct receive_queue *rq,
 		give_pages(rq, list);
 		return -ENOMEM;
 	}
-	p = page_address(first);
+
+	p = page_chain_get_dma(first);
 
 	/* rq->sg[0], rq->sg[1] share the same page */
 	/* a separated rq->sg[0] for header - required in case !any_header_sg */
-	sg_set_buf(&rq->sg[0], p, vi->hdr_len);
+	sg_fill_dma(&rq->sg[0], p, vi->hdr_len);
 
 	/* rq->sg[1] for data packet, from offset */
 	offset = sizeof(struct padded_vnet_hdr);
-	sg_set_buf(&rq->sg[1], p + offset, PAGE_SIZE - offset);
+	sg_fill_dma(&rq->sg[1], p + offset, PAGE_SIZE - offset);
 
 	/* chain first in list head */
 	page_chain_add(first, list);
@@ -2136,7 +2230,7 @@ static int virtnet_receive(struct receive_queue *rq, int budget,
 		}
 	} else {
 		while (packets < budget &&
-		       (buf = virtnet_rq_get_buf(rq, &len, NULL)) != NULL) {
+		       (buf = virtqueue_get_buf(rq->vq, &len)) != NULL) {
 			receive_buf(vi, rq, buf, len, NULL, xdp_xmit, &stats);
 			packets++;
 		}
@@ -4257,8 +4351,7 @@ static void _free_receive_bufs(struct virtnet_info *vi)
 	int i;
 
 	for (i = 0; i < vi->max_queue_pairs; i++) {
-		while (vi->rq[i].pages)
-			__free_pages(get_a_page(&vi->rq[i], GFP_KERNEL), 0);
+		page_chain_release(&vi->rq[i]);
 
 		old_prog = rtnl_dereference(vi->rq[i].xdp_prog);
 		RCU_INIT_POINTER(vi->rq[i].xdp_prog, NULL);
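To illustrate the split storage implemented by page_chain_set_dma() and
page_chain_get_dma(): on a 32-bit kernel with a 64-bit dma_addr_t, the low
half of the address lives in page.dma_addr and the high half in
page.pp_magic. A standalone round trip with illustrative values:

/* Round-trip sketch of the split dma address storage (illustrative). */
dma_addr_t addr = 0x123456789abcULL;
unsigned long lo, hi;			/* stand-ins for p->dma_addr, p->pp_magic */

lo = lower_32_bits(addr);		/* 0x56789abc */
hi = upper_32_bits(addr);		/* 0x00001234 */

addr = ((dma_addr_t)hi << 32) + lo;	/* back to 0x123456789abc */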
From patchwork Mon Apr 22 01:54:01 2024
X-Patchwork-Submitter: Xuan Zhuo
X-Patchwork-Id: 13637586
X-Patchwork-Delegate: kuba@kernel.org
From: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
To: virtualization@lists.linux.dev
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, "David S. Miller",
    Eric Dumazet, Jakub Kicinski, Paolo Abeni, netdev@vger.kernel.org
Subject: [PATCH vhost v1 5/7] virtio_net: enable premapped by default
Date: Mon, 22 Apr 2024 09:54:01 +0800
Message-Id: <20240422015403.72526-6-xuanzhuo@linux.alibaba.com>
In-Reply-To: <20240422015403.72526-1-xuanzhuo@linux.alibaba.com>

Currently, the big, mergeable, and small modes all support premapped
mode, so we can now enable premapped mode by default. Furthermore,
virtqueue_set_dma_premapped() must succeed when called immediately after
find_vqs(). Consequently, we can assume that premapped mode is always
enabled.
Signed-off-by: Xuan Zhuo
---
 drivers/net/virtio_net.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 75f33bbfd5fa..1bf49956dce8 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -896,13 +896,9 @@ static void virtnet_rq_set_premapped(struct virtnet_info *vi)
 {
 	int i;
 
-	/* disable for big mode */
-	if (!vi->mergeable_rx_bufs && vi->big_packets)
-		return;
-
 	for (i = 0; i < vi->max_queue_pairs; i++) {
-		if (virtqueue_set_dma_premapped(vi->rq[i].vq))
-			continue;
+		/* error never happens */
+		BUG_ON(virtqueue_set_dma_premapped(vi->rq[i].vq));
 
 		vi->rq[i].do_dma = true;
 	}
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , netdev@vger.kernel.org Subject: [PATCH vhost v1 6/7] virtio_net: rx remove premapped failover code Date: Mon, 22 Apr 2024 09:54:02 +0800 Message-Id: <20240422015403.72526-7-xuanzhuo@linux.alibaba.com> X-Mailer: git-send-email 2.32.0.3.g01195cf9f In-Reply-To: <20240422015403.72526-1-xuanzhuo@linux.alibaba.com> References: <20240422015403.72526-1-xuanzhuo@linux.alibaba.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Git-Hash: 2c089e81de7e X-Patchwork-Delegate: kuba@kernel.org Now, the premapped mode can be enabled unconditionally. So we can remove the failover code. Signed-off-by: Xuan Zhuo Acked-by: Jason Wang --- drivers/net/virtio_net.c | 81 ++++++++++++++++------------------------ 1 file changed, 33 insertions(+), 48 deletions(-) diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index 1bf49956dce8..eff8f16a9fe0 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -221,9 +221,6 @@ struct receive_queue { /* Record the last dma info to free after new pages is allocated. */ struct virtnet_rq_dma *last_dma; - - /* Do dma by self */ - bool do_dma; }; /* This structure can contain rss message with maximum settings for indirection table and keysize @@ -803,7 +800,7 @@ static void *virtnet_rq_get_buf(struct receive_queue *rq, u32 *len, void **ctx) void *buf; buf = virtqueue_get_buf_ctx(rq->vq, len, ctx); - if (buf && rq->do_dma) + if (buf) virtnet_rq_unmap(rq, buf, *len); return buf; @@ -816,11 +813,6 @@ static void virtnet_rq_init_one_sg(struct receive_queue *rq, void *buf, u32 len) u32 offset; void *head; - if (!rq->do_dma) { - sg_init_one(rq->sg, buf, len); - return; - } - head = page_address(rq->alloc_frag.page); offset = buf - head; @@ -846,44 +838,42 @@ static void *virtnet_rq_alloc(struct receive_queue *rq, u32 size, gfp_t gfp) head = page_address(alloc_frag->page); - if (rq->do_dma) { - dma = head; - - /* new pages */ - if (!alloc_frag->offset) { - if (rq->last_dma) { - /* Now, the new page is allocated, the last dma - * will not be used. So the dma can be unmapped - * if the ref is 0. - */ - virtnet_rq_unmap(rq, rq->last_dma, 0); - rq->last_dma = NULL; - } + dma = head; - dma->len = alloc_frag->size - sizeof(*dma); + /* new pages */ + if (!alloc_frag->offset) { + if (rq->last_dma) { + /* Now, the new page is allocated, the last dma + * will not be used. So the dma can be unmapped + * if the ref is 0. + */ + virtnet_rq_unmap(rq, rq->last_dma, 0); + rq->last_dma = NULL; + } - addr = virtqueue_dma_map_single_attrs(rq->vq, dma + 1, - dma->len, DMA_FROM_DEVICE, 0); - if (virtqueue_dma_mapping_error(rq->vq, addr)) - return NULL; + dma->len = alloc_frag->size - sizeof(*dma); - dma->addr = addr; - dma->need_sync = virtqueue_dma_need_sync(rq->vq, addr); + addr = virtqueue_dma_map_single_attrs(rq->vq, dma + 1, + dma->len, DMA_FROM_DEVICE, 0); + if (virtqueue_dma_mapping_error(rq->vq, addr)) + return NULL; - /* Add a reference to dma to prevent the entire dma from - * being released during error handling. This reference - * will be freed after the pages are no longer used. - */ - get_page(alloc_frag->page); - dma->ref = 1; - alloc_frag->offset = sizeof(*dma); + dma->addr = addr; + dma->need_sync = virtqueue_dma_need_sync(rq->vq, addr); - rq->last_dma = dma; - } + /* Add a reference to dma to prevent the entire dma from + * being released during error handling. This reference + * will be freed after the pages are no longer used. 
+ */ + get_page(alloc_frag->page); + dma->ref = 1; + alloc_frag->offset = sizeof(*dma); - ++dma->ref; + rq->last_dma = dma; } + ++dma->ref; + buf = head + alloc_frag->offset; get_page(alloc_frag->page); @@ -896,12 +886,9 @@ static void virtnet_rq_set_premapped(struct virtnet_info *vi) { int i; - for (i = 0; i < vi->max_queue_pairs; i++) { + for (i = 0; i < vi->max_queue_pairs; i++) /* error never happen */ BUG_ON(virtqueue_set_dma_premapped(vi->rq[i].vq)); - - vi->rq[i].do_dma = true; - } } static void virtnet_rq_unmap_free_buf(struct virtqueue *vq, void *buf) @@ -1978,8 +1965,7 @@ static int add_recvbuf_small(struct virtnet_info *vi, struct receive_queue *rq, err = virtqueue_add_inbuf_ctx(rq->vq, rq->sg, 1, buf, ctx, gfp); if (err < 0) { - if (rq->do_dma) - virtnet_rq_unmap(rq, buf, 0); + virtnet_rq_unmap(rq, buf, 0); put_page(virt_to_head_page(buf)); } @@ -2094,8 +2080,7 @@ static int add_recvbuf_mergeable(struct virtnet_info *vi, ctx = mergeable_len_to_ctx(len + room, headroom); err = virtqueue_add_inbuf_ctx(rq->vq, rq->sg, 1, buf, ctx, gfp); if (err < 0) { - if (rq->do_dma) - virtnet_rq_unmap(rq, buf, 0); + virtnet_rq_unmap(rq, buf, 0); put_page(virt_to_head_page(buf)); } @@ -4368,7 +4353,7 @@ static void free_receive_page_frags(struct virtnet_info *vi) int i; for (i = 0; i < vi->max_queue_pairs; i++) if (vi->rq[i].alloc_frag.page) { - if (vi->rq[i].do_dma && vi->rq[i].last_dma) + if (vi->rq[i].last_dma) virtnet_rq_unmap(&vi->rq[i], vi->rq[i].last_dma, 0); put_page(vi->rq[i].alloc_frag.page); } From patchwork Mon Apr 22 01:54:03 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xuan Zhuo X-Patchwork-Id: 13637588 X-Patchwork-Delegate: kuba@kernel.org Received: from out30-110.freemail.mail.aliyun.com (out30-110.freemail.mail.aliyun.com [115.124.30.110]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A17C7205E33 for ; Mon, 22 Apr 2024 01:54:21 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=115.124.30.110 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1713750863; cv=none; b=AmpZrBp90dDTEaGIvtMVmjBJQIjjclgaMjaLgxmhwFkQLTMPvbLJ8RlkIz5l1CBx9aTGyrC4Urrc048fWJ7BbTcALtZpa1le3qIYqsL7qGYGb4Cm+fW6fPhjQ3zXzFMrNAn05ZEUEupn4Nl/j5SKWvzzBdcdcGPYa5xvb+uDoIY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1713750863; c=relaxed/simple; bh=RI9oBferiUfhUZJIfb5WszLvNGIFPD09N8cV69d/2jA=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=LajpH675+zWYfwD9Br9QX4CDbkqkV5lN2Gg9lVO17aN9VfSYX8MBxPd/yYNjy4ZqpYn0r5YT8Ve6iZq7tb2rMfs3D/0SAYkq0kreewqzh0JLFeMcCf9XtjAYFRr8MR+E2m8Cc/4vmX4ffACdab6Ln28BN9dJWA8WqBxNyykqbZA= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.alibaba.com; spf=pass smtp.mailfrom=linux.alibaba.com; dkim=pass (1024-bit key) header.d=linux.alibaba.com header.i=@linux.alibaba.com header.b=RXcdOw8p; arc=none smtp.client-ip=115.124.30.110 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.alibaba.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.alibaba.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=linux.alibaba.com header.i=@linux.alibaba.com header.b="RXcdOw8p" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; 
From patchwork Mon Apr 22 01:54:03 2024
X-Patchwork-Submitter: Xuan Zhuo
X-Patchwork-Id: 13637588
X-Patchwork-Delegate: kuba@kernel.org
From: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
To: virtualization@lists.linux.dev
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, "David S. Miller",
    Eric Dumazet, Jakub Kicinski, Paolo Abeni, netdev@vger.kernel.org
Subject: [PATCH vhost v1 7/7] virtio_net: remove the misleading comment
Date: Mon, 22 Apr 2024 09:54:03 +0800
Message-Id: <20240422015403.72526-8-xuanzhuo@linux.alibaba.com>
In-Reply-To: <20240422015403.72526-1-xuanzhuo@linux.alibaba.com>

We actually call build_skb() without copying data, so the comment is
misleading. Remove it.

Signed-off-by: Xuan Zhuo
---
 drivers/net/virtio_net.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index eff8f16a9fe0..a8dac48a1be7 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -690,7 +690,6 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
 
 	shinfo_size = SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
 
-	/* copy small packet so we can reuse these pages */
 	if (!NET_IP_ALIGN && len > GOOD_COPY_LEN && tailroom >= shinfo_size) {
 		skb = virtnet_build_skb(buf, truesize, p - buf, len);
 		if (unlikely(!skb))