From patchwork Mon Apr 22 07:24:02 2024
X-Patchwork-Submitter: Xuan Zhuo
X-Patchwork-Id: 13637823
From: Xuan Zhuo
To: virtualization@lists.linux.dev
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, "David S. Miller",
    Eric Dumazet, Jakub Kicinski, Paolo Abeni, netdev@vger.kernel.org
Subject: [PATCH vhost v2 1/7] virtio_ring: introduce dma map api for page
Date: Mon, 22 Apr 2024 15:24:02 +0800
Message-Id: <20240422072408.126821-2-xuanzhuo@linux.alibaba.com>
In-Reply-To: <20240422072408.126821-1-xuanzhuo@linux.alibaba.com>
References: <20240422072408.126821-1-xuanzhuo@linux.alibaba.com>
X-Git-Hash: aa968d36d784

The virtio-net big mode sq (send queue) will use these APIs to map the
pages:

	dma_addr_t virtqueue_dma_map_page_attrs(struct virtqueue *_vq,
						struct page *page,
						size_t offset, size_t size,
						enum dma_data_direction dir,
						unsigned long attrs);
	void virtqueue_dma_unmap_page_attrs(struct virtqueue *_vq,
					    dma_addr_t addr, size_t size,
					    enum dma_data_direction dir,
					    unsigned long attrs);
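A minimal usage sketch (illustrative only, not part of the patch; the
function name and error handling are placeholders): map one page in
advance, check the result, and unmap it once the device is done.

	static int example_map_one_page(struct virtqueue *vq, struct page *page)
	{
		dma_addr_t addr;

		/* Map in advance; the returned address may be passed to a
		 * vq that is in premapped mode.
		 */
		addr = virtqueue_dma_map_page_attrs(vq, page, 0, PAGE_SIZE,
						    DMA_FROM_DEVICE, 0);
		if (virtqueue_dma_mapping_error(vq, addr))
			return -ENOMEM;

		/* ... submit addr to the ring and wait for the device ... */

		virtqueue_dma_unmap_page_attrs(vq, addr, PAGE_SIZE,
					       DMA_FROM_DEVICE, 0);
		return 0;
	}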
Signed-off-by: Xuan Zhuo
Acked-by: Jason Wang
---
 drivers/virtio/virtio_ring.c | 52 ++++++++++++++++++++++++++++++++++++
 include/linux/virtio.h       |  7 +++++
 2 files changed, 59 insertions(+)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 70de1a9a81a3..1b9fb680cff3 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -3100,6 +3100,58 @@ void virtqueue_dma_unmap_single_attrs(struct virtqueue *_vq, dma_addr_t addr,
 }
 EXPORT_SYMBOL_GPL(virtqueue_dma_unmap_single_attrs);
 
+/**
+ * virtqueue_dma_map_page_attrs - map DMA for _vq
+ * @_vq: the struct virtqueue we're talking about.
+ * @page: the page to do dma
+ * @offset: the offset inside the page
+ * @size: the size of the page to do dma
+ * @dir: DMA direction
+ * @attrs: DMA Attrs
+ *
+ * The caller calls this to do dma mapping in advance. The DMA address can be
+ * passed to this _vq when it is in pre-mapped mode.
+ *
+ * Returns the DMA address. The caller should check it with
+ * virtqueue_dma_mapping_error().
+ */
+dma_addr_t virtqueue_dma_map_page_attrs(struct virtqueue *_vq, struct page *page,
+					size_t offset, size_t size,
+					enum dma_data_direction dir,
+					unsigned long attrs)
+{
+	struct vring_virtqueue *vq = to_vvq(_vq);
+
+	if (!vq->use_dma_api)
+		return page_to_phys(page) + offset;
+
+	return dma_map_page_attrs(vring_dma_dev(vq), page, offset, size, dir, attrs);
+}
+EXPORT_SYMBOL_GPL(virtqueue_dma_map_page_attrs);
+
+/**
+ * virtqueue_dma_unmap_page_attrs - unmap DMA for _vq
+ * @_vq: the struct virtqueue we're talking about.
+ * @addr: the dma address to unmap
+ * @size: the size of the buffer
+ * @dir: DMA direction
+ * @attrs: DMA Attrs
+ *
+ * Unmap the address that is mapped by the virtqueue_dma_map_* APIs.
+ */
+void virtqueue_dma_unmap_page_attrs(struct virtqueue *_vq, dma_addr_t addr,
+				    size_t size, enum dma_data_direction dir,
+				    unsigned long attrs)
+{
+	struct vring_virtqueue *vq = to_vvq(_vq);
+
+	if (!vq->use_dma_api)
+		return;
+
+	dma_unmap_page_attrs(vring_dma_dev(vq), addr, size, dir, attrs);
+}
+EXPORT_SYMBOL_GPL(virtqueue_dma_unmap_page_attrs);
+
 /**
  * virtqueue_dma_mapping_error - check dma address
  * @_vq: the struct virtqueue we're talking about.
diff --git a/include/linux/virtio.h b/include/linux/virtio.h
index 26c4325aa373..d6c699553979 100644
--- a/include/linux/virtio.h
+++ b/include/linux/virtio.h
@@ -228,6 +228,13 @@ dma_addr_t virtqueue_dma_map_single_attrs(struct virtqueue *_vq, void *ptr, size
 void virtqueue_dma_unmap_single_attrs(struct virtqueue *_vq, dma_addr_t addr,
 				      size_t size, enum dma_data_direction dir,
 				      unsigned long attrs);
+dma_addr_t virtqueue_dma_map_page_attrs(struct virtqueue *_vq, struct page *page,
+					size_t offset, size_t size,
+					enum dma_data_direction dir,
+					unsigned long attrs);
+void virtqueue_dma_unmap_page_attrs(struct virtqueue *_vq, dma_addr_t addr,
+				    size_t size, enum dma_data_direction dir,
+				    unsigned long attrs);
 int virtqueue_dma_mapping_error(struct virtqueue *_vq, dma_addr_t addr);
 bool virtqueue_dma_need_sync(struct virtqueue *_vq, dma_addr_t addr);

From patchwork Mon Apr 22 07:24:03 2024
X-Patchwork-Submitter: Xuan Zhuo
X-Patchwork-Id: 13637824
From: Xuan Zhuo
To: virtualization@lists.linux.dev
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, "David S. Miller",
    Eric Dumazet, Jakub Kicinski, Paolo Abeni, netdev@vger.kernel.org
Subject: [PATCH vhost v2 2/7] virtio_ring: enable premapped mode whatever use_dma_api
Date: Mon, 22 Apr 2024 15:24:03 +0800
Message-Id: <20240422072408.126821-3-xuanzhuo@linux.alibaba.com>
In-Reply-To: <20240422072408.126821-1-xuanzhuo@linux.alibaba.com>
References: <20240422072408.126821-1-xuanzhuo@linux.alibaba.com>
X-Git-Hash: aa968d36d784

Now that we have the virtio DMA APIs, a driver can use premapped mode
whether or not the virtio core uses the DMA API. So remove the
use_dma_api check from virtqueue_set_dma_premapped().

Signed-off-by: Xuan Zhuo
Acked-by: Jason Wang
---
 drivers/virtio/virtio_ring.c | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 1b9fb680cff3..1924402e0d84 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -2730,7 +2730,7 @@ EXPORT_SYMBOL_GPL(virtqueue_resize);
  *
  * Returns zero or a negative error.
  * 0: success.
- * -EINVAL: vring does not use the dma api, so we can not enable premapped mode.
+ * -EINVAL: the vq is in use.
  */
 int virtqueue_set_dma_premapped(struct virtqueue *_vq)
 {
@@ -2746,11 +2746,6 @@ int virtqueue_set_dma_premapped(struct virtqueue *_vq)
 		return -EINVAL;
 	}
 
-	if (!vq->use_dma_api) {
-		END_USE(vq);
-		return -EINVAL;
-	}
-
 	vq->premapped = true;
 	vq->do_unmap = false;
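To illustrate the relaxed requirement, a sketch (the helper name is
hypothetical): after this patch the call below is expected to succeed
on any transport, with or without the DMA API, as long as no buffer
has been added to the vq yet.

	static int example_enable_premapped(struct virtqueue *vq)
	{
		int err;

		/* Call right after find_vqs(), before any virtqueue_add_*();
		 * a vq that is already in use still returns -EINVAL.
		 */
		err = virtqueue_set_dma_premapped(vq);
		if (err)
			return err;

		/* From now on, the driver submits DMA addresses that it
		 * mapped itself, e.g. via virtqueue_dma_map_page_attrs().
		 */
		return 0;
	}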
From patchwork Mon Apr 22 07:24:04 2024
X-Patchwork-Submitter: Xuan Zhuo
X-Patchwork-Id: 13637825
X-Patchwork-Delegate: kuba@kernel.org
From: Xuan Zhuo
To: virtualization@lists.linux.dev
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, "David S. Miller",
    Eric Dumazet, Jakub Kicinski, Paolo Abeni, netdev@vger.kernel.org
Subject: [PATCH vhost v2 3/7] virtio_net: replace private by pp struct inside page
Date: Mon, 22 Apr 2024 15:24:04 +0800
Message-Id: <20240422072408.126821-4-xuanzhuo@linux.alibaba.com>
In-Reply-To: <20240422072408.126821-1-xuanzhuo@linux.alibaba.com>
References: <20240422072408.126821-1-xuanzhuo@linux.alibaba.com>
X-Git-Hash: aa968d36d784

Now, we chain the pages of big mode by the page's private variable.
But a subsequent patch aims to make big mode support premapped mode,
which requires additional space to store the dma addr.

Within the sub-struct of struct page that contains 'private', there is
no suitable variable for storing the DMA addr:

		struct {	/* Page cache and anonymous pages */
			/**
			 * @lru: Pageout list, eg. active_list protected by
			 * lruvec->lru_lock.  Sometimes used as a generic list
			 * by the page owner.
			 */
			union {
				struct list_head lru;

				/* Or, for the Unevictable "LRU list" slot */
				struct {
					/* Always even, to negate PageTail */
					void *__filler;
					/* Count page's or folio's mlocks */
					unsigned int mlock_count;
				};

				/* Or, free page */
				struct list_head buddy_list;
				struct list_head pcp_list;
			};
			/* See page-flags.h for PAGE_MAPPING_FLAGS */
			struct address_space *mapping;
			union {
				pgoff_t index;		/* Our offset within mapping. */
				unsigned long share;	/* share count for fsdax */
			};
			/**
			 * @private: Mapping-private opaque data.
			 * Usually used for buffer_heads if PagePrivate.
			 * Used for swp_entry_t if PageSwapCache.
			 * Indicates order in the buddy system if PageBuddy.
			 */
			unsigned long private;
		};

But the page_pool sub-struct has a variable called dma_addr that is
appropriate for storing a dma addr, and that sub-struct is used by the
netstack, which works to our advantage:

		struct {	/* page_pool used by netstack */
			/**
			 * @pp_magic: magic value to avoid recycling non
			 * page_pool allocated pages.
			 */
			unsigned long pp_magic;
			struct page_pool *pp;
			unsigned long _pp_mapping_pad;
			unsigned long dma_addr;
			atomic_long_t pp_ref_count;
		};

Since we should use variables from the same sub-struct, this patch
replaces "private" with "pp".
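For illustration, a sketch of walking such a chain with the
page_chain_next() helper introduced in the diff below (the function
here is hypothetical):

	static unsigned int example_page_chain_len(struct page *head)
	{
		unsigned int n = 0;

		/* page_chain_add(p, NULL) terminates a chain, so a NULL
		 * next pointer marks the end.
		 */
		for (; head; head = page_chain_next(head))
			n++;

		return n;
	}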
Signed-off-by: Xuan Zhuo
Acked-by: Jason Wang
---
 drivers/net/virtio_net.c | 32 ++++++++++++++++++++------------
 1 file changed, 20 insertions(+), 12 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index c22d1118a133..2c7a67ad4789 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -48,6 +48,14 @@ module_param(napi_tx, bool, 0644);
 
 #define VIRTIO_XDP_FLAG	BIT(0)
 
+/* In big mode, we use a page chain to manage multiple pages submitted to the
+ * ring. These pages are connected using page.pp. The following two macros are
+ * used to obtain the next page in a page chain and set the next page in the
+ * page chain.
+ */
+#define page_chain_next(p)	((struct page *)((p)->pp))
+#define page_chain_add(p, n)	((p)->pp = (void *)n)
+
 /* RX packet size EWMA. The average packet size is used to determine the packet
  * buffer size when refilling RX rings. As the entire RX ring may be refilled
  * at once, the weight is chosen so that the EWMA will be insensitive to short-
@@ -191,7 +199,7 @@ struct receive_queue {
 
 	struct virtnet_interrupt_coalesce intr_coal;
 
-	/* Chain pages by the private ptr. */
+	/* Chain pages by the page's pp struct. */
 	struct page *pages;
 
 	/* Average packet length for mergeable receive buffers. */
@@ -432,16 +440,16 @@ skb_vnet_common_hdr(struct sk_buff *skb)
 }
 
 /*
- * private is used to chain pages for big packets, put the whole
- * most recent used list in the beginning for reuse
+ * put the whole most recent used list in the beginning for reuse
  */
 static void give_pages(struct receive_queue *rq, struct page *page)
 {
 	struct page *end;
 
 	/* Find end of list, sew whole thing into vi->rq.pages. */
-	for (end = page; end->private; end = (struct page *)end->private);
-	end->private = (unsigned long)rq->pages;
+	for (end = page; page_chain_next(end); end = page_chain_next(end));
+
+	page_chain_add(end, rq->pages);
 	rq->pages = page;
 }
 
@@ -450,9 +458,9 @@ static struct page *get_a_page(struct receive_queue *rq, gfp_t gfp_mask)
 	struct page *p = rq->pages;
 
 	if (p) {
-		rq->pages = (struct page *)p->private;
-		/* clear private here, it is used to chain pages */
-		p->private = 0;
+		rq->pages = page_chain_next(p);
+		/* clear chain here, it is used to chain pages */
+		page_chain_add(p, NULL);
 	} else
 		p = alloc_page(gfp_mask);
 	return p;
@@ -609,7 +617,7 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
 		if (unlikely(!skb))
 			return NULL;
 
-		page = (struct page *)page->private;
+		page = page_chain_next(page);
 		if (page)
 			give_pages(rq, page);
 		goto ok;
@@ -657,7 +665,7 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
 		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page, offset,
 				frag_size, truesize);
 		len -= frag_size;
-		page = (struct page *)page->private;
+		page = page_chain_next(page);
 		offset = 0;
 	}
 
@@ -1909,7 +1917,7 @@ static int add_recvbuf_big(struct virtnet_info *vi, struct receive_queue *rq,
 		sg_set_buf(&rq->sg[i], page_address(first), PAGE_SIZE);
 
 		/* chain new page in list head to match sg */
-		first->private = (unsigned long)list;
+		page_chain_add(first, list);
 		list = first;
 	}
 
@@ -1929,7 +1937,7 @@ static int add_recvbuf_big(struct virtnet_info *vi, struct receive_queue *rq,
 	sg_set_buf(&rq->sg[1], p + offset, PAGE_SIZE - offset);
 
 	/* chain first in list head */
-	first->private = (unsigned long)list;
+	page_chain_add(first, list);
 
 	err = virtqueue_add_inbuf(rq->vq, rq->sg, vi->big_packets_num_skbfrags + 2,
 				  first, gfp);
 	if (err < 0)

From patchwork Mon Apr 22 07:24:05 2024
X-Patchwork-Submitter: Xuan Zhuo
X-Patchwork-Id: 13637826
X-Patchwork-Delegate: kuba@kernel.org
From: Xuan Zhuo
To: virtualization@lists.linux.dev
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, "David S. Miller",
    Eric Dumazet, Jakub Kicinski, Paolo Abeni, netdev@vger.kernel.org
Subject: [PATCH vhost v2 4/7] virtio_net: big mode support premapped
Date: Mon, 22 Apr 2024 15:24:05 +0800
Message-Id: <20240422072408.126821-5-xuanzhuo@linux.alibaba.com>
In-Reply-To: <20240422072408.126821-1-xuanzhuo@linux.alibaba.com>
References: <20240422072408.126821-1-xuanzhuo@linux.alibaba.com>
X-Git-Hash: aa968d36d784

In big mode, pre-mapping DMA is beneficial because if the pages are
not used, we can reuse them without needing to unmap and remap. We
require space to store the DMA address.
I use page.dma_addr, from the pp sub-struct inside struct page, to
store the DMA address. Every page retrieved from get_a_page() is
mapped, and its DMA address is stored in page.dma_addr. When a page is
returned to the chain, we check the DMA status; if it is not mapped
(potentially having been unmapped), we remap it before returning it to
the chain.

Based on the following points, we do not use page pool to manage these
pages:

1. virtio-net uses the DMA APIs wrapped by the virtio core. Therefore,
   we can only prevent the page pool from performing DMA operations,
   and let the driver perform DMA operations on the allocated pages.
2. But when the page pool releases the page, we have no chance to
   execute the dma unmap.
3. A solution to #2 is to execute the dma unmap every time before
   putting the page back to the page pool. (This is actually wasteful;
   we don't need to unmap that frequently.)
4. But there is another problem: we still need to use page.dma_addr to
   save the dma address. Using page.dma_addr while using the page pool
   is unsafe behavior.

More: https://lore.kernel.org/all/CACGkMEu=Aok9z2imB_c5qVuujSh=vjj1kx12fy9N7hqyi+M5Ow@mail.gmail.com/
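To make the intended lifecycle concrete, a sketch (illustrative only;
page_chain_map() and page_chain_release() are the helpers introduced
in the diff below, while the wrapper itself is hypothetical):

	static struct page *example_get_mapped_page(struct receive_queue *rq,
						    gfp_t gfp)
	{
		struct page *p = alloc_page(gfp);

		if (!p)
			return NULL;

		/* Map once; the address lands in page.dma_addr (plus
		 * page.pp_magic for the high bits when dma_addr_t is wider
		 * than unsigned long).
		 */
		if (page_chain_map(rq, p)) {
			__free_pages(p, 0);
			return NULL;
		}

		/* The page may now cycle ring -> chain -> ring without
		 * being remapped; it is unmapped only when it finally
		 * leaves the chain, e.g. in page_chain_release().
		 */
		return p;
	}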
Signed-off-by: Xuan Zhuo
---
 drivers/net/virtio_net.c | 123 ++++++++++++++++++++++++++++++++++-----
 1 file changed, 108 insertions(+), 15 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 2c7a67ad4789..d4f5e65b247e 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -439,6 +439,81 @@ skb_vnet_common_hdr(struct sk_buff *skb)
 	return (struct virtio_net_common_hdr *)skb->cb;
 }
 
+static void sg_fill_dma(struct scatterlist *sg, dma_addr_t addr, u32 len)
+{
+	sg->dma_address = addr;
+	sg->length = len;
+}
+
+/* For pages submitted to the ring, we need to record their dma for unmap.
+ * Here, we use the page.dma_addr and page.pp_magic to store the dma
+ * address.
+ */
+static void page_chain_set_dma(struct page *p, dma_addr_t addr)
+{
+	if (sizeof(dma_addr_t) > sizeof(unsigned long)) {
+		p->dma_addr = lower_32_bits(addr);
+		p->pp_magic = upper_32_bits(addr);
+	} else {
+		p->dma_addr = addr;
+	}
+}
+
+static dma_addr_t page_chain_get_dma(struct page *p)
+{
+	if (sizeof(dma_addr_t) > sizeof(unsigned long)) {
+		u64 addr;
+
+		addr = p->pp_magic;
+		return (addr << 32) + p->dma_addr;
+	} else {
+		return p->dma_addr;
+	}
+}
+
+static void page_chain_sync_for_cpu(struct receive_queue *rq, struct page *p)
+{
+	virtqueue_dma_sync_single_range_for_cpu(rq->vq, page_chain_get_dma(p),
+						0, PAGE_SIZE, DMA_FROM_DEVICE);
+}
+
+static void page_chain_unmap(struct receive_queue *rq, struct page *p, bool sync)
+{
+	int attr = 0;
+
+	if (!sync)
+		attr = DMA_ATTR_SKIP_CPU_SYNC;
+
+	virtqueue_dma_unmap_page_attrs(rq->vq, page_chain_get_dma(p), PAGE_SIZE,
+				       DMA_FROM_DEVICE, attr);
+}
+
+static int page_chain_map(struct receive_queue *rq, struct page *p)
+{
+	dma_addr_t addr;
+
+	addr = virtqueue_dma_map_page_attrs(rq->vq, p, 0, PAGE_SIZE, DMA_FROM_DEVICE, 0);
+	if (virtqueue_dma_mapping_error(rq->vq, addr))
+		return -ENOMEM;
+
+	page_chain_set_dma(p, addr);
+	return 0;
+}
+
+static void page_chain_release(struct receive_queue *rq)
+{
+	struct page *p, *n;
+
+	for (p = rq->pages; p; p = n) {
+		n = page_chain_next(p);
+
+		page_chain_unmap(rq, p, true);
+		__free_pages(p, 0);
+	}
+
+	rq->pages = NULL;
+}
+
 /*
  * put the whole most recent used list in the beginning for reuse
  */
@@ -461,8 +536,15 @@ static struct page *get_a_page(struct receive_queue *rq, gfp_t gfp_mask)
 		rq->pages = page_chain_next(p);
 		/* clear chain here, it is used to chain pages */
 		page_chain_add(p, NULL);
-	} else
+	} else {
 		p = alloc_page(gfp_mask);
+
+		if (page_chain_map(rq, p)) {
+			__free_pages(p, 0);
+			return NULL;
+		}
+	}
+
 	return p;
 }
 
@@ -617,9 +699,12 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
 		if (unlikely(!skb))
 			return NULL;
 
-		page = page_chain_next(page);
-		if (page)
-			give_pages(rq, page);
+		if (!vi->mergeable_rx_bufs) {
+			page_chain_unmap(rq, page, false);
+			page = page_chain_next(page);
+			if (page)
+				give_pages(rq, page);
+		}
 		goto ok;
 	}
 
@@ -662,6 +747,9 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
 	BUG_ON(offset >= PAGE_SIZE);
 	while (len) {
 		unsigned int frag_size = min((unsigned)PAGE_SIZE - offset, len);
+
+		page_chain_unmap(rq, page, !offset);
+
 		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page, offset,
 				frag_size, truesize);
 		len -= frag_size;
@@ -828,7 +916,8 @@ static void virtnet_rq_unmap_free_buf(struct virtqueue *vq, void *buf)
 
 	rq = &vi->rq[i];
 
-	if (rq->do_dma)
+	/* Skip the unmap for big mode. */
+	if (!vi->big_packets || vi->mergeable_rx_bufs)
 		virtnet_rq_unmap(rq, buf, 0);
 
 	virtnet_rq_free_buf(vi, rq, buf);
@@ -1351,8 +1440,12 @@ static struct sk_buff *receive_big(struct net_device *dev,
 				   struct virtnet_rq_stats *stats)
 {
 	struct page *page = buf;
-	struct sk_buff *skb =
-		page_to_skb(vi, rq, page, 0, len, PAGE_SIZE, 0);
+	struct sk_buff *skb;
+
+	/* Sync the first page; the following code may read it. */
+	page_chain_sync_for_cpu(rq, page);
+
+	skb = page_to_skb(vi, rq, page, 0, len, PAGE_SIZE, 0);
 
 	u64_stats_add(&stats->bytes, len - vi->hdr_len);
 	if (unlikely(!skb))
@@ -1901,7 +1994,7 @@ static int add_recvbuf_big(struct virtnet_info *vi, struct receive_queue *rq,
 			   gfp_t gfp)
 {
 	struct page *first, *list = NULL;
-	char *p;
+	dma_addr_t p;
 	int i, err, offset;
 
 	sg_init_table(rq->sg, vi->big_packets_num_skbfrags + 2);
@@ -1914,7 +2007,7 @@ static int add_recvbuf_big(struct virtnet_info *vi, struct receive_queue *rq,
 			give_pages(rq, list);
 			return -ENOMEM;
 		}
-		sg_set_buf(&rq->sg[i], page_address(first), PAGE_SIZE);
+		sg_fill_dma(&rq->sg[i], page_chain_get_dma(first), PAGE_SIZE);
 
 		/* chain new page in list head to match sg */
 		page_chain_add(first, list);
@@ -1926,15 +2019,16 @@ static int add_recvbuf_big(struct virtnet_info *vi, struct receive_queue *rq,
 		give_pages(rq, list);
 		return -ENOMEM;
 	}
-	p = page_address(first);
+
+	p = page_chain_get_dma(first);
 
 	/* rq->sg[0], rq->sg[1] share the same page */
 	/* a separated rq->sg[0] for header - required in case !any_header_sg */
-	sg_set_buf(&rq->sg[0], p, vi->hdr_len);
+	sg_fill_dma(&rq->sg[0], p, vi->hdr_len);
 
 	/* rq->sg[1] for data packet, from offset */
 	offset = sizeof(struct padded_vnet_hdr);
-	sg_set_buf(&rq->sg[1], p + offset, PAGE_SIZE - offset);
+	sg_fill_dma(&rq->sg[1], p + offset, PAGE_SIZE - offset);
 
 	/* chain first in list head */
 	page_chain_add(first, list);
@@ -2136,7 +2230,7 @@ static int virtnet_receive(struct receive_queue *rq, int budget,
 		}
 	} else {
 		while (packets < budget &&
-		       (buf = virtnet_rq_get_buf(rq, &len, NULL)) != NULL) {
+		       (buf = virtqueue_get_buf(rq->vq, &len)) != NULL) {
 			receive_buf(vi, rq, buf, len, NULL, xdp_xmit, &stats);
 			packets++;
 		}
@@ -4257,8 +4351,7 @@ static void _free_receive_bufs(struct virtnet_info *vi)
 	int i;
 
 	for (i = 0; i < vi->max_queue_pairs; i++) {
-		while (vi->rq[i].pages)
-			__free_pages(get_a_page(&vi->rq[i], GFP_KERNEL), 0);
+		page_chain_release(&vi->rq[i]);
 
 		old_prog = rtnl_dereference(vi->rq[i].xdp_prog);
 		RCU_INIT_POINTER(vi->rq[i].xdp_prog, NULL);

From patchwork Mon Apr 22 07:24:06 2024
X-Patchwork-Submitter: Xuan Zhuo
X-Patchwork-Id: 13637827
X-Patchwork-Delegate: kuba@kernel.org
From: Xuan Zhuo
To: virtualization@lists.linux.dev
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, "David S. Miller",
    Eric Dumazet, Jakub Kicinski, Paolo Abeni, netdev@vger.kernel.org
Subject: [PATCH vhost v2 5/7] virtio_net: enable premapped by default
Date: Mon, 22 Apr 2024 15:24:06 +0800
Message-Id: <20240422072408.126821-6-xuanzhuo@linux.alibaba.com>
In-Reply-To: <20240422072408.126821-1-xuanzhuo@linux.alibaba.com>
References: <20240422072408.126821-1-xuanzhuo@linux.alibaba.com>
X-Git-Hash: aa968d36d784

Currently, big, merge, and small modes all support the premapped mode.
We can now enable premapped mode by default. Furthermore,
virtqueue_set_dma_premapped() must succeed when called immediately
after find_vqs(). Consequently, we can assume that premapped mode is
always enabled.
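A sketch of the ordering this relies on (illustrative; the wrapper
name is hypothetical, virtnet_find_vqs() is the existing setup path):

	static int example_init_vqs(struct virtnet_info *vi)
	{
		int i, err;

		err = virtnet_find_vqs(vi);
		if (err)
			return err;

		/* No buffer has been added yet, so this cannot fail: the
		 * rq vqs switch to premapped mode before first use.
		 */
		for (i = 0; i < vi->max_queue_pairs; i++)
			BUG_ON(virtqueue_set_dma_premapped(vi->rq[i].vq));

		return 0;
	}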
Signed-off-by: Xuan Zhuo
---
 drivers/net/virtio_net.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index d4f5e65b247e..f04b10426b8f 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -896,13 +896,9 @@ static void virtnet_rq_set_premapped(struct virtnet_info *vi)
 {
 	int i;
 
-	/* disable for big mode */
-	if (!vi->mergeable_rx_bufs && vi->big_packets)
-		return;
-
 	for (i = 0; i < vi->max_queue_pairs; i++) {
-		if (virtqueue_set_dma_premapped(vi->rq[i].vq))
-			continue;
+		/* error never happens */
+		BUG_ON(virtqueue_set_dma_premapped(vi->rq[i].vq));
 
 		vi->rq[i].do_dma = true;
 	}

From patchwork Mon Apr 22 07:24:07 2024
X-Patchwork-Submitter: Xuan Zhuo
X-Patchwork-Id: 13637828
X-Patchwork-Delegate: kuba@kernel.org
From: Xuan Zhuo
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , netdev@vger.kernel.org Subject: [PATCH vhost v2 6/7] virtio_net: rx remove premapped failover code Date: Mon, 22 Apr 2024 15:24:07 +0800 Message-Id: <20240422072408.126821-7-xuanzhuo@linux.alibaba.com> X-Mailer: git-send-email 2.32.0.3.g01195cf9f In-Reply-To: <20240422072408.126821-1-xuanzhuo@linux.alibaba.com> References: <20240422072408.126821-1-xuanzhuo@linux.alibaba.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Git-Hash: aa968d36d784 X-Patchwork-Delegate: kuba@kernel.org Now, the premapped mode can be enabled unconditionally. So we can remove the failover code. Signed-off-by: Xuan Zhuo Acked-by: Jason Wang --- drivers/net/virtio_net.c | 81 ++++++++++++++++------------------------ 1 file changed, 33 insertions(+), 48 deletions(-) diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index f04b10426b8f..848e93ccf2ef 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -221,9 +221,6 @@ struct receive_queue { /* Record the last dma info to free after new pages is allocated. */ struct virtnet_rq_dma *last_dma; - - /* Do dma by self */ - bool do_dma; }; /* This structure can contain rss message with maximum settings for indirection table and keysize @@ -803,7 +800,7 @@ static void *virtnet_rq_get_buf(struct receive_queue *rq, u32 *len, void **ctx) void *buf; buf = virtqueue_get_buf_ctx(rq->vq, len, ctx); - if (buf && rq->do_dma) + if (buf) virtnet_rq_unmap(rq, buf, *len); return buf; @@ -816,11 +813,6 @@ static void virtnet_rq_init_one_sg(struct receive_queue *rq, void *buf, u32 len) u32 offset; void *head; - if (!rq->do_dma) { - sg_init_one(rq->sg, buf, len); - return; - } - head = page_address(rq->alloc_frag.page); offset = buf - head; @@ -846,44 +838,42 @@ static void *virtnet_rq_alloc(struct receive_queue *rq, u32 size, gfp_t gfp) head = page_address(alloc_frag->page); - if (rq->do_dma) { - dma = head; - - /* new pages */ - if (!alloc_frag->offset) { - if (rq->last_dma) { - /* Now, the new page is allocated, the last dma - * will not be used. So the dma can be unmapped - * if the ref is 0. - */ - virtnet_rq_unmap(rq, rq->last_dma, 0); - rq->last_dma = NULL; - } + dma = head; - dma->len = alloc_frag->size - sizeof(*dma); + /* new pages */ + if (!alloc_frag->offset) { + if (rq->last_dma) { + /* Now, the new page is allocated, the last dma + * will not be used. So the dma can be unmapped + * if the ref is 0. + */ + virtnet_rq_unmap(rq, rq->last_dma, 0); + rq->last_dma = NULL; + } - addr = virtqueue_dma_map_single_attrs(rq->vq, dma + 1, - dma->len, DMA_FROM_DEVICE, 0); - if (virtqueue_dma_mapping_error(rq->vq, addr)) - return NULL; + dma->len = alloc_frag->size - sizeof(*dma); - dma->addr = addr; - dma->need_sync = virtqueue_dma_need_sync(rq->vq, addr); + addr = virtqueue_dma_map_single_attrs(rq->vq, dma + 1, + dma->len, DMA_FROM_DEVICE, 0); + if (virtqueue_dma_mapping_error(rq->vq, addr)) + return NULL; - /* Add a reference to dma to prevent the entire dma from - * being released during error handling. This reference - * will be freed after the pages are no longer used. - */ - get_page(alloc_frag->page); - dma->ref = 1; - alloc_frag->offset = sizeof(*dma); + dma->addr = addr; + dma->need_sync = virtqueue_dma_need_sync(rq->vq, addr); - rq->last_dma = dma; - } + /* Add a reference to dma to prevent the entire dma from + * being released during error handling. This reference + * will be freed after the pages are no longer used. 
+		 */
+		get_page(alloc_frag->page);
+		dma->ref = 1;
+		alloc_frag->offset = sizeof(*dma);
 
-		++dma->ref;
+		rq->last_dma = dma;
 	}
 
+	++dma->ref;
+
 	buf = head + alloc_frag->offset;
 
 	get_page(alloc_frag->page);
@@ -896,12 +886,9 @@ static void virtnet_rq_set_premapped(struct virtnet_info *vi)
 {
 	int i;
 
-	for (i = 0; i < vi->max_queue_pairs; i++) {
+	for (i = 0; i < vi->max_queue_pairs; i++)
 		/* error never happens */
 		BUG_ON(virtqueue_set_dma_premapped(vi->rq[i].vq));
-
-		vi->rq[i].do_dma = true;
-	}
 }
 
 static void virtnet_rq_unmap_free_buf(struct virtqueue *vq, void *buf)
@@ -1978,8 +1965,7 @@ static int add_recvbuf_small(struct virtnet_info *vi, struct receive_queue *rq,
 
 	err = virtqueue_add_inbuf_ctx(rq->vq, rq->sg, 1, buf, ctx, gfp);
 	if (err < 0) {
-		if (rq->do_dma)
-			virtnet_rq_unmap(rq, buf, 0);
+		virtnet_rq_unmap(rq, buf, 0);
 		put_page(virt_to_head_page(buf));
 	}
 
@@ -2094,8 +2080,7 @@ static int add_recvbuf_mergeable(struct virtnet_info *vi,
 	ctx = mergeable_len_to_ctx(len + room, headroom);
 	err = virtqueue_add_inbuf_ctx(rq->vq, rq->sg, 1, buf, ctx, gfp);
 	if (err < 0) {
-		if (rq->do_dma)
-			virtnet_rq_unmap(rq, buf, 0);
+		virtnet_rq_unmap(rq, buf, 0);
 		put_page(virt_to_head_page(buf));
 	}
 
@@ -4368,7 +4353,7 @@ static void free_receive_page_frags(struct virtnet_info *vi)
 	int i;
 	for (i = 0; i < vi->max_queue_pairs; i++)
 		if (vi->rq[i].alloc_frag.page) {
-			if (vi->rq[i].do_dma && vi->rq[i].last_dma)
+			if (vi->rq[i].last_dma)
 				virtnet_rq_unmap(&vi->rq[i], vi->rq[i].last_dma, 0);
 			put_page(vi->rq[i].alloc_frag.page);
 		}

From patchwork Mon Apr 22 07:24:08 2024
X-Patchwork-Submitter: Xuan Zhuo
X-Patchwork-Id: 13637829
X-Patchwork-Delegate: kuba@kernel.org
From: Xuan Zhuo
To: virtualization@lists.linux.dev
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, "David S. Miller",
    Eric Dumazet, Jakub Kicinski, Paolo Abeni, netdev@vger.kernel.org
Subject: [PATCH vhost v2 7/7] virtio_net: remove the misleading comment
Date: Mon, 22 Apr 2024 15:24:08 +0800
Message-Id: <20240422072408.126821-8-xuanzhuo@linux.alibaba.com>
In-Reply-To: <20240422072408.126821-1-xuanzhuo@linux.alibaba.com>
References: <20240422072408.126821-1-xuanzhuo@linux.alibaba.com>
X-Git-Hash: aa968d36d784

We actually call build_skb() without copying data; the comment is
misleading, so remove it.

Signed-off-by: Xuan Zhuo
Acked-by: Jason Wang
---
 drivers/net/virtio_net.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 848e93ccf2ef..ae15254a673b 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -690,7 +690,6 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
 
 	shinfo_size = SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
 
-	/* copy small packet so we can reuse these pages */
 	if (!NET_IP_ALIGN && len > GOOD_COPY_LEN && tailroom >= shinfo_size) {
 		skb = virtnet_build_skb(buf, truesize, p - buf, len);
 		if (unlikely(!skb))