From patchwork Tue Oct 29 08:46:12 2024
From: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
To: netdev@vger.kernel.org
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, Eugenio Pérez,
    Andrew Lunn, "David S. Miller", Eric Dumazet, Jakub Kicinski,
    Paolo Abeni, virtualization@lists.linux.dev, "Si-Wei Liu",
    Darren Kenny
Subject: [PATCH net-next v1 1/4] virtio-net: fix overflow inside virtnet_rq_alloc
Date: Tue, 29 Oct 2024 16:46:12 +0800
Message-Id: <20241029084615.91049-2-xuanzhuo@linux.alibaba.com>
In-Reply-To: <20241029084615.91049-1-xuanzhuo@linux.alibaba.com>

When the frag gets only a single page, a regression can show up in the
VM. In particular, if the sysctl net.core.high_order_alloc_disable
value is 1, the frag always gets a single page on refill, which can
lead to reliable crashes or scp failures (scp of a 100M file to the
VM).

The issue is that struct virtnet_rq_dma takes up 16 bytes at the
beginning of a new frag. When the frag size is larger than PAGE_SIZE,
everything is fine. However, if the frag is only one page and the total
size of the buffer plus virtnet_rq_dma is larger than one page, an
overflow may occur.

Commit f9dac92ba908 ("virtio_ring: enable premapped mode whatever
use_dma_api") introduced this problem, and we reverted some commits to
work around it in the last Linux release. Now we enable premapped mode
again and fix the bug directly: when the frag does not have enough
room, we reduce the buffer len so that the buffer and the
virtnet_rq_dma header fit within a single page.
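To illustrate the arithmetic, here is a minimal userspace sketch (not
kernel code): the 4096-byte PAGE_SIZE and the 16-byte size of struct
virtnet_rq_dma are assumptions taken from the description above, and
"room" is folded into len for brevity.

#include <stdio.h>
#include <stddef.h>

#define PAGE_SZ 4096 /* assumed PAGE_SIZE */
#define DMA_HDR   16 /* assumed sizeof(struct virtnet_rq_dma) */

int main(void)
{
	size_t frag_size = PAGE_SZ; /* one page: high_order_alloc_disable=1 */
	size_t offset = 0;          /* fresh page, header not yet placed */
	size_t len = PAGE_SZ - 8;   /* requested buffer size, for example */

	/* the bug: the buffer starts after the header, so it would end at */
	printf("end = %zu, but the page ends at %d\n", DMA_HDR + len, PAGE_SZ);

	/* the fix: on a fresh page, shrink len when it cannot fit */
	if (!offset && len + DMA_HDR > frag_size)
		len -= DMA_HDR;

	printf("after the fix: end = %zu\n", DMA_HDR + len);
	return 0;
}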
Reported-by: "Si-Wei Liu"
Tested-by: Darren Kenny
Signed-off-by: Xuan Zhuo
---
 drivers/net/virtio_net.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 792e9eadbfc3..d50c1940eb23 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -926,9 +926,6 @@ static void *virtnet_rq_alloc(struct receive_queue *rq, u32 size, gfp_t gfp)
 	void *buf, *head;
 	dma_addr_t addr;
 
-	if (unlikely(!skb_page_frag_refill(size, alloc_frag, gfp)))
-		return NULL;
-
 	head = page_address(alloc_frag->page);
 
 	if (rq->do_dma) {
@@ -2423,6 +2420,9 @@ static int add_recvbuf_small(struct virtnet_info *vi, struct receive_queue *rq,
 	len = SKB_DATA_ALIGN(len) +
 	      SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
 
+	if (unlikely(!skb_page_frag_refill(len, &rq->alloc_frag, gfp)))
+		return -ENOMEM;
+
 	buf = virtnet_rq_alloc(rq, len, gfp);
 	if (unlikely(!buf))
 		return -ENOMEM;
@@ -2525,6 +2525,12 @@ static int add_recvbuf_mergeable(struct virtnet_info *vi,
 	 */
 	len = get_mergeable_buf_len(rq, &rq->mrg_avg_pkt_len, room);
 
+	if (unlikely(!skb_page_frag_refill(len + room, alloc_frag, gfp)))
+		return -ENOMEM;
+
+	if (!alloc_frag->offset && len + room + sizeof(struct virtnet_rq_dma) > alloc_frag->size)
+		len -= sizeof(struct virtnet_rq_dma);
+
 	buf = virtnet_rq_alloc(rq, len + room, gfp);
 	if (unlikely(!buf))
 		return -ENOMEM;

From patchwork Tue Oct 29 08:46:13 2024
From: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
To: netdev@vger.kernel.org
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, Eugenio Pérez,
    Andrew Lunn, "David S. Miller", Eric Dumazet, Jakub Kicinski,
    Paolo Abeni, virtualization@lists.linux.dev, Darren Kenny
Subject: [PATCH net-next v1 2/4] virtio_net: big mode skip the unmap check
Date: Tue, 29 Oct 2024 16:46:13 +0800
Message-Id: <20241029084615.91049-3-xuanzhuo@linux.alibaba.com>
In-Reply-To: <20241029084615.91049-1-xuanzhuo@linux.alibaba.com>

Big mode in virtio-net never enables premapped mode, so there is no
need to check whether buffers must be unmapped. A subsequent commit
will remove the failover code that handled a failure to enable
premapped mode for mergeable and small mode, so remove the do_dma
check from the big mode path now.
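For illustration, a userspace sketch of the replacement condition
(struct vi_flags and needs_unmap() are hypothetical names; only the
boolean expression mirrors the patch):

#include <stdbool.h>
#include <stdio.h>

struct vi_flags { bool big_packets; bool mergeable_rx_bufs; };

/* unmapping (premapped mode) applies unless we are in pure big mode */
static bool needs_unmap(const struct vi_flags *vi)
{
	return !vi->big_packets || vi->mergeable_rx_bufs;
}

int main(void)
{
	struct vi_flags small = { false, false };
	struct vi_flags big   = { true,  false };
	struct vi_flags merge = { true,  true  }; /* big_packets can be set
						   * alongside mergeable */
	printf("small=%d big=%d mergeable=%d\n", needs_unmap(&small),
	       needs_unmap(&big), needs_unmap(&merge)); /* 1 0 1 */
	return 0;
}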
Tested-by: Darren Kenny
Signed-off-by: Xuan Zhuo
Acked-by: Jason Wang
---
 drivers/net/virtio_net.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index d50c1940eb23..7557808e8c1f 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -987,7 +987,7 @@ static void virtnet_rq_unmap_free_buf(struct virtqueue *vq, void *buf)
 		return;
 	}
 
-	if (rq->do_dma)
+	if (!vi->big_packets || vi->mergeable_rx_bufs)
 		virtnet_rq_unmap(rq, buf, 0);
 
 	virtnet_rq_free_buf(vi, rq, buf);
@@ -2712,7 +2712,7 @@ static int virtnet_receive_packets(struct virtnet_info *vi,
 		}
 	} else {
 		while (packets < budget &&
-		       (buf = virtnet_rq_get_buf(rq, &len, NULL)) != NULL) {
+		       (buf = virtqueue_get_buf(rq->vq, &len)) != NULL) {
 			receive_buf(vi, rq, buf, len, NULL, xdp_xmit, stats);
 			packets++;
 		}

From patchwork Tue Oct 29 08:46:14 2024
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , virtualization@lists.linux.dev, Darren Kenny Subject: [PATCH net-next v1 3/4] virtio_net: enable premapped mode for merge and small by default Date: Tue, 29 Oct 2024 16:46:14 +0800 Message-Id: <20241029084615.91049-4-xuanzhuo@linux.alibaba.com> X-Mailer: git-send-email 2.32.0.3.g01195cf9f In-Reply-To: <20241029084615.91049-1-xuanzhuo@linux.alibaba.com> References: <20241029084615.91049-1-xuanzhuo@linux.alibaba.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Git-Hash: df8220a5376e X-Patchwork-Delegate: kuba@kernel.org Currently, the virtio core will perform a dma operation for each buffer. Although, the same page may be operated multiple times. In premapped mod, we can perform only one dma operation for the pages of the alloc frag. This is beneficial for the iommu device. kernel command line: intel_iommu=on iommu.passthrough=0 | strict=0 | strict=1 Before | 775496pps | 428614pps After | 1109316pps | 742853pps In the 6.11, we disabled this feature because a regress [1]. Now, we fix the problem and re-enable it. [1]: http://lore.kernel.org/all/8b20cc28-45a9-4643-8e87-ba164a540c0a@oracle.com Tested-by: Darren Kenny Signed-off-by: Xuan Zhuo Acked-by: Jason Wang --- drivers/net/virtio_net.c | 15 +++++++++++++++ 1 file changed, 15 insertions(+) diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index 7557808e8c1f..ea433e9650eb 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -6107,6 +6107,17 @@ static int virtnet_alloc_queues(struct virtnet_info *vi) return -ENOMEM; } +static void virtnet_rq_set_premapped(struct virtnet_info *vi) +{ + int i; + + for (i = 0; i < vi->max_queue_pairs; i++) { + /* error should never happen */ + BUG_ON(virtqueue_set_dma_premapped(vi->rq[i].vq)); + vi->rq[i].do_dma = true; + } +} + static int init_vqs(struct virtnet_info *vi) { int ret; @@ -6120,6 +6131,10 @@ static int init_vqs(struct virtnet_info *vi) if (ret) goto err_free; + /* disable for big mode */ + if (!vi->big_packets || vi->mergeable_rx_bufs) + virtnet_rq_set_premapped(vi); + cpus_read_lock(); virtnet_set_affinity(vi); cpus_read_unlock(); From patchwork Tue Oct 29 08:46:15 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xuan Zhuo X-Patchwork-Id: 13854503 X-Patchwork-Delegate: kuba@kernel.org Received: from out30-113.freemail.mail.aliyun.com (out30-113.freemail.mail.aliyun.com [115.124.30.113]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E5DC120515D for ; Tue, 29 Oct 2024 08:46:28 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=115.124.30.113 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1730191592; cv=none; b=PkbxAxEBdI6w2h8V6s6v6s0L2qe6/bH3SWG8w/wMkq2akeUTUsz1EY681Kgxr+pERFXwGljRcZRMAb1jYC9H/I6d5g0HscVkSgtgdBLxn5od15PD4IGNO6KejVA2Wq5ZekjcjKxqNn83mrZd2ApxnPUxARfxS1LRjIA87EW+h3o= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1730191592; c=relaxed/simple; bh=91mrumVkdlPqeD2+dSYpUpG+Nn8EK9xVP+MqDJ5HTYU=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=guJMHbokFobufqkJmksi+loDh8rHgXtyxrXfJjqacAj05N1j4wCemGrJGGPZJrI5nOsfEwTJwoBfCCxMsWIUTUniTBb/UvIMh9HswTbZysg42m+4U7nBZwXPu47fO/xq/GppTGunT9zRX0w5Hjwm4g23JuWJfxyiBXEXtIKNcXM= 
Tested-by: Darren Kenny
Signed-off-by: Xuan Zhuo
Acked-by: Jason Wang
---
 drivers/net/virtio_net.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 7557808e8c1f..ea433e9650eb 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -6107,6 +6107,17 @@ static int virtnet_alloc_queues(struct virtnet_info *vi)
 	return -ENOMEM;
 }
 
+static void virtnet_rq_set_premapped(struct virtnet_info *vi)
+{
+	int i;
+
+	for (i = 0; i < vi->max_queue_pairs; i++) {
+		/* error should never happen */
+		BUG_ON(virtqueue_set_dma_premapped(vi->rq[i].vq));
+		vi->rq[i].do_dma = true;
+	}
+}
+
 static int init_vqs(struct virtnet_info *vi)
 {
 	int ret;
@@ -6120,6 +6131,10 @@ static int init_vqs(struct virtnet_info *vi)
 	if (ret)
 		goto err_free;
 
+	/* disable for big mode */
+	if (!vi->big_packets || vi->mergeable_rx_bufs)
+		virtnet_rq_set_premapped(vi);
+
 	cpus_read_lock();
 	virtnet_set_affinity(vi);
 	cpus_read_unlock();

From patchwork Tue Oct 29 08:46:15 2024
From: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
To: netdev@vger.kernel.org
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, Eugenio Pérez,
    Andrew Lunn, "David S. Miller", Eric Dumazet, Jakub Kicinski,
    Paolo Abeni, virtualization@lists.linux.dev, Darren Kenny
Subject: [PATCH net-next v1 4/4] virtio_net: rx remove premapped failover code
Date: Tue, 29 Oct 2024 16:46:15 +0800
Message-Id: <20241029084615.91049-5-xuanzhuo@linux.alibaba.com>
In-Reply-To: <20241029084615.91049-1-xuanzhuo@linux.alibaba.com>

Now that premapped mode can be enabled unconditionally, we can remove
the failover code for mergeable and small mode. The virtnet_rq_xxx()
helpers are only used when premapped mode is in use, so a check is
added to each of them to prevent misuse of these APIs.
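A minimal userspace sketch of the guard pattern the patch adds
(assert() stands in for the kernel's BUG_ON(); struct vi_flags and
rq_helper() are hypothetical names):

#include <assert.h>
#include <stdbool.h>

struct vi_flags { bool big_packets; bool mergeable_rx_bufs; };

static void rq_helper(const struct vi_flags *vi)
{
	/* pure big mode must never reach the premapped-only helpers */
	assert(!(vi->big_packets && !vi->mergeable_rx_bufs));
	/* ... the premapped-only work would happen here ... */
}

int main(void)
{
	struct vi_flags merge = { .big_packets = false,
				  .mergeable_rx_bufs = true };
	rq_helper(&merge); /* ok: premapped path */

	/* struct vi_flags big = { true, false };
	 * rq_helper(&big);   -- would trip the assertion, by design */
	return 0;
}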
Tested-by: Darren Kenny
Signed-off-by: Xuan Zhuo
---
 drivers/net/virtio_net.c | 90 ++++++++++++++++++++--------------------
 1 file changed, 44 insertions(+), 46 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index ea433e9650eb..169c1aa87da5 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -356,9 +356,6 @@ struct receive_queue {
 	struct xdp_rxq_info xsk_rxq_info;
 
 	struct xdp_buff **xsk_buffs;
-
-	/* Do dma by self */
-	bool do_dma;
 };
 
 /* This structure can contain rss message with maximum settings for indirection table and keysize
@@ -856,11 +853,14 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
 
 static void virtnet_rq_unmap(struct receive_queue *rq, void *buf, u32 len)
 {
+	struct virtnet_info *vi = rq->vq->vdev->priv;
 	struct page *page = virt_to_head_page(buf);
 	struct virtnet_rq_dma *dma;
 	void *head;
 	int offset;
 
+	BUG_ON(vi->big_packets && !vi->mergeable_rx_bufs);
+
 	head = page_address(page);
 
 	dma = head;
@@ -885,10 +885,13 @@ static void virtnet_rq_unmap(struct receive_queue *rq, void *buf, u32 len)
 
 static void *virtnet_rq_get_buf(struct receive_queue *rq, u32 *len, void **ctx)
 {
+	struct virtnet_info *vi = rq->vq->vdev->priv;
 	void *buf;
 
+	BUG_ON(vi->big_packets && !vi->mergeable_rx_bufs);
+
 	buf = virtqueue_get_buf_ctx(rq->vq, len, ctx);
-	if (buf && rq->do_dma)
+	if (buf)
 		virtnet_rq_unmap(rq, buf, *len);
 
 	return buf;
@@ -896,15 +899,13 @@ static void *virtnet_rq_get_buf(struct receive_queue *rq, u32 *len, void **ctx)
 
 static void virtnet_rq_init_one_sg(struct receive_queue *rq, void *buf, u32 len)
 {
+	struct virtnet_info *vi = rq->vq->vdev->priv;
 	struct virtnet_rq_dma *dma;
 	dma_addr_t addr;
 	u32 offset;
 	void *head;
 
-	if (!rq->do_dma) {
-		sg_init_one(rq->sg, buf, len);
-		return;
-	}
+	BUG_ON(vi->big_packets && !vi->mergeable_rx_bufs);
 
 	head = page_address(rq->alloc_frag.page);
 
@@ -922,50 +923,51 @@ static void virtnet_rq_init_one_sg(struct receive_queue *rq, void *buf, u32 len)
 static void *virtnet_rq_alloc(struct receive_queue *rq, u32 size, gfp_t gfp)
 {
 	struct page_frag *alloc_frag = &rq->alloc_frag;
+	struct virtnet_info *vi = rq->vq->vdev->priv;
 	struct virtnet_rq_dma *dma;
 	void *buf, *head;
 	dma_addr_t addr;
 
+	BUG_ON(vi->big_packets && !vi->mergeable_rx_bufs);
+
 	head = page_address(alloc_frag->page);
 
-	if (rq->do_dma) {
-		dma = head;
-
-		/* new pages */
-		if (!alloc_frag->offset) {
-			if (rq->last_dma) {
-				/* Now, the new page is allocated, the last dma
-				 * will not be used. So the dma can be unmapped
-				 * if the ref is 0.
-				 */
-				virtnet_rq_unmap(rq, rq->last_dma, 0);
-				rq->last_dma = NULL;
-			}
+	dma = head;
 
-			dma->len = alloc_frag->size - sizeof(*dma);
+	/* new pages */
+	if (!alloc_frag->offset) {
+		if (rq->last_dma) {
+			/* Now, the new page is allocated, the last dma
+			 * will not be used. So the dma can be unmapped
+			 * if the ref is 0.
+			 */
+			virtnet_rq_unmap(rq, rq->last_dma, 0);
+			rq->last_dma = NULL;
+		}
 
-			addr = virtqueue_dma_map_single_attrs(rq->vq, dma + 1,
-							      dma->len, DMA_FROM_DEVICE, 0);
-			if (virtqueue_dma_mapping_error(rq->vq, addr))
-				return NULL;
+		dma->len = alloc_frag->size - sizeof(*dma);
 
-			dma->addr = addr;
-			dma->need_sync = virtqueue_dma_need_sync(rq->vq, addr);
+		addr = virtqueue_dma_map_single_attrs(rq->vq, dma + 1,
+						      dma->len, DMA_FROM_DEVICE, 0);
+		if (virtqueue_dma_mapping_error(rq->vq, addr))
+			return NULL;
 
-			/* Add a reference to dma to prevent the entire dma from
-			 * being released during error handling. This reference
-			 * will be freed after the pages are no longer used.
-			 */
-			get_page(alloc_frag->page);
-			dma->ref = 1;
-			alloc_frag->offset = sizeof(*dma);
+		dma->addr = addr;
+		dma->need_sync = virtqueue_dma_need_sync(rq->vq, addr);
 
-			rq->last_dma = dma;
-		}
+		/* Add a reference to dma to prevent the entire dma from
+		 * being released during error handling. This reference
+		 * will be freed after the pages are no longer used.
+		 */
+		get_page(alloc_frag->page);
+		dma->ref = 1;
+		alloc_frag->offset = sizeof(*dma);
 
-		++dma->ref;
+		rq->last_dma = dma;
 	}
 
+	++dma->ref;
+
 	buf = head + alloc_frag->offset;
 
 	get_page(alloc_frag->page);
@@ -2433,8 +2435,7 @@ static int add_recvbuf_small(struct virtnet_info *vi, struct receive_queue *rq,
 
 	err = virtqueue_add_inbuf_ctx(rq->vq, rq->sg, 1, buf, ctx, gfp);
 	if (err < 0) {
-		if (rq->do_dma)
-			virtnet_rq_unmap(rq, buf, 0);
+		virtnet_rq_unmap(rq, buf, 0);
 		put_page(virt_to_head_page(buf));
 	}
 
@@ -2554,8 +2555,7 @@ static int add_recvbuf_mergeable(struct virtnet_info *vi,
 	ctx = mergeable_len_to_ctx(len + room, headroom);
 	err = virtqueue_add_inbuf_ctx(rq->vq, rq->sg, 1, buf, ctx, gfp);
 	if (err < 0) {
-		if (rq->do_dma)
-			virtnet_rq_unmap(rq, buf, 0);
+		virtnet_rq_unmap(rq, buf, 0);
 		put_page(virt_to_head_page(buf));
 	}
 
@@ -5922,7 +5922,7 @@ static void free_receive_page_frags(struct virtnet_info *vi)
 	int i;
 	for (i = 0; i < vi->max_queue_pairs; i++)
 		if (vi->rq[i].alloc_frag.page) {
-			if (vi->rq[i].do_dma && vi->rq[i].last_dma)
+			if (vi->rq[i].last_dma)
 				virtnet_rq_unmap(&vi->rq[i], vi->rq[i].last_dma, 0);
 			put_page(vi->rq[i].alloc_frag.page);
 		}
@@ -6111,11 +6111,9 @@ static void virtnet_rq_set_premapped(struct virtnet_info *vi)
 {
 	int i;
 
-	for (i = 0; i < vi->max_queue_pairs; i++) {
+	for (i = 0; i < vi->max_queue_pairs; i++)
 		/* error should never happen */
 		BUG_ON(virtqueue_set_dma_premapped(vi->rq[i].vq));
-		vi->rq[i].do_dma = true;
-	}
 }
 
 static int init_vqs(struct virtnet_info *vi)