From patchwork Tue Mar 12 03:35:48 2024
X-Patchwork-Submitter: Xuan Zhuo
X-Patchwork-Id: 13589494
From: Xuan Zhuo
To: virtualization@lists.linux.dev
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, "David S. Miller",
    Eric Dumazet, Jakub Kicinski, Paolo Abeni, netdev@vger.kernel.org
Subject: [PATCH vhost v4 01/10] virtio_ring: introduce vring_need_unmap_buffer
Date: Tue, 12 Mar 2024 11:35:48 +0800
Message-Id: <20240312033557.6351-2-xuanzhuo@linux.alibaba.com>
X-Mailer: git-send-email 2.32.0.3.g01195cf9f
In-Reply-To: <20240312033557.6351-1-xuanzhuo@linux.alibaba.com>
References: <20240312033557.6351-1-xuanzhuo@linux.alibaba.com>
X-Git-Hash: 89bc1d4948eb

To make the code more readable, introduce vring_need_unmap_buffer() to
replace the do_unmap flag:

       use_dma_api   premapped   -> vring_need_unmap_buffer()
    1. false         false          false
    2. true          false          true
    3. true          true           false
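As a quick sanity check of the table, the predicate can be exercised
standalone. The sketch below is only an illustration, not part of the
patch: struct vq_flags and need_unmap() are hypothetical stand-ins for
the two vring_virtqueue fields and the helper this patch introduces.

    #include <assert.h>
    #include <stdbool.h>

    /* Hypothetical stand-in for the two struct vring_virtqueue
     * fields that vring_need_unmap_buffer() consults. */
    struct vq_flags {
            bool use_dma_api;
            bool premapped;
    };

    /* Mirrors vring_need_unmap_buffer(): unmap only when the virtio
     * core mapped the buffer itself, i.e. the DMA API is in use and
     * the driver did not premap the buffer. */
    static bool need_unmap(const struct vq_flags *vq)
    {
            return vq->use_dma_api && !vq->premapped;
    }

    int main(void)
    {
            /* The three rows of the table above. */
            assert(!need_unmap(&(struct vq_flags){ false, false })); /* 1 */
            assert( need_unmap(&(struct vq_flags){ true,  false })); /* 2 */
            assert(!need_unmap(&(struct vq_flags){ true,  true  })); /* 3 */
            return 0;
    }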
Signed-off-by: Xuan Zhuo
Acked-by: Jason Wang
---
 drivers/virtio/virtio_ring.c | 27 ++++++++++++---------------
 1 file changed, 12 insertions(+), 15 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 94c442ba844f..c2779e34aac7 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -175,11 +175,6 @@ struct vring_virtqueue {
 	/* Do DMA mapping by driver */
 	bool premapped;
 
-	/* Do unmap or not for desc. Just when premapped is False and
-	 * use_dma_api is true, this is true.
-	 */
-	bool do_unmap;
-
 	/* Head of free buffer list. */
 	unsigned int free_head;
 	/* Number we've added since last sync. */
@@ -295,6 +290,11 @@ static bool vring_use_dma_api(const struct virtio_device *vdev)
 	return false;
 }
 
+static bool vring_need_unmap_buffer(const struct vring_virtqueue *vring)
+{
+	return vring->use_dma_api && !vring->premapped;
+}
+
 size_t virtio_max_dma_size(const struct virtio_device *vdev)
 {
 	size_t max_segment_size = SIZE_MAX;
@@ -443,7 +443,7 @@ static void vring_unmap_one_split_indirect(const struct vring_virtqueue *vq,
 {
 	u16 flags;
 
-	if (!vq->do_unmap)
+	if (!vring_need_unmap_buffer(vq))
 		return;
 
 	flags = virtio16_to_cpu(vq->vq.vdev, desc->flags);
@@ -473,7 +473,7 @@ static unsigned int vring_unmap_one_split(const struct vring_virtqueue *vq,
 				 (flags & VRING_DESC_F_WRITE) ?
 				 DMA_FROM_DEVICE : DMA_TO_DEVICE);
 	} else {
-		if (!vq->do_unmap)
+		if (!vring_need_unmap_buffer(vq))
 			goto out;
 
 		dma_unmap_page(vring_dma_dev(vq),
@@ -641,7 +641,7 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
 	}
 	/* Last one doesn't continue. */
 	desc[prev].flags &= cpu_to_virtio16(_vq->vdev, ~VRING_DESC_F_NEXT);
-	if (!indirect && vq->do_unmap)
+	if (!indirect && vring_need_unmap_buffer(vq))
 		vq->split.desc_extra[prev & (vq->split.vring.num - 1)].flags &=
 			~VRING_DESC_F_NEXT;
 
@@ -800,7 +800,7 @@ static void detach_buf_split(struct vring_virtqueue *vq, unsigned int head,
 					VRING_DESC_F_INDIRECT));
 		BUG_ON(len == 0 || len % sizeof(struct vring_desc));
 
-		if (vq->do_unmap) {
+		if (vring_need_unmap_buffer(vq)) {
 			for (j = 0; j < len / sizeof(struct vring_desc); j++)
 				vring_unmap_one_split_indirect(vq, &indir_desc[j]);
 		}
 
@@ -1230,7 +1230,7 @@ static void vring_unmap_extra_packed(const struct vring_virtqueue *vq,
 				 (flags & VRING_DESC_F_WRITE) ?
 				 DMA_FROM_DEVICE : DMA_TO_DEVICE);
 	} else {
-		if (!vq->do_unmap)
+		if (!vring_need_unmap_buffer(vq))
 			return;
 
 		dma_unmap_page(vring_dma_dev(vq),
@@ -1245,7 +1245,7 @@ static void vring_unmap_desc_packed(const struct vring_virtqueue *vq,
 {
 	u16 flags;
 
-	if (!vq->do_unmap)
+	if (!vring_need_unmap_buffer(vq))
 		return;
 
 	flags = le16_to_cpu(desc->flags);
@@ -1626,7 +1626,7 @@ static void detach_buf_packed(struct vring_virtqueue *vq,
 	if (!desc)
 		return;
 
-	if (vq->do_unmap) {
+	if (vring_need_unmap_buffer(vq)) {
 		len = vq->packed.desc_extra[id].len;
 
 		for (i = 0; i < len / sizeof(struct vring_packed_desc); i++)
@@ -2080,7 +2080,6 @@ static struct virtqueue *vring_create_virtqueue_packed(struct virtio_device *vde
 	vq->dma_dev = dma_dev;
 	vq->use_dma_api = vring_use_dma_api(vdev);
 	vq->premapped = false;
-	vq->do_unmap = vq->use_dma_api;
 
 	vq->indirect = virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC) &&
 		!cfg_vq_get(cfg, ctx);
@@ -2621,7 +2620,6 @@ static struct virtqueue *__vring_new_virtqueue(struct virtio_device *vdev,
 	vq->dma_dev = tp_cfg->dma_dev;
 	vq->use_dma_api = vring_use_dma_api(vdev);
 	vq->premapped = false;
-	vq->do_unmap = vq->use_dma_api;
 
 	vq->indirect = virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC) &&
 		!cfg_vq_get(cfg, ctx);
@@ -2752,7 +2750,6 @@ int virtqueue_set_dma_premapped(struct virtqueue *_vq)
 	}
 
 	vq->premapped = true;
-	vq->do_unmap = false;
 
 	END_USE(vq);