From patchwork Mon Mar 25 08:54:19 2024
X-Patchwork-Submitter: Xuan Zhuo
X-Patchwork-Id: 13601788
From: Xuan Zhuo
To: virtualization@lists.linux.dev
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, "David S. Miller",
 Eric Dumazet, Jakub Kicinski, Paolo Abeni, netdev@vger.kernel.org
Subject: [PATCH vhost v5 01/10] virtio_ring: introduce vring_need_unmap_buffer
Date: Mon, 25 Mar 2024 16:54:19 +0800
Message-Id: <20240325085428.7275-2-xuanzhuo@linux.alibaba.com>
In-Reply-To: <20240325085428.7275-1-xuanzhuo@linux.alibaba.com>
References: <20240325085428.7275-1-xuanzhuo@linux.alibaba.com>
X-Git-Hash: 630d711f51f7

To make the code more readable, introduce vring_need_unmap_buffer() to
replace the do_unmap field:

       use_dma_api  premapped  ->  vring_need_unmap_buffer()
    1. false        false          false
    2. true         false          true
    3. true         true           false

Signed-off-by: Xuan Zhuo
Acked-by: Jason Wang
---
 drivers/virtio/virtio_ring.c | 27 ++++++++++++---------------
 1 file changed, 12 insertions(+), 15 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 94c442ba844f..c2779e34aac7 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -175,11 +175,6 @@ struct vring_virtqueue {
 	/* Do DMA mapping by driver */
 	bool premapped;
 
-	/* Do unmap or not for desc. Just when premapped is False and
-	 * use_dma_api is true, this is true.
-	 */
-	bool do_unmap;
-
 	/* Head of free buffer list. */
 	unsigned int free_head;
 	/* Number we've added since last sync. */
@@ -295,6 +290,11 @@ static bool vring_use_dma_api(const struct virtio_device *vdev)
 	return false;
 }
 
+static bool vring_need_unmap_buffer(const struct vring_virtqueue *vring)
+{
+	return vring->use_dma_api && !vring->premapped;
+}
+
 size_t virtio_max_dma_size(const struct virtio_device *vdev)
 {
 	size_t max_segment_size = SIZE_MAX;
@@ -443,7 +443,7 @@ static void vring_unmap_one_split_indirect(const struct vring_virtqueue *vq,
 {
 	u16 flags;
 
-	if (!vq->do_unmap)
+	if (!vring_need_unmap_buffer(vq))
 		return;
 
 	flags = virtio16_to_cpu(vq->vq.vdev, desc->flags);
@@ -473,7 +473,7 @@ static unsigned int vring_unmap_one_split(const struct vring_virtqueue *vq,
 			 (flags & VRING_DESC_F_WRITE) ?
 			 DMA_FROM_DEVICE : DMA_TO_DEVICE);
 	} else {
-		if (!vq->do_unmap)
+		if (!vring_need_unmap_buffer(vq))
 			goto out;
 
 		dma_unmap_page(vring_dma_dev(vq),
@@ -641,7 +641,7 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
 	}
 	/* Last one doesn't continue. */
 	desc[prev].flags &= cpu_to_virtio16(_vq->vdev, ~VRING_DESC_F_NEXT);
-	if (!indirect && vq->do_unmap)
+	if (!indirect && vring_need_unmap_buffer(vq))
 		vq->split.desc_extra[prev & (vq->split.vring.num - 1)].flags &=
 			~VRING_DESC_F_NEXT;
@@ -800,7 +800,7 @@ static void detach_buf_split(struct vring_virtqueue *vq, unsigned int head,
 				VRING_DESC_F_INDIRECT));
 		BUG_ON(len == 0 || len % sizeof(struct vring_desc));
 
-		if (vq->do_unmap) {
+		if (vring_need_unmap_buffer(vq)) {
 			for (j = 0; j < len / sizeof(struct vring_desc); j++)
 				vring_unmap_one_split_indirect(vq, &indir_desc[j]);
 		}
@@ -1230,7 +1230,7 @@ static void vring_unmap_extra_packed(const struct vring_virtqueue *vq,
 			 (flags & VRING_DESC_F_WRITE) ?
 			 DMA_FROM_DEVICE : DMA_TO_DEVICE);
 	} else {
-		if (!vq->do_unmap)
+		if (!vring_need_unmap_buffer(vq))
 			return;
 
 		dma_unmap_page(vring_dma_dev(vq),
@@ -1245,7 +1245,7 @@ static void vring_unmap_desc_packed(const struct vring_virtqueue *vq,
 {
 	u16 flags;
 
-	if (!vq->do_unmap)
+	if (!vring_need_unmap_buffer(vq))
 		return;
 
 	flags = le16_to_cpu(desc->flags);
@@ -1626,7 +1626,7 @@ static void detach_buf_packed(struct vring_virtqueue *vq,
 	if (!desc)
 		return;
 
-	if (vq->do_unmap) {
+	if (vring_need_unmap_buffer(vq)) {
 		len = vq->packed.desc_extra[id].len;
 
 		for (i = 0; i < len / sizeof(struct vring_packed_desc); i++)
@@ -2080,7 +2080,6 @@ static struct virtqueue *vring_create_virtqueue_packed(struct virtio_device *vdev,
 	vq->dma_dev = dma_dev;
 	vq->use_dma_api = vring_use_dma_api(vdev);
 	vq->premapped = false;
-	vq->do_unmap = vq->use_dma_api;
 
 	vq->indirect = virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC) &&
 		!cfg_vq_get(cfg, ctx);
@@ -2621,7 +2620,6 @@ static struct virtqueue *__vring_new_virtqueue(struct virtio_device *vdev,
 	vq->dma_dev = tp_cfg->dma_dev;
 	vq->use_dma_api = vring_use_dma_api(vdev);
 	vq->premapped = false;
-	vq->do_unmap = vq->use_dma_api;
 
 	vq->indirect = virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC) &&
 		!cfg_vq_get(cfg, ctx);
@@ -2752,7 +2750,6 @@ int virtqueue_set_dma_premapped(struct virtqueue *_vq)
 	}
 
 	vq->premapped = true;
-	vq->do_unmap = false;
 
 	END_USE(vq);
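[Editor's illustration] The truth table above maps directly onto the new
helper. A minimal user-space sketch reproducing the three rows (the struct
here is a two-field stand-in for the kernel's vring_virtqueue, not the
real definition):

    #include <stdbool.h>
    #include <stdio.h>

    /* Two-field stand-in for the kernel's vring_virtqueue. */
    struct vring_virtqueue {
            bool use_dma_api;
            bool premapped;
    };

    /* Same logic as the helper added by this patch. */
    static bool vring_need_unmap_buffer(const struct vring_virtqueue *vring)
    {
            return vring->use_dma_api && !vring->premapped;
    }

    int main(void)
    {
            const struct vring_virtqueue rows[] = {
                    { .use_dma_api = false, .premapped = false },
                    { .use_dma_api = true,  .premapped = false },
                    { .use_dma_api = true,  .premapped = true  },
            };

            for (int i = 0; i < 3; i++)
                    printf("%d. use_dma_api=%d premapped=%d -> %d\n", i + 1,
                           rows[i].use_dma_api, rows[i].premapped,
                           vring_need_unmap_buffer(&rows[i]));
            return 0;
    }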
From patchwork Mon Mar 25 08:54:20 2024
X-Patchwork-Submitter: Xuan Zhuo
X-Patchwork-Id: 13601789
From: Xuan Zhuo
To: virtualization@lists.linux.dev
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, "David S. Miller",
 Eric Dumazet, Jakub Kicinski, Paolo Abeni, netdev@vger.kernel.org
Subject: [PATCH vhost v5 02/10] virtio_ring: packed: remove double check of the unmap ops
Date: Mon, 25 Mar 2024 16:54:20 +0800
Message-Id: <20240325085428.7275-3-xuanzhuo@linux.alibaba.com>
In-Reply-To: <20240325085428.7275-1-xuanzhuo@linux.alibaba.com>
References: <20240325085428.7275-1-xuanzhuo@linux.alibaba.com>
X-Git-Hash: 630d711f51f7

In the functions vring_unmap_extra_packed() and vring_unmap_desc_packed(),
the same checks are repeated: whether an unmap should be performed at all,
and whether the descriptor is INDIRECT. These functions are usually called
in a loop, so the checks should be hoisted out of the loop.

Unmapping descriptors that carry VRING_DESC_F_INDIRECT on the same path as
the other descriptors also makes things more complex than necessary. If we
separate the VRING_DESC_F_INDIRECT descriptors before unmapping, the code
becomes clearer. For a descriptor with the VRING_DESC_F_INDIRECT flag:

1. Only one entry of the descriptor table is used, so no loop is needed.
   (Theoretically, indirect descriptors could be chained, but "add" does
   not support that today, so we ignore this case.)
2. The unmap API to call differs from the one used for the other
   descriptors.
3. vq->premapped does not need to be checked.
4. vq->indirect does not need to be checked.
5. state->indir_desc must not be NULL.

Signed-off-by: Xuan Zhuo
---
 drivers/virtio/virtio_ring.c | 78 ++++++++++++++++++------------------
 1 file changed, 40 insertions(+), 38 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index c2779e34aac7..0dfbd17e5a87 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -1214,6 +1214,7 @@ static u16 packed_last_used(u16 last_used_idx)
 	return last_used_idx & ~(-(1 << VRING_PACKED_EVENT_F_WRAP_CTR));
 }
 
+/* caller must check vring_need_unmap_buffer() */
 static void vring_unmap_extra_packed(const struct vring_virtqueue *vq,
 				     const struct vring_desc_extra *extra)
 {
@@ -1221,33 +1222,18 @@ static void vring_unmap_extra_packed(const struct vring_virtqueue *vq,
 
 	flags = extra->flags;
 
-	if (flags & VRING_DESC_F_INDIRECT) {
-		if (!vq->use_dma_api)
-			return;
-
-		dma_unmap_single(vring_dma_dev(vq),
-				 extra->addr, extra->len,
-				 (flags & VRING_DESC_F_WRITE) ?
-				 DMA_FROM_DEVICE : DMA_TO_DEVICE);
-	} else {
-		if (!vring_need_unmap_buffer(vq))
-			return;
-
-		dma_unmap_page(vring_dma_dev(vq),
-			       extra->addr, extra->len,
-			       (flags & VRING_DESC_F_WRITE) ?
-			       DMA_FROM_DEVICE : DMA_TO_DEVICE);
-	}
+	dma_unmap_page(vring_dma_dev(vq),
+		       extra->addr, extra->len,
+		       (flags & VRING_DESC_F_WRITE) ?
+		       DMA_FROM_DEVICE : DMA_TO_DEVICE);
 }
 
+/* caller must check vring_need_unmap_buffer() */
 static void vring_unmap_desc_packed(const struct vring_virtqueue *vq,
 				    const struct vring_packed_desc *desc)
 {
 	u16 flags;
 
-	if (!vring_need_unmap_buffer(vq))
-		return;
-
 	flags = le16_to_cpu(desc->flags);
 
 	dma_unmap_page(vring_dma_dev(vq),
@@ -1323,7 +1309,7 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
 				total_sg * sizeof(struct vring_packed_desc),
 				DMA_TO_DEVICE);
 	if (vring_mapping_error(vq, addr)) {
-		if (vq->premapped)
+		if (!vring_need_unmap_buffer(vq))
 			goto free_desc;
 
 		goto unmap_release;
@@ -1338,10 +1324,11 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
 		vq->packed.desc_extra[id].addr = addr;
 		vq->packed.desc_extra[id].len = total_sg *
 				sizeof(struct vring_packed_desc);
-		vq->packed.desc_extra[id].flags = VRING_DESC_F_INDIRECT |
-			vq->packed.avail_used_flags;
 	}
 
+	vq->packed.desc_extra[id].flags = VRING_DESC_F_INDIRECT |
+		vq->packed.avail_used_flags;
+
 	/*
 	 * A driver MUST NOT make the first descriptor in the list
 	 * available before all subsequent descriptors comprising
@@ -1382,6 +1369,8 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
 unmap_release:
 	err_idx = i;
 
+	WARN_ON(!vring_need_unmap_buffer(vq));
+
 	for (i = 0; i < err_idx; i++)
 		vring_unmap_desc_packed(vq, &desc[i]);
 
@@ -1475,12 +1464,13 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
 			desc[i].len = cpu_to_le32(sg->length);
 			desc[i].id = cpu_to_le16(id);
 
-			if (unlikely(vq->use_dma_api)) {
+			if (vring_need_unmap_buffer(vq)) {
 				vq->packed.desc_extra[curr].addr = addr;
 				vq->packed.desc_extra[curr].len = sg->length;
-				vq->packed.desc_extra[curr].flags =
-					le16_to_cpu(flags);
 			}
+
+			vq->packed.desc_extra[curr].flags = le16_to_cpu(flags);
+
 			prev = curr;
 			curr = vq->packed.desc_extra[curr].next;
 
@@ -1530,6 +1520,8 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
 
 	vq->packed.avail_used_flags = avail_used_flags;
 
+	WARN_ON(!vring_need_unmap_buffer(vq));
+
 	for (n = 0; n < total_sg; n++) {
 		if (i == err_idx)
 			break;
@@ -1599,7 +1591,9 @@ static void detach_buf_packed(struct vring_virtqueue *vq,
 	struct vring_desc_state_packed *state = NULL;
 	struct vring_packed_desc *desc;
 	unsigned int i, curr;
+	u16 flags;
 
+	flags = vq->packed.desc_extra[id].flags;
 	state = &vq->packed.desc_state[id];
 
 	/* Clear data ptr. */
@@ -1609,22 +1603,32 @@ static void detach_buf_packed(struct vring_virtqueue *vq,
 	vq->free_head = id;
 	vq->vq.num_free += state->num;
 
-	if (unlikely(vq->use_dma_api)) {
-		curr = id;
-		for (i = 0; i < state->num; i++) {
-			vring_unmap_extra_packed(vq,
-						 &vq->packed.desc_extra[curr]);
-			curr = vq->packed.desc_extra[curr].next;
+	if (!(flags & VRING_DESC_F_INDIRECT)) {
+		if (vring_need_unmap_buffer(vq)) {
+			curr = id;
+			for (i = 0; i < state->num; i++) {
+				vring_unmap_extra_packed(vq,
+							 &vq->packed.desc_extra[curr]);
+				curr = vq->packed.desc_extra[curr].next;
+			}
 		}
-	}
 
-	if (vq->indirect) {
+		if (ctx)
+			*ctx = state->indir_desc;
+	} else {
+		const struct vring_desc_extra *extra;
 		u32 len;
 
+		if (vq->use_dma_api) {
+			extra = &vq->packed.desc_extra[id];
+			dma_unmap_single(vring_dma_dev(vq),
+					 extra->addr, extra->len,
+					 (flags & VRING_DESC_F_WRITE) ?
+					 DMA_FROM_DEVICE : DMA_TO_DEVICE);
+		}
+
 		/* Free the indirect table, if any, now that it's unmapped. */
 		desc = state->indir_desc;
-		if (!desc)
-			return;
 
 		if (vring_need_unmap_buffer(vq)) {
 			len = vq->packed.desc_extra[id].len;
@@ -1634,8 +1638,6 @@ static void detach_buf_packed(struct vring_virtqueue *vq,
 		}
 		kfree(desc);
 		state->indir_desc = NULL;
-	} else if (ctx) {
-		*ctx = state->indir_desc;
 	}
 }
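[Editor's illustration] The shape of the refactoring here, hoisting a
loop-invariant condition out of the per-descriptor helpers, can be sketched
in user-space C; unmap_one() is a hypothetical stand-in for
vring_unmap_extra_packed(), not the kernel function:

    #include <stdbool.h>
    #include <stddef.h>

    struct vq { bool use_dma_api; bool premapped; size_t num; };

    static bool vring_need_unmap_buffer(const struct vq *vq)
    {
            return vq->use_dma_api && !vq->premapped;
    }

    /* Hypothetical per-descriptor unmap. */
    static void unmap_one(struct vq *vq, size_t i) { (void)vq; (void)i; }

    /* Before: the invariant is re-checked on every iteration. */
    static void detach_before(struct vq *vq)
    {
            for (size_t i = 0; i < vq->num; i++) {
                    if (!vring_need_unmap_buffer(vq)) /* loop-invariant */
                            continue;
                    unmap_one(vq, i);
            }
    }

    /* After: check once, then run a branch-free loop. */
    static void detach_after(struct vq *vq)
    {
            if (!vring_need_unmap_buffer(vq))
                    return;
            for (size_t i = 0; i < vq->num; i++)
                    unmap_one(vq, i);
    }

    int main(void)
    {
            struct vq vq = { .use_dma_api = true, .premapped = false, .num = 4 };
            detach_before(&vq);
            detach_after(&vq);
            return 0;
    }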
From patchwork Mon Mar 25 08:54:21 2024
X-Patchwork-Submitter: Xuan Zhuo
X-Patchwork-Id: 13601785
From: Xuan Zhuo
To: virtualization@lists.linux.dev
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, "David S. Miller",
 Eric Dumazet, Jakub Kicinski, Paolo Abeni, netdev@vger.kernel.org
Subject: [PATCH vhost v5 03/10] virtio_ring: packed: structure the indirect desc table
Date: Mon, 25 Mar 2024 16:54:21 +0800
Message-Id: <20240325085428.7275-4-xuanzhuo@linux.alibaba.com>
In-Reply-To: <20240325085428.7275-1-xuanzhuo@linux.alibaba.com>
References: <20240325085428.7275-1-xuanzhuo@linux.alibaba.com>
X-Git-Hash: 630d711f51f7

This commit gives the indirect descriptor table a structure of its own.
With it, we can get the number of descriptors directly when unmapping.
The DMA info is saved in that structure, so the indirect path no longer
uses the DMA fields of desc_extra.

Subsequent commits will make those DMA fields optional, but for the
indirect case we must always record the DMA info.

Signed-off-by: Xuan Zhuo
---
 drivers/virtio/virtio_ring.c | 61 +++++++++++++++++++-----------------
 1 file changed, 33 insertions(+), 28 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 0dfbd17e5a87..cf17456f4d95 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -74,7 +74,7 @@ struct vring_desc_state_split {
 struct vring_desc_state_packed {
 	void *data;			/* Data for callback. */
-	struct vring_packed_desc *indir_desc; /* Indirect descriptor, if any. */
+	struct vring_desc_extra *indir_desc; /* Indirect descriptor, if any. */
 	u16 num;			/* Descriptor list length. */
 	u16 last;			/* The last desc state in a list. */
 };
@@ -1243,10 +1243,13 @@ static void vring_unmap_desc_packed(const struct vring_virtqueue *vq,
 			 DMA_FROM_DEVICE : DMA_TO_DEVICE);
 }
 
-static struct vring_packed_desc *alloc_indirect_packed(unsigned int total_sg,
-						       gfp_t gfp)
+static struct vring_desc_extra *alloc_indirect_packed(unsigned int total_sg,
+						      gfp_t gfp)
 {
-	struct vring_packed_desc *desc;
+	struct vring_desc_extra *in_extra;
+	u32 size;
+
+	size = sizeof(*in_extra) + sizeof(struct vring_packed_desc) * total_sg;
 
 	/*
 	 * We require lowmem mappings for the descriptors because
@@ -1255,9 +1258,10 @@ static struct vring_desc_extra *alloc_indirect_packed(unsigned int total_sg,
 	 */
 	gfp &= ~__GFP_HIGHMEM;
 
-	desc = kmalloc_array(total_sg, sizeof(struct vring_packed_desc), gfp);
-	return desc;
+	in_extra = kmalloc(size, gfp);
+
+	return in_extra;
 }
 
 static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
@@ -1268,6 +1272,7 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
 					 void *data,
 					 gfp_t gfp)
 {
+	struct vring_desc_extra *in_extra;
 	struct vring_packed_desc *desc;
 	struct scatterlist *sg;
 	unsigned int i, n, err_idx;
@@ -1275,10 +1280,12 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
 	dma_addr_t addr;
 
 	head = vq->packed.next_avail_idx;
-	desc = alloc_indirect_packed(total_sg, gfp);
-	if (!desc)
+	in_extra = alloc_indirect_packed(total_sg, gfp);
+	if (!in_extra)
 		return -ENOMEM;
 
+	desc = (struct vring_packed_desc *)(in_extra + 1);
+
 	if (unlikely(vq->vq.num_free < 1)) {
 		pr_debug("Can't add buf len 1 - avail = 0\n");
 		kfree(desc);
@@ -1315,17 +1322,16 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
 		goto unmap_release;
 	}
 
+	if (vq->use_dma_api) {
+		in_extra->addr = addr;
+		in_extra->len = total_sg * sizeof(struct vring_packed_desc);
+	}
+
 	vq->packed.vring.desc[head].addr = cpu_to_le64(addr);
 	vq->packed.vring.desc[head].len = cpu_to_le32(total_sg *
 				sizeof(struct vring_packed_desc));
 	vq->packed.vring.desc[head].id = cpu_to_le16(id);
 
-	if (vq->use_dma_api) {
-		vq->packed.desc_extra[id].addr = addr;
-		vq->packed.desc_extra[id].len = total_sg *
-				sizeof(struct vring_packed_desc);
-	}
-
 	vq->packed.desc_extra[id].flags = VRING_DESC_F_INDIRECT |
 		vq->packed.avail_used_flags;
 
@@ -1356,7 +1362,7 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
 	/* Store token and indirect buffer state. */
 	vq->packed.desc_state[id].num = 1;
 	vq->packed.desc_state[id].data = data;
-	vq->packed.desc_state[id].indir_desc = desc;
+	vq->packed.desc_state[id].indir_desc = in_extra;
 	vq->packed.desc_state[id].last = id;
 
 	vq->num_added += 1;
@@ -1375,7 +1381,7 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
 		vring_unmap_desc_packed(vq, &desc[i]);
 
 free_desc:
-	kfree(desc);
+	kfree(in_extra);
 
 	END_USE(vq);
 	return -ENOMEM;
@@ -1589,7 +1595,6 @@ static void detach_buf_packed(struct vring_virtqueue *vq,
 			      unsigned int id, void **ctx)
 {
 	struct vring_desc_state_packed *state = NULL;
-	struct vring_packed_desc *desc;
 	unsigned int i, curr;
 	u16 flags;
 
@@ -1616,27 +1621,27 @@ static void detach_buf_packed(struct vring_virtqueue *vq,
 		if (ctx)
 			*ctx = state->indir_desc;
 	} else {
-		const struct vring_desc_extra *extra;
-		u32 len;
+		struct vring_desc_extra *in_extra;
+		struct vring_packed_desc *desc;
+		u32 num;
+
+		in_extra = state->indir_desc;
 
 		if (vq->use_dma_api) {
-			extra = &vq->packed.desc_extra[id];
 			dma_unmap_single(vring_dma_dev(vq),
-					 extra->addr, extra->len,
+					 in_extra->addr, in_extra->len,
 					 (flags & VRING_DESC_F_WRITE) ?
 					 DMA_FROM_DEVICE : DMA_TO_DEVICE);
 		}
 
-		/* Free the indirect table, if any, now that it's unmapped. */
-		desc = state->indir_desc;
-
 		if (vring_need_unmap_buffer(vq)) {
-			len = vq->packed.desc_extra[id].len;
-			for (i = 0; i < len / sizeof(struct vring_packed_desc);
-					i++)
+			num = in_extra->len / sizeof(struct vring_packed_desc);
+			desc = (struct vring_packed_desc *)(in_extra + 1);
+
+			for (i = 0; i < num; i++)
 				vring_unmap_desc_packed(vq, &desc[i]);
 		}
-		kfree(desc);
+		kfree(in_extra);
 		state->indir_desc = NULL;
 	}
 }
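[Editor's illustration] The allocation trick this patch relies on, one
allocation holding a small header followed by the descriptor table, with
`in_extra + 1` pointing at the first table entry, looks like this in
isolation. A user-space sketch with malloc() in place of kmalloc(); the
struct layouts are simplified stand-ins:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct desc_extra {     /* header: DMA info for the whole table */
            uint64_t addr;
            uint32_t len;
    };

    struct packed_desc {    /* one descriptor table entry */
            uint64_t addr;
            uint32_t len;
            uint16_t id;
            uint16_t flags;
    };

    int main(void)
    {
            unsigned int total_sg = 8;
            size_t size = sizeof(struct desc_extra) +
                          sizeof(struct packed_desc) * total_sg;

            struct desc_extra *in_extra = malloc(size);
            if (!in_extra)
                    return 1;

            /* The descriptor table starts right after the header. */
            struct packed_desc *desc = (struct packed_desc *)(in_extra + 1);
            desc[0].flags = 0;

            /* Recording the table size in the header lets teardown
             * recompute the entry count without desc_extra. */
            in_extra->len = sizeof(struct packed_desc) * total_sg;
            printf("entries recoverable at detach: %zu\n",
                   in_extra->len / sizeof(*desc));

            free(in_extra); /* header and table freed together */
            return 0;
    }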
From patchwork Mon Mar 25 08:54:22 2024
X-Patchwork-Submitter: Xuan Zhuo
X-Patchwork-Id: 13601786
From: Xuan Zhuo
To: virtualization@lists.linux.dev
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, "David S. Miller",
 Eric Dumazet, Jakub Kicinski, Paolo Abeni, netdev@vger.kernel.org
Subject: [PATCH vhost v5 04/10] virtio_ring: split: remove double check of the unmap ops
Date: Mon, 25 Mar 2024 16:54:22 +0800
Message-Id: <20240325085428.7275-5-xuanzhuo@linux.alibaba.com>
In-Reply-To: <20240325085428.7275-1-xuanzhuo@linux.alibaba.com>
References: <20240325085428.7275-1-xuanzhuo@linux.alibaba.com>
X-Git-Hash: 630d711f51f7

In the functions vring_unmap_one_split() and vring_unmap_one_split_indirect(),
the same checks are repeated: whether an unmap should be performed at all,
and whether the descriptor is INDIRECT. These functions are usually called
in a loop, so the checks should be hoisted out of the loop.

Unmapping descriptors that carry VRING_DESC_F_INDIRECT on the same path as
the other descriptors also makes things more complex than necessary. If we
separate the VRING_DESC_F_INDIRECT descriptors before unmapping, the code
becomes clearer. For a descriptor with the VRING_DESC_F_INDIRECT flag:

1. Only one entry of the descriptor table is used, so no loop is needed.
   (Theoretically, indirect descriptors could be chained, but "add" does
   not support that today, so we ignore this case.)
2. The unmap API to call differs from the one used for the other
   descriptors.
3. vq->premapped does not need to be checked.
4. vq->indirect does not need to be checked.
5. state->indir_desc must not be NULL.

Signed-off-by: Xuan Zhuo
---
 drivers/virtio/virtio_ring.c | 79 +++++++++++++++++-------------------
 1 file changed, 38 insertions(+), 41 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index cf17456f4d95..a8d176abc9ea 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -443,9 +443,6 @@ static void vring_unmap_one_split_indirect(const struct vring_virtqueue *vq,
 {
 	u16 flags;
 
-	if (!vring_need_unmap_buffer(vq))
-		return;
-
 	flags = virtio16_to_cpu(vq->vq.vdev, desc->flags);
 
 	dma_unmap_page(vring_dma_dev(vq),
@@ -463,27 +460,12 @@ static unsigned int vring_unmap_one_split(const struct vring_virtqueue *vq,
 
 	flags = extra[i].flags;
 
-	if (flags & VRING_DESC_F_INDIRECT) {
-		if (!vq->use_dma_api)
-			goto out;
-
-		dma_unmap_single(vring_dma_dev(vq),
-				 extra[i].addr,
-				 extra[i].len,
-				 (flags & VRING_DESC_F_WRITE) ?
-				 DMA_FROM_DEVICE : DMA_TO_DEVICE);
-	} else {
-		if (!vring_need_unmap_buffer(vq))
-			goto out;
-
-		dma_unmap_page(vring_dma_dev(vq),
-			       extra[i].addr,
-			       extra[i].len,
-			       (flags & VRING_DESC_F_WRITE) ?
-			       DMA_FROM_DEVICE : DMA_TO_DEVICE);
-	}
+	dma_unmap_page(vring_dma_dev(vq),
+		       extra[i].addr,
+		       extra[i].len,
+		       (flags & VRING_DESC_F_WRITE) ?
+		       DMA_FROM_DEVICE : DMA_TO_DEVICE);
 
-out:
 	return extra[i].next;
 }
 
@@ -651,7 +633,7 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
 			vq, desc, total_sg * sizeof(struct vring_desc),
 			DMA_TO_DEVICE);
 		if (vring_mapping_error(vq, addr)) {
-			if (vq->premapped)
+			if (!vring_need_unmap_buffer(vq))
 				goto free_indirect;
 
 			goto unmap_release;
@@ -704,6 +686,9 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
 	return 0;
 
 unmap_release:
+
+	WARN_ON(!vring_need_unmap_buffer(vq));
+
 	err_idx = i;
 
 	if (indirect)
@@ -765,34 +750,42 @@ static void detach_buf_split(struct vring_virtqueue *vq, unsigned int head,
 {
 	unsigned int i, j;
 	__virtio16 nextflag = cpu_to_virtio16(vq->vq.vdev, VRING_DESC_F_NEXT);
+	u16 flags;
 
 	/* Clear data ptr. */
 	vq->split.desc_state[head].data = NULL;
 
+	flags = vq->split.desc_extra[head].flags;
 	/* Put back on free list: unmap first-level descriptors and find end */
 	i = head;
 
-	while (vq->split.vring.desc[i].flags & nextflag) {
-		vring_unmap_one_split(vq, i);
-		i = vq->split.desc_extra[i].next;
-		vq->vq.num_free++;
-	}
-
-	vring_unmap_one_split(vq, i);
-	vq->split.desc_extra[i].next = vq->free_head;
-	vq->free_head = head;
+	if (!(flags & VRING_DESC_F_INDIRECT)) {
+		while (vq->split.vring.desc[i].flags & nextflag) {
+			if (vring_need_unmap_buffer(vq))
+				vring_unmap_one_split(vq, i);
+			i = vq->split.desc_extra[i].next;
+			vq->vq.num_free++;
+		}
 
-	/* Plus final descriptor */
-	vq->vq.num_free++;
+		if (vring_need_unmap_buffer(vq))
+			vring_unmap_one_split(vq, i);
 
-	if (vq->indirect) {
+		if (ctx)
+			*ctx = vq->split.desc_state[head].indir_desc;
+	} else {
 		struct vring_desc *indir_desc =
 				vq->split.desc_state[head].indir_desc;
 		u32 len;
 
-		/* Free the indirect table, if any, now that it's unmapped. */
-		if (!indir_desc)
-			return;
+		if (vq->use_dma_api) {
+			struct vring_desc_extra *extra = vq->split.desc_extra;
+
+			dma_unmap_single(vring_dma_dev(vq),
+					 extra[i].addr,
+					 extra[i].len,
+					 (flags & VRING_DESC_F_WRITE) ?
+					 DMA_FROM_DEVICE : DMA_TO_DEVICE);
+		}
 
 		len = vq->split.desc_extra[head].len;
 
@@ -807,9 +800,13 @@ static void detach_buf_split(struct vring_virtqueue *vq, unsigned int head,
 
 		kfree(indir_desc);
 		vq->split.desc_state[head].indir_desc = NULL;
-	} else if (ctx) {
-		*ctx = vq->split.desc_state[head].indir_desc;
 	}
+
+	vq->split.desc_extra[i].next = vq->free_head;
+	vq->free_head = head;
+
+	/* Plus final descriptor */
+	vq->vq.num_free++;
 }
 
 static bool more_used_split(const struct vring_virtqueue *vq)
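[Editor's illustration] The reworked detach flow branches on INDIRECT
first, then decides which unmap API applies. A runnable skeleton of that
control flow; the helpers and the global flag are stand-ins for the kernel
code, not its real API:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define VRING_DESC_F_INDIRECT 4

    static bool need_unmap = true; /* stands in for vring_need_unmap_buffer(vq) */

    static void unmap_chain_entry(unsigned int i)   /* vring_unmap_one_split() */
    {
            printf("dma_unmap_page(desc %u)\n", i);
    }

    static void unmap_indirect_table(void)          /* dma_unmap_single() path */
    {
            printf("dma_unmap_single(indirect table)\n");
    }

    static void detach(uint16_t head_flags, unsigned int chain_len)
    {
            if (!(head_flags & VRING_DESC_F_INDIRECT)) {
                    /* Direct chain: per-buffer unmap only if the core mapped it. */
                    if (need_unmap)
                            for (unsigned int i = 0; i < chain_len; i++)
                                    unmap_chain_entry(i);
            } else {
                    /* Indirect: exactly one first-level descriptor; the table
                     * itself is unmapped whenever use_dma_api is set, even in
                     * premapped mode, because the core always maps the table. */
                    unmap_indirect_table();
            }
    }

    int main(void)
    {
            detach(0, 3);                     /* direct chain of 3 */
            detach(VRING_DESC_F_INDIRECT, 1); /* indirect table */
            return 0;
    }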
From patchwork Mon Mar 25 08:54:23 2024
X-Patchwork-Submitter: Xuan Zhuo
X-Patchwork-Id: 13601783
From: Xuan Zhuo
To: virtualization@lists.linux.dev
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, "David S. Miller",
 Eric Dumazet, Jakub Kicinski, Paolo Abeni, netdev@vger.kernel.org
Subject: [PATCH vhost v5 05/10] virtio_ring: split: structure the indirect desc table
Date: Mon, 25 Mar 2024 16:54:23 +0800
Message-Id: <20240325085428.7275-6-xuanzhuo@linux.alibaba.com>
In-Reply-To: <20240325085428.7275-1-xuanzhuo@linux.alibaba.com>
References: <20240325085428.7275-1-xuanzhuo@linux.alibaba.com>
X-Git-Hash: 630d711f51f7

This commit gives the indirect descriptor table a structure of its own.
With it, we can get the number of descriptors directly when unmapping.
The DMA info is saved in that structure, so the indirect path no longer
uses the DMA fields of desc_extra.

Subsequent commits will make those DMA fields optional, but for the
indirect case we must always record the DMA info.

Signed-off-by: Xuan Zhuo
---
 drivers/virtio/virtio_ring.c | 87 +++++++++++++++++++++---------------
 1 file changed, 51 insertions(+), 36 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index a8d176abc9ea..980f81f5ab76 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -69,7 +69,7 @@
 struct vring_desc_state_split {
 	void *data;			/* Data for callback. */
-	struct vring_desc *indir_desc;	/* Indirect descriptor, if any. */
+	struct vring_desc_extra *indir_desc; /* Indirect descriptor, if any. */
 };
 
 struct vring_desc_state_packed {
@@ -469,12 +469,16 @@ static unsigned int vring_unmap_one_split(const struct vring_virtqueue *vq,
 	return extra[i].next;
 }
 
-static struct vring_desc *alloc_indirect_split(struct virtqueue *_vq,
-					       unsigned int total_sg,
-					       gfp_t gfp)
+static struct vring_desc_extra *alloc_indirect_split(struct virtqueue *_vq,
+						     unsigned int total_sg,
+						     gfp_t gfp)
 {
+	struct vring_desc_extra *in_extra;
 	struct vring_desc *desc;
 	unsigned int i;
+	u32 size;
+
+	size = sizeof(*in_extra) + sizeof(struct vring_desc) * total_sg;
 
 	/*
 	 * We require lowmem mappings for the descriptors because
@@ -483,13 +487,16 @@ static struct vring_desc_extra *alloc_indirect_split(struct virtqueue *_vq,
 	 */
 	gfp &= ~__GFP_HIGHMEM;
 
-	desc = kmalloc_array(total_sg, sizeof(struct vring_desc), gfp);
-	if (!desc)
+	in_extra = kmalloc(size, gfp);
+	if (!in_extra)
 		return NULL;
 
+	desc = (struct vring_desc *)(in_extra + 1);
+
 	for (i = 0; i < total_sg; i++)
 		desc[i].next = cpu_to_virtio16(_vq->vdev, i + 1);
-	return desc;
+
+	return in_extra;
 }
 
 static inline unsigned int virtqueue_add_desc_split(struct virtqueue *vq,
@@ -531,6 +538,7 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
 				      gfp_t gfp)
 {
 	struct vring_virtqueue *vq = to_vvq(_vq);
+	struct vring_desc_extra *in_extra;
 	struct scatterlist *sg;
 	struct vring_desc *desc;
 	unsigned int i, n, avail, descs_used, prev, err_idx;
@@ -553,9 +561,13 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
 
 	head = vq->free_head;
 
-	if (virtqueue_use_indirect(vq, total_sg))
-		desc = alloc_indirect_split(_vq, total_sg, gfp);
-	else {
+	if (virtqueue_use_indirect(vq, total_sg)) {
+		in_extra = alloc_indirect_split(_vq, total_sg, gfp);
+		if (!in_extra)
+			desc = NULL;
+		else
+			desc = (struct vring_desc *)(in_extra + 1);
+	} else {
 		desc = NULL;
 		WARN_ON_ONCE(total_sg > vq->split.vring.num && !vq->indirect);
 	}
@@ -628,10 +640,10 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
 			~VRING_DESC_F_NEXT;
 
 	if (indirect) {
+		u32 size = total_sg * sizeof(struct vring_desc);
+
 		/* Now that the indirect table is filled in, map it. */
-		dma_addr_t addr = vring_map_single(
-			vq, desc, total_sg * sizeof(struct vring_desc),
-			DMA_TO_DEVICE);
+		dma_addr_t addr = vring_map_single(vq, desc, size, DMA_TO_DEVICE);
 		if (vring_mapping_error(vq, addr)) {
 			if (!vring_need_unmap_buffer(vq))
 				goto free_indirect;
@@ -639,11 +651,18 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
 			goto unmap_release;
 		}
 
-		virtqueue_add_desc_split(_vq, vq->split.vring.desc,
-					 head, addr,
-					 total_sg * sizeof(struct vring_desc),
-					 VRING_DESC_F_INDIRECT,
-					 false);
+		desc = &vq->split.vring.desc[head];
+
+		desc->flags = cpu_to_virtio16(_vq->vdev, VRING_DESC_F_INDIRECT);
+		desc->addr = cpu_to_virtio64(_vq->vdev, addr);
+		desc->len = cpu_to_virtio32(_vq->vdev, size);
+
+		vq->split.desc_extra[head].flags = VRING_DESC_F_INDIRECT;
+
+		if (vq->use_dma_api) {
+			in_extra->addr = addr;
+			in_extra->len = size;
+		}
 	}
 
 	/* We're using some buffers from the free list. */
@@ -658,7 +677,7 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
 	/* Store token and indirect buffer state. */
 	vq->split.desc_state[head].data = data;
 	if (indirect)
-		vq->split.desc_state[head].indir_desc = desc;
+		vq->split.desc_state[head].indir_desc = in_extra;
 	else
 		vq->split.desc_state[head].indir_desc = ctx;
 
@@ -708,7 +727,7 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
 
 free_indirect:
 	if (indirect)
-		kfree(desc);
+		kfree(in_extra);
 
 	END_USE(vq);
 	return -ENOMEM;
@@ -773,32 +792,28 @@ static void detach_buf_split(struct vring_virtqueue *vq, unsigned int head,
 		if (ctx)
 			*ctx = vq->split.desc_state[head].indir_desc;
 	} else {
-		struct vring_desc *indir_desc =
-				vq->split.desc_state[head].indir_desc;
-		u32 len;
+		struct vring_desc_extra *in_extra;
+		struct vring_desc *desc;
+		u32 num;
 
-		if (vq->use_dma_api) {
-			struct vring_desc_extra *extra = vq->split.desc_extra;
+		in_extra = vq->split.desc_state[head].indir_desc;
 
+		if (vq->use_dma_api) {
 			dma_unmap_single(vring_dma_dev(vq),
-					 extra[i].addr,
-					 extra[i].len,
+					 in_extra->addr, in_extra->len,
 					 (flags & VRING_DESC_F_WRITE) ?
 					 DMA_FROM_DEVICE : DMA_TO_DEVICE);
 		}
 
-		len = vq->split.desc_extra[head].len;
-
-		BUG_ON(!(vq->split.desc_extra[head].flags &
-				VRING_DESC_F_INDIRECT));
-		BUG_ON(len == 0 || len % sizeof(struct vring_desc));
-
 		if (vring_need_unmap_buffer(vq)) {
-			for (j = 0; j < len / sizeof(struct vring_desc); j++)
-				vring_unmap_one_split_indirect(vq, &indir_desc[j]);
+			num = in_extra->len / sizeof(struct vring_desc);
+			desc = (struct vring_desc *)(in_extra + 1);
+
+			for (j = 0; j < num; j++)
+				vring_unmap_one_split_indirect(vq, &desc[j]);
 		}
 
-		kfree(indir_desc);
+		kfree(in_extra);
 		vq->split.desc_state[head].indir_desc = NULL;
 	}
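[Editor's illustration] The split-ring variant adds one wrinkle over the
packed case: the pre-linking of the `next` chain inside the trailing table.
A user-space sketch of alloc_indirect_split() after this patch, with
malloc() in place of kmalloc() and simplified struct layouts:

    #include <stdint.h>
    #include <stdlib.h>

    struct desc_extra { uint64_t addr; uint32_t len; };
    struct vring_desc { uint64_t addr; uint32_t len; uint16_t flags; uint16_t next; };

    /* Header + table in one allocation, next-chain pre-linked. */
    static struct desc_extra *alloc_indirect_split(unsigned int total_sg)
    {
            struct desc_extra *in_extra;
            struct vring_desc *desc;
            unsigned int i;

            in_extra = malloc(sizeof(*in_extra) +
                              sizeof(struct vring_desc) * total_sg);
            if (!in_extra)
                    return NULL;

            desc = (struct vring_desc *)(in_extra + 1);
            for (i = 0; i < total_sg; i++)
                    desc[i].next = i + 1; /* kernel wraps this in cpu_to_virtio16() */

            return in_extra;
    }

    int main(void)
    {
            struct desc_extra *in_extra = alloc_indirect_split(4);
            free(in_extra); /* one kfree() releases header and table */
            return in_extra ? 0 : 1;
    }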
From patchwork Mon Mar 25 08:54:24 2024
X-Patchwork-Submitter: Xuan Zhuo
X-Patchwork-Id: 13601784
From: Xuan Zhuo
To: virtualization@lists.linux.dev
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, "David S. Miller",
 Eric Dumazet, Jakub Kicinski, Paolo Abeni, netdev@vger.kernel.org
Subject: [PATCH vhost v5 06/10] virtio_ring: no store dma info when unmap is not needed
Date: Mon, 25 Mar 2024 16:54:24 +0800
Message-Id: <20240325085428.7275-7-xuanzhuo@linux.alibaba.com>
In-Reply-To: <20240325085428.7275-1-xuanzhuo@linux.alibaba.com>
References: <20240325085428.7275-1-xuanzhuo@linux.alibaba.com>
X-Git-Hash: 630d711f51f7

As discussed in:
http://lore.kernel.org/all/CACGkMEug-=C+VQhkMYSgUKMC==04m7-uem_yC21bgGkKZh845w@mail.gmail.com

When the vq is in premapped mode, it is best for the driver to manage the
DMA info itself. So this commit makes the virtio core stop storing the DMA
info and release the memory that was used to hold it. If use_dma_api is
false, that memory is not allocated in the first place.

Signed-off-by: Xuan Zhuo
---
 drivers/virtio/virtio_ring.c | 120 ++++++++++++++++++++++++++++-------
 1 file changed, 97 insertions(+), 23 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 980f81f5ab76..f67f4ac2d58f 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -69,23 +69,26 @@
 struct vring_desc_state_split {
 	void *data;			/* Data for callback. */
-	struct vring_desc_extra *indir_desc; /* Indirect descriptor, if any. */
+	struct vring_desc_dma *indir_desc; /* Indirect descriptor, if any. */
 };
 
 struct vring_desc_state_packed {
 	void *data;			/* Data for callback. */
-	struct vring_desc_extra *indir_desc; /* Indirect descriptor, if any. */
+	struct vring_desc_dma *indir_desc; /* Indirect descriptor, if any. */
 	u16 num;			/* Descriptor list length. */
 	u16 last;			/* The last desc state in a list. */
 };
 
 struct vring_desc_extra {
-	dma_addr_t addr;		/* Descriptor DMA addr. */
-	u32 len;			/* Descriptor length. */
 	u16 flags;			/* Descriptor flags. */
 	u16 next;			/* The next desc state in a list. */
 };
 
+struct vring_desc_dma {
+	dma_addr_t addr;		/* Descriptor DMA addr. */
+	u32 len;			/* Descriptor length. */
+};
+
 struct vring_virtqueue_split {
 	/* Actual memory layout for this queue. */
 	struct vring vring;
@@ -102,6 +105,7 @@ struct vring_virtqueue_split {
 	/* Per-descriptor state. */
 	struct vring_desc_state_split *desc_state;
 	struct vring_desc_extra *desc_extra;
+	struct vring_desc_dma *desc_dma;
 
 	/* DMA address and size information */
 	dma_addr_t queue_dma_addr;
@@ -142,6 +146,7 @@ struct vring_virtqueue_packed {
 	/* Per-descriptor state. */
 	struct vring_desc_state_packed *desc_state;
 	struct vring_desc_extra *desc_extra;
+	struct vring_desc_dma *desc_dma;
 
 	/* DMA address and size information */
 	dma_addr_t ring_dma_addr;
@@ -456,24 +461,25 @@ static unsigned int vring_unmap_one_split(const struct vring_virtqueue *vq,
 					  unsigned int i)
 {
 	struct vring_desc_extra *extra = vq->split.desc_extra;
+	struct vring_desc_dma *dma = vq->split.desc_dma;
 	u16 flags;
 
 	flags = extra[i].flags;
 
 	dma_unmap_page(vring_dma_dev(vq),
-		       extra[i].addr,
-		       extra[i].len,
+		       dma[i].addr,
+		       dma[i].len,
 		       (flags & VRING_DESC_F_WRITE) ?
 		       DMA_FROM_DEVICE : DMA_TO_DEVICE);
 
 	return extra[i].next;
 }
 
-static struct vring_desc_extra *alloc_indirect_split(struct virtqueue *_vq,
+static struct vring_desc_dma *alloc_indirect_split(struct virtqueue *_vq,
 						   unsigned int total_sg,
 						   gfp_t gfp)
 {
-	struct vring_desc_extra *in_extra;
+	struct vring_desc_dma *in_extra;
 	struct vring_desc *desc;
 	unsigned int i;
 	u32 size;
@@ -519,8 +525,11 @@ static inline unsigned int virtqueue_add_desc_split(struct virtqueue *vq,
 		next = extra[i].next;
 		desc[i].next = cpu_to_virtio16(vq->vdev, next);
 
-		extra[i].addr = addr;
-		extra[i].len = len;
+		if (vring->split.desc_dma) {
+			vring->split.desc_dma[i].addr = addr;
+			vring->split.desc_dma[i].len = len;
+		}
+
 		extra[i].flags = flags;
 	} else
 		next = virtio16_to_cpu(vq->vdev, desc[i].next);
@@ -538,7 +547,7 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
 				      gfp_t gfp)
 {
 	struct vring_virtqueue *vq = to_vvq(_vq);
-	struct vring_desc_extra *in_extra;
+	struct vring_desc_dma *in_extra;
 	struct scatterlist *sg;
 	struct vring_desc *desc;
 	unsigned int i, n, avail, descs_used, prev, err_idx;
@@ -792,7 +801,7 @@ static void detach_buf_split(struct vring_virtqueue *vq, unsigned int head,
 		if (ctx)
 			*ctx = vq->split.desc_state[head].indir_desc;
 	} else {
-		struct vring_desc_extra *in_extra;
+		struct vring_desc_dma *in_extra;
 		struct vring_desc *desc;
 		u32 num;
 
@@ -1059,6 +1068,23 @@ static void virtqueue_vring_attach_split(struct vring_virtqueue *vq,
 	vq->free_head = 0;
 }
 
+static int vring_alloc_dma_split(struct vring_virtqueue_split *vring_split,
+				 bool need_unmap)
+{
+	u32 num = vring_split->vring.num;
+	struct vring_desc_dma *dma;
+
+	if (!need_unmap)
+		return 0;
+
+	dma = kmalloc_array(num, sizeof(struct vring_desc_dma), GFP_KERNEL);
+	if (!dma)
+		return -ENOMEM;
+
+	vring_split->desc_dma = dma;
+	return 0;
+}
+
 static int vring_alloc_state_extra_split(struct vring_virtqueue_split *vring_split)
 {
 	struct vring_desc_state_split *state;
@@ -1095,6 +1121,7 @@ static void vring_free_split(struct vring_virtqueue_split *vring_split,
 
 	kfree(vring_split->desc_state);
 	kfree(vring_split->desc_extra);
+	kfree(vring_split->desc_dma);
 }
 
 static int vring_alloc_queue_split(struct vring_virtqueue_split *vring_split,
@@ -1196,6 +1223,10 @@ static int virtqueue_resize_split(struct virtqueue *_vq, u32 num)
 	if (err)
 		goto err_state_extra;
 
+	err = vring_alloc_dma_split(&vring_split, vring_need_unmap_buffer(vq));
+	if (err)
+		goto err_state_extra;
+
 	vring_free(&vq->vq);
 
 	virtqueue_vring_init_split(&vring_split, vq);
@@ -1228,14 +1259,16 @@ static u16 packed_last_used(u16 last_used_idx)
 
 /* caller must check vring_need_unmap_buffer() */
 static void vring_unmap_extra_packed(const struct vring_virtqueue *vq,
-				     const struct vring_desc_extra *extra)
+				     unsigned int i)
 {
+	const struct vring_desc_extra *extra = &vq->packed.desc_extra[i];
+	const struct vring_desc_dma *dma = &vq->packed.desc_dma[i];
 	u16 flags;
 
 	flags = extra->flags;
 
 	dma_unmap_page(vring_dma_dev(vq),
-		       extra->addr, extra->len,
+		       dma->addr, dma->len,
 		       (flags & VRING_DESC_F_WRITE) ?
 		       DMA_FROM_DEVICE : DMA_TO_DEVICE);
 }
@@ -1255,10 +1288,10 @@ static void vring_unmap_desc_packed(const struct vring_virtqueue *vq,
 			 DMA_FROM_DEVICE : DMA_TO_DEVICE);
 }
 
-static struct vring_desc_extra *alloc_indirect_packed(unsigned int total_sg,
+static struct vring_desc_dma *alloc_indirect_packed(unsigned int total_sg,
 						    gfp_t gfp)
 {
-	struct vring_desc_extra *in_extra;
+	struct vring_desc_dma *in_extra;
 	u32 size;
 
 	size = sizeof(*in_extra) + sizeof(struct vring_packed_desc) * total_sg;
@@ -1284,7 +1317,7 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
 					 void *data,
 					 gfp_t gfp)
 {
-	struct vring_desc_extra *in_extra;
+	struct vring_desc_dma *in_extra;
 	struct vring_packed_desc *desc;
 	struct scatterlist *sg;
 	unsigned int i, n, err_idx;
@@ -1483,8 +1516,8 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
 			desc[i].id = cpu_to_le16(id);
 
 			if (vring_need_unmap_buffer(vq)) {
-				vq->packed.desc_extra[curr].addr = addr;
-				vq->packed.desc_extra[curr].len = sg->length;
+				vq->packed.desc_dma[curr].addr = addr;
+				vq->packed.desc_dma[curr].len = sg->length;
 			}
 
 			vq->packed.desc_extra[curr].flags = le16_to_cpu(flags);
@@ -1543,7 +1576,7 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
 	for (n = 0; n < total_sg; n++) {
 		if (i == err_idx)
 			break;
-		vring_unmap_extra_packed(vq, &vq->packed.desc_extra[curr]);
+		vring_unmap_extra_packed(vq, curr);
 		curr = vq->packed.desc_extra[curr].next;
 		i++;
 		if (i >= vq->packed.vring.num)
@@ -1624,8 +1657,7 @@ static void detach_buf_packed(struct vring_virtqueue *vq,
 		if (vring_need_unmap_buffer(vq)) {
 			curr = id;
 			for (i = 0; i < state->num; i++) {
-				vring_unmap_extra_packed(vq,
-							 &vq->packed.desc_extra[curr]);
+				vring_unmap_extra_packed(vq, curr);
 				curr = vq->packed.desc_extra[curr].next;
 			}
 		}
@@ -1633,7 +1665,7 @@ static void detach_buf_packed(struct vring_virtqueue *vq,
 		if (ctx)
 			*ctx = state->indir_desc;
 	} else {
-		struct vring_desc_extra *in_extra;
+		struct vring_desc_dma *in_extra;
 		struct vring_packed_desc *desc;
 		u32 num;
 
@@ -1943,6 +1975,7 @@ static void vring_free_packed(struct vring_virtqueue_packed *vring_packed,
 
 	kfree(vring_packed->desc_state);
 	kfree(vring_packed->desc_extra);
+	kfree(vring_packed->desc_dma);
 }
 
 static int vring_alloc_queue_packed(struct vring_virtqueue_packed *vring_packed,
@@ -1999,6 +2032,23 @@ static int vring_alloc_queue_packed(struct vring_virtqueue_packed *vring_packed,
 	return -ENOMEM;
 }
 
+static int vring_alloc_dma_packed(struct vring_virtqueue_packed *vring_packed,
+				  bool need_unmap)
+{
+	u32 num = vring_packed->vring.num;
+	struct vring_desc_dma *dma;
+
+	if (!need_unmap)
+		return 0;
+
+	dma = kmalloc_array(num, sizeof(struct vring_desc_dma), GFP_KERNEL);
+	if (!dma)
+		return -ENOMEM;
+
+	vring_packed->desc_dma = dma;
+	return 0;
+}
+
 static int vring_alloc_state_extra_packed(struct vring_virtqueue_packed *vring_packed)
 {
 	struct vring_desc_state_packed *state;
@@ -2111,6 +2161,10 @@ static struct virtqueue *vring_create_virtqueue_packed(struct virtio_device *vdev,
 	if (err)
 		goto err_state_extra;
 
+	err = vring_alloc_dma_packed(&vring_packed, vring_need_unmap_buffer(vq));
+	if (err)
+		goto err_state_extra;
+
 	virtqueue_vring_init_packed(&vring_packed, !!cfg_vq_val(cfg, callbacks));
 
 	virtqueue_init(vq, tp_cfg->num);
@@ -2143,6 +2197,10 @@ static int virtqueue_resize_packed(struct virtqueue *_vq, u32 num)
 	if (err)
 		goto err_state_extra;
 
+	err = vring_alloc_dma_packed(&vring_packed, vring_need_unmap_buffer(vq));
+	if (err)
+		goto err_state_extra;
+
 	vring_free(&vq->vq);
 
 	virtqueue_vring_init_packed(&vring_packed, !!vq->vq.callback);
@@ -2653,6 +2711,12 @@ static struct virtqueue *__vring_new_virtqueue(struct virtio_device *vdev,
 		return NULL;
 	}
 
+	err = vring_alloc_dma_split(vring_split, vring_need_unmap_buffer(vq));
+	if (err) {
+		kfree(vq);
+		return NULL;
+	}
+
 	virtqueue_vring_init_split(vring_split, vq);
 
 	virtqueue_init(vq, vring_split->vring.num);
@@ -2770,6 +2834,14 @@ int virtqueue_set_dma_premapped(struct virtqueue *_vq)
 
 	vq->premapped = true;
 
+	if (vq->packed_ring) {
+		kfree(vq->packed.desc_dma);
+		vq->packed.desc_dma = NULL;
+	} else {
+		kfree(vq->split.desc_dma);
+		vq->split.desc_dma = NULL;
+	}
+
 	END_USE(vq);
 
 	return 0;
@@ -2854,6 +2926,7 @@ static void vring_free(struct virtqueue *_vq)
 
 			kfree(vq->packed.desc_state);
 			kfree(vq->packed.desc_extra);
+			kfree(vq->packed.desc_dma);
 		} else {
 			vring_free_queue(vq->vq.vdev,
 					 vq->split.queue_size_in_bytes,
@@ -2865,6 +2938,7 @@ static void vring_free(struct virtqueue *_vq)
 	if (!vq->packed_ring) {
 		kfree(vq->split.desc_state);
 		kfree(vq->split.desc_extra);
+		kfree(vq->split.desc_dma);
 	}
 }
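[Editor's illustration] The memory effect of this patch can be read off the
structure sizes alone: the always-allocated per-descriptor state shrinks
from roughly 16 bytes to 4, and the DMA half exists only when the core
really needs to unmap. A user-space sketch; the kernel types dma_addr_t,
u16 and u32 are replaced with fixed-width equivalents:

    #include <stdint.h>
    #include <stdio.h>

    /* Before: one array carries everything, always allocated. */
    struct desc_extra_old {
            uint64_t addr;  /* dma_addr_t */
            uint32_t len;
            uint16_t flags;
            uint16_t next;
    };

    /* After: the ring-walking state stays... */
    struct vring_desc_extra {
            uint16_t flags;
            uint16_t next;
    };

    /* ...and the DMA info moves to an optional companion array. */
    struct vring_desc_dma {
            uint64_t addr;  /* dma_addr_t */
            uint32_t len;
    };

    int main(void)
    {
            unsigned int num = 256; /* a typical queue size */

            printf("always allocated before: %zu bytes\n",
                   num * sizeof(struct desc_extra_old));
            printf("always allocated after:  %zu bytes\n",
                   num * sizeof(struct vring_desc_extra));
            printf("only when unmap needed:  %zu bytes\n",
                   num * sizeof(struct vring_desc_dma));
            return 0;
    }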
From patchwork Mon Mar 25 08:54:25 2024
X-Patchwork-Id: 13601791
From: Xuan Zhuo
To: virtualization@lists.linux.dev
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, netdev@vger.kernel.org
Subject: [PATCH vhost v5 07/10] virtio: find_vqs: add new parameter premapped
Date: Mon, 25 Mar 2024 16:54:25 +0800
Message-Id: <20240325085428.7275-8-xuanzhuo@linux.alibaba.com>
In-Reply-To: <20240325085428.7275-1-xuanzhuo@linux.alibaba.com>
X-Git-Hash: 630d711f51f7

If premapped mode is enabled, the DMA array (struct vring_desc_dma) of the
virtio core is not allocated; that decision is made when find_vqs() is
called. To avoid allocating the DMA array in find_vqs() only to release it
again immediately via virtqueue_set_dma_premapped(), this patch introduces
a new parameter to find_vqs(), so that find_vqs() itself can decide
whether the DMA array (struct vring_desc_dma) needs to be allocated.

The driver must check the premapped mode of every vq after find_vqs().

Signed-off-by: Xuan Zhuo
---
 drivers/virtio/virtio_ring.c  | 4 ++--
 include/linux/virtio_config.h | 1 +
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index f67f4ac2d58f..b0a715f23f17 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -2148,7 +2148,7 @@ static struct virtqueue *vring_create_virtqueue_packed(struct virtio_device *vde
 	vq->packed_ring = true;
 	vq->dma_dev = dma_dev;
 	vq->use_dma_api = vring_use_dma_api(vdev);
-	vq->premapped = false;
+	vq->premapped = vq->use_dma_api && cfg_vq_get(cfg, premapped);

 	vq->indirect = virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC) &&
 		!cfg_vq_get(cfg, ctx);
@@ -2696,7 +2696,7 @@ static struct virtqueue *__vring_new_virtqueue(struct virtio_device *vdev,
 #endif
 	vq->dma_dev = tp_cfg->dma_dev;
 	vq->use_dma_api = vring_use_dma_api(vdev);
-	vq->premapped = false;
+	vq->premapped = vq->use_dma_api && cfg_vq_get(cfg, premapped);

 	vq->indirect = virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC) &&
 		!cfg_vq_get(cfg, ctx);
diff --git a/include/linux/virtio_config.h b/include/linux/virtio_config.h
index d47188303d34..f1f62e57f395 100644
--- a/include/linux/virtio_config.h
+++ b/include/linux/virtio_config.h
@@ -107,6 +107,7 @@ struct virtio_vq_config {
 	vq_callback_t **callbacks;
 	const char **names;
 	const bool *ctx;
+	const bool *premapped;

 	struct irq_affinity *desc;
 };
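With this change a driver opts in per vq by passing a bool array through struct virtio_vq_config. A minimal sketch of the calling convention; only the virtio_vq_config fields come from this patch, the driver-side names (demo_find_vqs, demo_recv_done, the "rx"/"tx" names) are hypothetical:

	static void demo_recv_done(struct virtqueue *vq);

	static int demo_find_vqs(struct virtio_device *vdev, struct virtqueue *vqs[2])
	{
		vq_callback_t *callbacks[2] = { demo_recv_done, NULL };
		const char *names[2] = { "rx", "tx" };
		bool premapped[2] = { true, false };	/* request premapped for rx only */
		struct virtio_vq_config cfg = {};

		cfg.nvqs = 2;
		cfg.vqs = vqs;
		cfg.callbacks = callbacks;
		cfg.names = names;
		cfg.premapped = premapped;	/* NULL means: no premapped request */

		/* The core only tries to honour the request (it also requires the
		 * DMA API), so the caller must re-check each vq afterwards. */
		return vdev->config->find_vqs(vdev, &cfg);
	}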
From patchwork Mon Mar 25 08:54:26 2024
X-Patchwork-Id: 13601792
From: Xuan Zhuo
To: virtualization@lists.linux.dev
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, netdev@vger.kernel.org
Subject: [PATCH vhost v5 08/10] virtio_ring: export premapped to driver by struct virtqueue
Date: Mon, 25 Mar 2024 16:54:26 +0800
Message-Id: <20240325085428.7275-9-xuanzhuo@linux.alibaba.com>
In-Reply-To: <20240325085428.7275-1-xuanzhuo@linux.alibaba.com>
X-Git-Hash: 630d711f51f7

Export premapped to drivers so they can check a vq's premapped mode after
find_vqs(). Because find_vqs() only tries to enable premapped mode, the
driver must verify the result after find_vqs().

Signed-off-by: Xuan Zhuo
---
 drivers/virtio/virtio_ring.c | 13 +++++--------
 include/linux/virtio.h       |  1 +
 2 files changed, 6 insertions(+), 8 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index b0a715f23f17..86a60c720a62 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -177,9 +177,6 @@ struct vring_virtqueue {
 	/* Host publishes avail event idx */
 	bool event;

-	/* Do DMA mapping by driver */
-	bool premapped;
-
 	/* Head of free buffer list. */
 	unsigned int free_head;
 	/* Number we've added since last sync. */
@@ -297,7 +294,7 @@ static bool vring_use_dma_api(const struct virtio_device *vdev)

 static bool vring_need_unmap_buffer(const struct vring_virtqueue *vring)
 {
-	return vring->use_dma_api && !vring->premapped;
+	return vring->use_dma_api && !vring->vq.premapped;
 }

 size_t virtio_max_dma_size(const struct virtio_device *vdev)
@@ -369,7 +366,7 @@ static struct device *vring_dma_dev(const struct vring_virtqueue *vq)
 static int vring_map_one_sg(const struct vring_virtqueue *vq, struct scatterlist *sg,
 			    enum dma_data_direction direction, dma_addr_t *addr)
 {
-	if (vq->premapped) {
+	if (vq->vq.premapped) {
 		*addr = sg_dma_address(sg);
 		return 0;
 	}
@@ -2148,7 +2145,7 @@ static struct virtqueue *vring_create_virtqueue_packed(struct virtio_device *vde
 	vq->packed_ring = true;
 	vq->dma_dev = dma_dev;
 	vq->use_dma_api = vring_use_dma_api(vdev);
-	vq->premapped = vq->use_dma_api && cfg_vq_get(cfg, premapped);
+	vq->vq.premapped = vq->use_dma_api && cfg_vq_get(cfg, premapped);

 	vq->indirect = virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC) &&
 		!cfg_vq_get(cfg, ctx);
@@ -2696,7 +2693,7 @@ static struct virtqueue *__vring_new_virtqueue(struct virtio_device *vdev,
 #endif
 	vq->dma_dev = tp_cfg->dma_dev;
 	vq->use_dma_api = vring_use_dma_api(vdev);
-	vq->premapped = vq->use_dma_api && cfg_vq_get(cfg, premapped);
+	vq->vq.premapped = vq->use_dma_api && cfg_vq_get(cfg, premapped);

 	vq->indirect = virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC) &&
 		!cfg_vq_get(cfg, ctx);
@@ -2832,7 +2829,7 @@ int virtqueue_set_dma_premapped(struct virtqueue *_vq)
 		return -EINVAL;
 	}

-	vq->premapped = true;
+	vq->vq.premapped = true;

 	if (vq->packed_ring) {
 		kfree(vq->packed.desc_dma);
diff --git a/include/linux/virtio.h b/include/linux/virtio.h
index b0201747a263..407277d5a16b 100644
--- a/include/linux/virtio.h
+++ b/include/linux/virtio.h
@@ -36,6 +36,7 @@ struct virtqueue {
 	unsigned int num_free;
 	unsigned int num_max;
 	bool reset;
+	bool premapped;
 	void *priv;
 };
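Since struct virtqueue now carries the flag, the post-find_vqs() check reduces to reading vqs[i]->premapped. A hedged sketch of that verification step (demo_check_premapped and the req array are illustrative, not from the patch):

	/* Verify which premapped requests the core honoured; a request can
	 * be refused, e.g. when the DMA API is not in use for this device. */
	static void demo_check_premapped(struct virtio_device *vdev,
					 struct virtqueue *vqs[],
					 const bool *req, int nvqs)
	{
		int i;

		for (i = 0; i < nvqs; i++) {
			if (req[i] && !vqs[i]->premapped)
				dev_info(&vdev->dev, "vq %d stays core-mapped\n", i);
		}
	}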
From patchwork Mon Mar 25 08:54:27 2024
X-Patchwork-Id: 13601790
X-Patchwork-Delegate: kuba@kernel.org
From: Xuan Zhuo
To: virtualization@lists.linux.dev
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, netdev@vger.kernel.org
Subject: [PATCH vhost v5 09/10] virtio_net: set premapped mode by find_vqs()
Date: Mon, 25 Mar 2024 16:54:27 +0800
Message-Id: <20240325085428.7275-10-xuanzhuo@linux.alibaba.com>
In-Reply-To: <20240325085428.7275-1-xuanzhuo@linux.alibaba.com>
X-Git-Hash: 630d711f51f7

Now the virtio core can set premapped mode via find_vqs(). If premapped
mode can be enabled, the DMA array is not allocated. So virtio-net uses
the find_vqs() API to enable premapped mode, and checks vq->premapped
instead of keeping a driver-local do_dma flag.

Signed-off-by: Xuan Zhuo
---
 drivers/net/virtio_net.c      | 57 +++++++++++++++++------------------
 include/linux/virtio_config.h | 16 ++--------
 2 files changed, 29 insertions(+), 44 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index c22d1118a133..107aef2c9458 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -213,9 +213,6 @@ struct receive_queue {
 	/* Record the last dma info to free after new pages is allocated. */
 	struct virtnet_rq_dma *last_dma;
-
-	/* Do dma by self */
-	bool do_dma;
 };

 /* This structure can contain rss message with maximum settings for indirection table and keysize
@@ -707,7 +704,7 @@ static void *virtnet_rq_get_buf(struct receive_queue *rq, u32 *len, void **ctx)
 	void *buf;

 	buf = virtqueue_get_buf_ctx(rq->vq, len, ctx);
-	if (buf && rq->do_dma)
+	if (buf && rq->vq->premapped)
 		virtnet_rq_unmap(rq, buf, *len);

 	return buf;
@@ -720,7 +717,7 @@ static void virtnet_rq_init_one_sg(struct receive_queue *rq, void *buf, u32 len)
 	u32 offset;
 	void *head;

-	if (!rq->do_dma) {
+	if (!rq->vq->premapped) {
 		sg_init_one(rq->sg, buf, len);
 		return;
 	}
@@ -750,7 +747,7 @@ static void *virtnet_rq_alloc(struct receive_queue *rq, u32 size, gfp_t gfp)

 	head = page_address(alloc_frag->page);

-	if (rq->do_dma) {
+	if (rq->vq->premapped) {
 		dma = head;

 		/* new pages */
@@ -796,22 +793,6 @@ static void *virtnet_rq_alloc(struct receive_queue *rq, u32 size, gfp_t gfp)
 	return buf;
 }

-static void virtnet_rq_set_premapped(struct virtnet_info *vi)
-{
-	int i;
-
-	/* disable for big mode */
-	if (!vi->mergeable_rx_bufs && vi->big_packets)
-		return;
-
-	for (i = 0; i < vi->max_queue_pairs; i++) {
-		if (virtqueue_set_dma_premapped(vi->rq[i].vq))
-			continue;
-
-		vi->rq[i].do_dma = true;
-	}
-}
-
 static void virtnet_rq_unmap_free_buf(struct virtqueue *vq, void *buf)
 {
 	struct virtnet_info *vi = vq->vdev->priv;
@@ -820,7 +801,7 @@ static void virtnet_rq_unmap_free_buf(struct virtqueue *vq, void *buf)

 	rq = &vi->rq[i];

-	if (rq->do_dma)
+	if (rq->vq->premapped)
 		virtnet_rq_unmap(rq, buf, 0);

 	virtnet_rq_free_buf(vi, rq, buf);
@@ -1881,7 +1862,7 @@ static int add_recvbuf_small(struct virtnet_info *vi, struct receive_queue *rq,

 	err = virtqueue_add_inbuf_ctx(rq->vq, rq->sg, 1, buf, ctx, gfp);
 	if (err < 0) {
-		if (rq->do_dma)
+		if (rq->vq->premapped)
 			virtnet_rq_unmap(rq, buf, 0);
 		put_page(virt_to_head_page(buf));
 	}
@@ -1996,7 +1977,7 @@ static int add_recvbuf_mergeable(struct virtnet_info *vi,
 	ctx = mergeable_len_to_ctx(len + room, headroom);
 	err = virtqueue_add_inbuf_ctx(rq->vq, rq->sg, 1, buf, ctx, gfp);
 	if (err < 0) {
-		if (rq->do_dma)
+		if (rq->vq->premapped)
 			virtnet_rq_unmap(rq, buf, 0);
 		put_page(virt_to_head_page(buf));
 	}
@@ -4271,7 +4252,7 @@ static void free_receive_page_frags(struct virtnet_info *vi)
 	int i;
 	for (i = 0; i < vi->max_queue_pairs; i++)
 		if (vi->rq[i].alloc_frag.page) {
-			if (vi->rq[i].do_dma && vi->rq[i].last_dma)
+			if (vi->rq[i].vq->premapped && vi->rq[i].last_dma)
 				virtnet_rq_unmap(&vi->rq[i], vi->rq[i].last_dma, 0);
 			put_page(vi->rq[i].alloc_frag.page);
 		}
@@ -4335,11 +4316,13 @@ static unsigned int mergeable_min_buf_len(struct virtnet_info *vi, struct virtqu

 static int virtnet_find_vqs(struct virtnet_info *vi)
 {
+	struct virtio_vq_config cfg = {};
 	vq_callback_t **callbacks;
 	struct virtqueue **vqs;
 	const char **names;
 	int ret = -ENOMEM;
 	int total_vqs;
+	bool *premapped;
 	bool *ctx;
 	u16 i;

@@ -4364,8 +4347,13 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
 		ctx = kcalloc(total_vqs, sizeof(*ctx), GFP_KERNEL);
 		if (!ctx)
 			goto err_ctx;
+
+		premapped = kcalloc(total_vqs, sizeof(*premapped), GFP_KERNEL);
+		if (!premapped)
+			goto err_premapped;
 	} else {
 		ctx = NULL;
+		premapped = NULL;
 	}

 	/* Parameters for control virtqueue, if any */
@@ -4384,10 +4372,19 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
 		names[txq2vq(i)] = vi->sq[i].name;
 		if (ctx)
 			ctx[rxq2vq(i)] = true;
+
+		if (premapped)
+			premapped[rxq2vq(i)] = true;
 	}

-	ret = virtio_find_vqs_ctx(vi->vdev, total_vqs, vqs, callbacks,
-				  names, ctx, NULL);
+	cfg.nvqs = total_vqs;
+	cfg.vqs = vqs;
+	cfg.callbacks = callbacks;
+	cfg.names = names;
+	cfg.ctx = ctx;
+	cfg.premapped = premapped;
+
+	ret = virtio_find_vqs_cfg(vi->vdev, &cfg);
 	if (ret)
 		goto err_find;

@@ -4407,6 +4404,8 @@ static int virtnet_find_vqs(struct virtnet_info *vi)

 err_find:
+	kfree(premapped);
+err_premapped:
 	kfree(ctx);
 err_ctx:
 	kfree(names);
@@ -4479,8 +4478,6 @@ static int init_vqs(struct virtnet_info *vi)
 	if (ret)
 		goto err_free;

-	virtnet_rq_set_premapped(vi);
-
 	cpus_read_lock();
 	virtnet_set_affinity(vi);
 	cpus_read_unlock();
diff --git a/include/linux/virtio_config.h b/include/linux/virtio_config.h
index f1f62e57f395..e40509fef5fe 100644
--- a/include/linux/virtio_config.h
+++ b/include/linux/virtio_config.h
@@ -260,21 +260,9 @@ int virtio_find_vqs(struct virtio_device *vdev, unsigned nvqs,
 }

 static inline
-int virtio_find_vqs_ctx(struct virtio_device *vdev, unsigned nvqs,
-			struct virtqueue *vqs[], vq_callback_t *callbacks[],
-			const char * const names[], const bool *ctx,
-			struct irq_affinity *desc)
+int virtio_find_vqs_cfg(struct virtio_device *vdev, struct virtio_vq_config *cfg)
 {
-	struct virtio_vq_config cfg = {};
-
-	cfg.nvqs = nvqs;
-	cfg.vqs = vqs;
-	cfg.callbacks = callbacks;
-	cfg.names = (const char **)names;
-	cfg.ctx = ctx;
-	cfg.desc = desc;
-
-	return vdev->config->find_vqs(vdev, &cfg);
+	return vdev->config->find_vqs(vdev, cfg);
 }

 /**
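For drivers with simpler needs, the new virtio_find_vqs_cfg() helper keeps call sites short. A hedged sketch of a hypothetical single-vq driver using it (demo_probe_vq and the "requests" name are illustrative; only virtio_find_vqs_cfg() and struct virtio_vq_config come from this patch):

	static int demo_probe_vq(struct virtio_device *vdev, struct virtqueue **vq,
				 vq_callback_t *cb)
	{
		struct virtqueue *vqs[1];
		vq_callback_t *callbacks[1] = { cb };
		const char *names[1] = { "requests" };
		struct virtio_vq_config cfg = {};
		int ret;

		cfg.nvqs = 1;
		cfg.vqs = vqs;
		cfg.callbacks = callbacks;
		cfg.names = names;
		/* ctx, premapped and desc stay NULL: defaults, no premapped request */

		ret = virtio_find_vqs_cfg(vdev, &cfg);
		if (ret)
			return ret;

		*vq = vqs[0];
		return 0;
	}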
From patchwork Mon Mar 25 08:54:28 2024
X-Patchwork-Id: 13601782
From: Xuan Zhuo
To: virtualization@lists.linux.dev
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, netdev@vger.kernel.org
Subject: [PATCH vhost v5 10/10] virtio_ring: virtqueue_set_dma_premapped support disable
Date: Mon, 25 Mar 2024 16:54:28 +0800
Message-Id: <20240325085428.7275-11-xuanzhuo@linux.alibaba.com>
In-Reply-To: <20240325085428.7275-1-xuanzhuo@linux.alibaba.com>
X-Git-Hash: 630d711f51f7

Currently, the virtqueue_set_dma_premapped() API only supports enabling
premapped mode. If we allow premapped mode to be enabled dynamically, this
API should also support disabling it.

Signed-off-by: Xuan Zhuo
---
 drivers/virtio/virtio_ring.c | 39 +++++++++++++++++++++++++++---------
 include/linux/virtio.h       |  2 +-
 2 files changed, 31 insertions(+), 10 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 86a60c720a62..6ddabf280218 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -2792,6 +2792,7 @@ EXPORT_SYMBOL_GPL(virtqueue_resize);
 /**
  * virtqueue_set_dma_premapped - set the vring premapped mode
  * @_vq: the struct virtqueue we're talking about.
+ * @premapped: enable/disable the premapped mode.
  *
  * Enable the premapped mode of the vq.
  *
@@ -2808,11 +2809,15 @@ EXPORT_SYMBOL_GPL(virtqueue_resize);
  *
  * Returns zero or a negative error.
  * 0: success.
- * -EINVAL: vring does not use the dma api, so we can not enable premapped mode.
+ * -EINVAL:
+ *	vring does not use the dma api, so we can not enable premapped mode.
+ *	Or some descs are in use: this was not called immediately after
+ *	creating the vq, or after a vq reset.
  */
-int virtqueue_set_dma_premapped(struct virtqueue *_vq)
+int virtqueue_set_dma_premapped(struct virtqueue *_vq, bool premapped)
 {
 	struct vring_virtqueue *vq = to_vvq(_vq);
+	int err = 0;
 	u32 num;

 	START_USE(vq);
@@ -2824,24 +2829,40 @@ int virtqueue_set_dma_premapped(struct virtqueue *_vq)
 		return -EINVAL;
 	}

+	if (vq->vq.premapped == premapped) {
+		END_USE(vq);
+		return 0;
+	}
+
 	if (!vq->use_dma_api) {
 		END_USE(vq);
 		return -EINVAL;
 	}

-	vq->vq.premapped = true;
+	if (premapped) {
+		vq->vq.premapped = true;
+
+		if (vq->packed_ring) {
+			kfree(vq->packed.desc_dma);
+			vq->packed.desc_dma = NULL;
+		} else {
+			kfree(vq->split.desc_dma);
+			vq->split.desc_dma = NULL;
+		}

-	if (vq->packed_ring) {
-		kfree(vq->packed.desc_dma);
-		vq->packed.desc_dma = NULL;
 	} else {
-		kfree(vq->split.desc_dma);
-		vq->split.desc_dma = NULL;
+		if (vq->packed_ring)
+			err = vring_alloc_dma_packed(&vq->packed, false);
+		else
+			err = vring_alloc_dma_split(&vq->split, false);
+
+		if (!err)
+			vq->vq.premapped = false;
 	}

 	END_USE(vq);

-	return 0;
+	return err;
 }
 EXPORT_SYMBOL_GPL(virtqueue_set_dma_premapped);
diff --git a/include/linux/virtio.h b/include/linux/virtio.h
index 407277d5a16b..4b338590abf4 100644
--- a/include/linux/virtio.h
+++ b/include/linux/virtio.h
@@ -82,7 +82,7 @@ bool virtqueue_enable_cb(struct virtqueue *vq);

 unsigned virtqueue_enable_cb_prepare(struct virtqueue *vq);

-int virtqueue_set_dma_premapped(struct virtqueue *_vq);
+int virtqueue_set_dma_premapped(struct virtqueue *_vq, bool premapped);

 bool virtqueue_poll(struct virtqueue *vq, unsigned);
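With the bool parameter in place, a driver can flip a vq between the two modes at quiescent points (immediately after creation, or after a vq reset, per the kernel-doc above). A hedged usage sketch; the surrounding driver context is hypothetical, only virtqueue_set_dma_premapped() is from this patch:

	/* Enable premapped mode right after the vq is obtained. */
	err = virtqueue_set_dma_premapped(vq, true);
	if (err)
		return err;	/* -EINVAL: no DMA API, or descs already in use */

	/* ... later, at another quiescent point, hand DMA back to the core.
	 * Disabling can fail (e.g. -ENOMEM), because the core must
	 * reallocate its desc_dma array; on failure the vq simply stays
	 * in premapped mode. */
	err = virtqueue_set_dma_premapped(vq, false);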