From patchwork Thu Feb 24 08:10:45 2022
X-Patchwork-Submitter: Xuan Zhuo
X-Patchwork-Id: 12757924
X-Patchwork-Delegate: kuba@kernel.org
From: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
To: virtualization@lists.linux-foundation.org, netdev@vger.kernel.org
Cc: Jeff Dike, Richard Weinberger, Anton Ivanov, "Michael S. Tsirkin",
    Jason Wang, "David S. Miller", Jakub Kicinski, Hans de Goede,
    Mark Gross, Vadim Pasternak, Bjorn Andersson, Mathieu Poirier,
    Cornelia Huck, Halil Pasic, Heiko Carstens, Vasily Gorbik,
    Christian Borntraeger, Alexander Gordeev, Sven Schnelle,
    Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
    John Fastabend, Johannes Berg, Vincent Whitchurch, Xuan Zhuo,
    linux-um@lists.infradead.org, platform-driver-x86@vger.kernel.org,
    linux-remoteproc@vger.kernel.org, linux-s390@vger.kernel.org,
    kvm@vger.kernel.org, bpf@vger.kernel.org
Subject: [PATCH v6 09/26] virtio_ring: split: implement virtqueue_reset_vring_split()
Date: Thu, 24 Feb 2022 16:10:45 +0800
Message-Id: <20220224081102.80224-10-xuanzhuo@linux.alibaba.com>
X-Mailer: git-send-email 2.31.0
In-Reply-To: <20220224081102.80224-1-xuanzhuo@linux.alibaba.com>
References: <20220224081102.80224-1-xuanzhuo@linux.alibaba.com>
X-Git-Hash: bd1c915e263f
X-Mailing-List: netdev@vger.kernel.org

virtio ring supports reset.

Queue reset is divided into several stages:

1. notify the device of the queue reset
2. release the vring
3. attach a new vring
4. notify the device to re-enable the queue

After the first step has completed, the vring reset operation can be
performed. If the newly requested vring num is unchanged, only the
vq-related state is reset. Otherwise, the old vring is released, a new
vring is allocated, and that vring is attached to the vq.

If this process fails, the function returns with the vq left in the
vring-release state; the function can be called again to reallocate the
vring.

In addition, vring_align and may_reduce_num are needed to reallocate the
vring, so they are retained when the vq is created.
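To make the four stages above concrete, here is a minimal driver-side sketch
of the intended ordering. my_resize_queue(), my_dev_reset_queue() and
my_dev_enable_queue() are hypothetical placeholders for the transport-specific
notifications of steps 1 and 4 (they are not part of this patch), and
virtqueue_reset_vring_split() is still static to virtio_ring.c at this point
in the series, so treat this purely as an illustration of the flow, not a
usable API.

/*
 * Illustration only (not part of this patch): the four reset stages
 * as seen from a driver.  my_dev_reset_queue()/my_dev_enable_queue()
 * are hypothetical stand-ins for the transport-specific notifications
 * of steps 1 and 4; virtqueue_reset_vring_split() covers steps 2 and 3.
 */
static int my_resize_queue(struct virtqueue *vq, u32 new_num)
{
	int err;

	/* 1. notify the device that this queue is being reset */
	err = my_dev_reset_queue(vq);
	if (err)
		return err;

	/* 2 + 3. release the old vring, attach a new one with new_num entries */
	err = virtqueue_reset_vring_split(vq, new_num);
	if (err)
		return err;

	/* 4. notify the device to re-enable the queue */
	return my_dev_enable_queue(vq);
}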
Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/virtio/virtio_ring.c | 69 ++++++++++++++++++++++++++++++++++++
 1 file changed, 69 insertions(+)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 0b5360052ac2..a2e771263ea7 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -158,6 +158,12 @@ struct vring_virtqueue {
 			/* DMA address and size information */
 			dma_addr_t queue_dma_addr;
 			size_t queue_size_in_bytes;
+
+			/* The parameters for creating vrings are reserved for
+			 * creating new vrings when enabling reset queue.
+			 */
+			u32 vring_align;
+			bool may_reduce_num;
 		} split;
 
 		/* Available for packed ring */
@@ -217,6 +223,12 @@ struct vring_virtqueue {
 #endif
 };
 
+static void vring_free(struct virtqueue *vq);
+static void __vring_virtqueue_init_split(struct vring_virtqueue *vq,
+					 struct virtio_device *vdev);
+static int __vring_virtqueue_attach_split(struct vring_virtqueue *vq,
+					  struct virtio_device *vdev,
+					  struct vring vring);
 
 /*
  * Helpers.
@@ -1012,6 +1024,8 @@ static struct virtqueue *vring_create_virtqueue_split(
 		return NULL;
 	}
 
+	to_vvq(vq)->split.vring_align = vring_align;
+	to_vvq(vq)->split.may_reduce_num = may_reduce_num;
 	to_vvq(vq)->split.queue_dma_addr = vring.dma_addr;
 	to_vvq(vq)->split.queue_size_in_bytes = vring.queue_size_in_bytes;
 	to_vvq(vq)->we_own_ring = true;
@@ -1019,6 +1033,59 @@ static struct virtqueue *vring_create_virtqueue_split(
 	return vq;
 }
 
+static int virtqueue_reset_vring_split(struct virtqueue *_vq, u32 num)
+{
+	struct vring_virtqueue *vq = to_vvq(_vq);
+	struct virtio_device *vdev = _vq->vdev;
+	struct vring_split vring;
+	int err;
+
+	if (num > _vq->num_max)
+		return -E2BIG;
+
+	switch (vq->vq.reset) {
+	case VIRTIO_VQ_RESET_STEP_NONE:
+		return -ENOENT;
+
+	case VIRTIO_VQ_RESET_STEP_VRING_ATTACH:
+	case VIRTIO_VQ_RESET_STEP_DEVICE:
+		if (vq->split.vring.num == num || !num)
+			break;
+
+		vring_free(_vq);
+
+		fallthrough;
+
+	case VIRTIO_VQ_RESET_STEP_VRING_RELEASE:
+		if (!num)
+			num = vq->split.vring.num;
+
+		err = vring_create_vring_split(&vring, vdev,
+					       vq->split.vring_align,
+					       vq->weak_barriers,
+					       vq->split.may_reduce_num, num);
+		if (err)
+			return -ENOMEM;
+
+		err = __vring_virtqueue_attach_split(vq, vdev, vring.vring);
+		if (err) {
+			vring_free_queue(vdev, vring.queue_size_in_bytes,
+					 vring.queue,
+					 vring.dma_addr);
+			return -ENOMEM;
+		}
+
+		vq->split.queue_dma_addr = vring.dma_addr;
+		vq->split.queue_size_in_bytes = vring.queue_size_in_bytes;
+	}
+
+	__vring_virtqueue_init_split(vq, vdev);
+	vq->we_own_ring = true;
+	vq->vq.reset = VIRTIO_VQ_RESET_STEP_VRING_ATTACH;
+
+	return 0;
+}
+
 
 /*
  * Packed ring specific functions - *_packed().
@@ -2317,6 +2384,8 @@ static int __vring_virtqueue_attach_split(struct vring_virtqueue *vq,
 static void __vring_virtqueue_init_split(struct vring_virtqueue *vq,
 					 struct virtio_device *vdev)
 {
+	vq->vq.reset = VIRTIO_VQ_RESET_STEP_NONE;
+
 	vq->packed_ring = false;
 	vq->we_own_ring = false;
 	vq->broken = false;
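A note on the failure path described in the commit message: when allocation or
attach fails, the vq is left in the vring-release state, so the call can simply
be repeated, for example with a smaller ring under memory pressure. Below is a
minimal retry sketch; my_reset_vring_retry() is an illustrative name, and as
with the earlier sketch it assumes the function were callable from outside
virtio_ring.c, which this patch does not yet provide.

/*
 * Illustration only: retry with progressively smaller rings on -ENOMEM.
 * On failure the vq stays in VIRTIO_VQ_RESET_STEP_VRING_RELEASE, so the
 * call can be repeated; num == 0 would request the previous ring size.
 */
static int my_reset_vring_retry(struct virtqueue *vq, u32 num)
{
	int err;

	while (num) {
		err = virtqueue_reset_vring_split(vq, num);
		if (err != -ENOMEM)
			return err;	/* 0 on success, or a non-retryable error */
		num /= 2;		/* memory pressure: try a smaller vring */
	}

	return -ENOMEM;
}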