From patchwork Tue Mar  8 12:35:09 2022
X-Patchwork-Submitter: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
X-Patchwork-Id: 12773635
X-Patchwork-Delegate: kuba@kernel.org
From: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
To: virtualization@lists.linux-foundation.org, netdev@vger.kernel.org
Cc: Jeff Dike, Richard Weinberger, Anton Ivanov, "Michael S. Tsirkin",
    Jason Wang, "David S. Miller", Jakub Kicinski, Hans de Goede,
    Mark Gross, Vadim Pasternak, Bjorn Andersson, Mathieu Poirier,
    Cornelia Huck, Halil Pasic, Heiko Carstens, Vasily Gorbik,
    Christian Borntraeger, Alexander Gordeev, Sven Schnelle,
    Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
    John Fastabend, Johannes Berg, Vincent Whitchurch, Xuan Zhuo,
    linux-um@lists.infradead.org, platform-driver-x86@vger.kernel.org,
    linux-remoteproc@vger.kernel.org, linux-s390@vger.kernel.org,
    kvm@vger.kernel.org, bpf@vger.kernel.org
Subject: [PATCH v7 17/26] virtio_pci: queue_reset: support VIRTIO_F_RING_RESET
Date: Tue,  8 Mar 2022 20:35:09 +0800
Message-Id: <20220308123518.33800-18-xuanzhuo@linux.alibaba.com>
X-Mailer: git-send-email 2.31.0
In-Reply-To: <20220308123518.33800-1-xuanzhuo@linux.alibaba.com>
References: <20220308123518.33800-1-xuanzhuo@linux.alibaba.com>
X-Git-Hash: f06b131dbfed
X-Mailing-List: netdev@vger.kernel.org

This patch adds virtio-pci support for VIRTIO_F_RING_RESET (queue reset).

Resetting a queue is divided into these steps:

 1. virtio_reset_vq()             - notify the device to reset the queue
 2. virtqueue_detach_unused_buf() - recycle the buffers already submitted
 3. virtqueue_reset_vring()       - reset the vring (may re-allocate it)
 4. virtio_enable_resetq()        - map the vring to the device and
                                    re-enable the queue

This patch implements virtio_reset_vq() and virtio_enable_resetq() for
the PCI transport.
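For context, the per-queue flow from a driver's point of view would look
roughly like the sketch below. Only virtqueue_detach_unused_buf() is an
existing core API; the exact signatures of virtio_reset_vq(),
virtqueue_reset_vring() and virtio_enable_resetq() are introduced by
earlier patches in this series, so the argument lists, the
demo_reset_one_vq() wrapper and the kfree() recycling are illustrative
assumptions, not part of this patch:

    static int demo_reset_one_vq(struct virtqueue *vq, u32 ring_num)
    {
    	void *buf;
    	int err;

    	/* 1. Tell the device to reset (stop) this queue. */
    	err = virtio_reset_vq(vq);
    	if (err)
    		return err;

    	/* 2. Reclaim buffers the device never consumed. */
    	while ((buf = virtqueue_detach_unused_buf(vq)) != NULL)
    		kfree(buf);	/* driver-specific recycling */

    	/* 3. Rebuild the vring, possibly with a different size. */
    	err = virtqueue_reset_vring(vq, ring_num);
    	if (err)
    		return err;

    	/* 4. Program the new vring into the device and re-enable the queue. */
    	return virtio_enable_resetq(vq);
    }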
Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/virtio/virtio_pci_common.c |  8 +--
 drivers/virtio/virtio_pci_modern.c | 83 ++++++++++++++++++++++++++++++
 2 files changed, 88 insertions(+), 3 deletions(-)

diff --git a/drivers/virtio/virtio_pci_common.c b/drivers/virtio/virtio_pci_common.c
index fdbde1db5ec5..863d3a8a0956 100644
--- a/drivers/virtio/virtio_pci_common.c
+++ b/drivers/virtio/virtio_pci_common.c
@@ -248,9 +248,11 @@ static void vp_del_vq(struct virtqueue *vq)
 	struct virtio_pci_vq_info *info = vp_dev->vqs[vq->index];
 	unsigned long flags;
 
-	spin_lock_irqsave(&vp_dev->lock, flags);
-	list_del(&info->node);
-	spin_unlock_irqrestore(&vp_dev->lock, flags);
+	if (!vq->reset) {
+		spin_lock_irqsave(&vp_dev->lock, flags);
+		list_del(&info->node);
+		spin_unlock_irqrestore(&vp_dev->lock, flags);
+	}
 
 	vp_dev->del_vq(info);
 	kfree(info);
diff --git a/drivers/virtio/virtio_pci_modern.c b/drivers/virtio/virtio_pci_modern.c
index 49a4493732cf..3c67d3607802 100644
--- a/drivers/virtio/virtio_pci_modern.c
+++ b/drivers/virtio/virtio_pci_modern.c
@@ -34,6 +34,9 @@ static void vp_transport_features(struct virtio_device *vdev, u64 features)
 	if ((features & BIT_ULL(VIRTIO_F_SR_IOV)) &&
 			pci_find_ext_capability(pci_dev, PCI_EXT_CAP_ID_SRIOV))
 		__virtio_set_bit(vdev, VIRTIO_F_SR_IOV);
+
+	if (features & BIT_ULL(VIRTIO_F_RING_RESET))
+		__virtio_set_bit(vdev, VIRTIO_F_RING_RESET);
 }
 
 /* virtio config->finalize_features() implementation */
@@ -199,6 +202,82 @@ static int vp_active_vq(struct virtqueue *vq, u16 msix_vec)
 	return 0;
 }
 
+static int vp_modern_reset_vq(struct virtqueue *vq)
+{
+	struct virtio_pci_device *vp_dev = to_vp_device(vq->vdev);
+	struct virtio_pci_modern_device *mdev = &vp_dev->mdev;
+	struct virtio_pci_vq_info *info;
+	unsigned long flags;
+	unsigned int irq;
+
+	if (!virtio_has_feature(vq->vdev, VIRTIO_F_RING_RESET))
+		return -ENOENT;
+
+	vp_modern_set_queue_reset(mdev, vq->index);
+
+	info = vp_dev->vqs[vq->index];
+
+	/* delete vq from irq handler */
+	spin_lock_irqsave(&vp_dev->lock, flags);
+	list_del(&info->node);
+	spin_unlock_irqrestore(&vp_dev->lock, flags);
+
+	INIT_LIST_HEAD(&info->node);
+
+	vq->reset = VIRTIO_VQ_RESET_STEP_DEVICE;
+
+	/* sync irq callback.
+	 */
+	if (vp_dev->intx_enabled) {
+		irq = vp_dev->pci_dev->irq;
+
+	} else {
+		if (info->msix_vector == VIRTIO_MSI_NO_VECTOR)
+			return 0;
+
+		irq = pci_irq_vector(vp_dev->pci_dev, info->msix_vector);
+	}
+
+	synchronize_irq(irq);
+
+	return 0;
+}
+
+static int vp_modern_enable_reset_vq(struct virtqueue *vq)
+{
+	struct virtio_pci_device *vp_dev = to_vp_device(vq->vdev);
+	struct virtio_pci_modern_device *mdev = &vp_dev->mdev;
+	struct virtio_pci_vq_info *info;
+	unsigned long flags, index;
+	int err;
+
+	if (vq->reset != VIRTIO_VQ_RESET_STEP_VRING_ATTACH)
+		return -EBUSY;
+
+	index = vq->index;
+	info = vp_dev->vqs[index];
+
+	/* check queue reset status */
+	if (vp_modern_get_queue_reset(mdev, index) != 1)
+		return -EBUSY;
+
+	err = vp_active_vq(vq, info->msix_vector);
+	if (err)
+		return err;
+
+	if (vq->callback) {
+		spin_lock_irqsave(&vp_dev->lock, flags);
+		list_add(&info->node, &vp_dev->virtqueues);
+		spin_unlock_irqrestore(&vp_dev->lock, flags);
+	} else {
+		INIT_LIST_HEAD(&info->node);
+	}
+
+	vp_modern_set_queue_enable(&vp_dev->mdev, index, true);
+	vq->reset = VIRTIO_VQ_RESET_STEP_NONE;
+
+	return 0;
+}
+
 static u16 vp_config_vector(struct virtio_pci_device *vp_dev, u16 vector)
 {
 	return vp_modern_config_vector(&vp_dev->mdev, vector);
@@ -407,6 +486,8 @@ static const struct virtio_config_ops virtio_pci_config_nodev_ops = {
 	.set_vq_affinity = vp_set_vq_affinity,
 	.get_vq_affinity = vp_get_vq_affinity,
 	.get_shm_region = vp_get_shm_region,
+	.reset_vq = vp_modern_reset_vq,
+	.enable_reset_vq = vp_modern_enable_reset_vq,
 };
 
 static const struct virtio_config_ops virtio_pci_config_ops = {
@@ -425,6 +506,8 @@ static const struct virtio_config_ops virtio_pci_config_ops = {
 	.set_vq_affinity = vp_set_vq_affinity,
 	.get_vq_affinity = vp_get_vq_affinity,
 	.get_shm_region = vp_get_shm_region,
+	.reset_vq = vp_modern_reset_vq,
+	.enable_reset_vq = vp_modern_enable_reset_vq,
 };
 
 /* the PCI probing function */