From patchwork Thu Jan 10 14:12:38 2019
X-Patchwork-Submitter: Julien Thierry
X-Patchwork-Id: 10755873
From: Julien Thierry
To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu
Cc: jean-philippe.brucker@arm.com, will.deacon@arm.com, kraxel@redhat.com, Julien Thierry
Subject: [PATCH kvmtool v2 01/13] ioeventfd: Fix removal of ioeventfd
Date: Thu, 10 Jan 2019 14:12:38 +0000
Message-Id: <1547129570-14351-2-git-send-email-julien.thierry@arm.com>
In-Reply-To: <1547129570-14351-1-git-send-email-julien.thierry@arm.com>
References: <1547129570-14351-1-git-send-email-julien.thierry@arm.com>

From: Jean-Philippe Brucker

Fix three bugs that prevent removal of ioeventfds in KVM. Store the
flags in the right structure, check the datamatch parameter, and pass
the fd to KVM.
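For context, a minimal sketch of the corrected removal path (illustrative
only, not kvmtool's exact code; kvmtool's types, list helpers and the
used_ioevents list head are assumed): the entry has to be matched on both
address and datamatch, and the eventfd must be handed back to KVM so that
the deassign ioctl targets the right ioeventfd. The struct and helper
names below are stand-ins, and the flags shown are illustrative.

	struct ioevent_entry {			/* stand-in for kvmtool's struct ioevent */
		u64			io_addr;
		u64			io_len;
		u64			datamatch;
		int			fd;
		struct list_head	list;
	};

	static int del_event_sketch(struct kvm *kvm, u64 addr, u64 datamatch)
	{
		struct kvm_ioeventfd kvm_ioevent;
		struct ioevent_entry *ioevent;
		bool found = false;

		list_for_each_entry(ioevent, &used_ioevents, list) {
			/* The address alone is ambiguous: several ioeventfds can share it. */
			if (ioevent->io_addr == addr &&
			    ioevent->datamatch == datamatch) {
				found = true;
				break;
			}
		}
		if (!found)
			return -ENOENT;

		kvm_ioevent = (struct kvm_ioeventfd) {
			.fd		= ioevent->fd,	/* needed so KVM finds the right eventfd */
			.addr		= ioevent->io_addr,
			.len		= ioevent->io_len,
			.datamatch	= ioevent->datamatch,
			.flags		= KVM_IOEVENTFD_FLAG_DEASSIGN |
					  KVM_IOEVENTFD_FLAG_DATAMATCH,
		};

		return ioctl(kvm->vm_fd, KVM_IOEVENTFD, &kvm_ioevent);
	}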
Signed-off-by: Jean-Philippe Brucker
Signed-off-by: Julien Thierry
---
 ioeventfd.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/ioeventfd.c b/ioeventfd.c
index 14453b8..3ae8267 100644
--- a/ioeventfd.c
+++ b/ioeventfd.c
@@ -172,7 +172,7 @@ int ioeventfd__add_event(struct ioevent *ioevent, int flags)
 		}
 	}
 
-	ioevent->flags = kvm_ioevent.flags;
+	new_ioevent->flags = kvm_ioevent.flags;
 	list_add_tail(&new_ioevent->list, &used_ioevents);
 
 	return 0;
@@ -192,7 +192,8 @@ int ioeventfd__del_event(u64 addr, u64 datamatch)
 		return -ENOSYS;
 
 	list_for_each_entry(ioevent, &used_ioevents, list) {
-		if (ioevent->io_addr == addr) {
+		if (ioevent->io_addr == addr &&
+		    ioevent->datamatch == datamatch) {
 			found = 1;
 			break;
 		}
@@ -202,6 +203,7 @@ int ioeventfd__del_event(u64 addr, u64 datamatch)
 		return -ENOENT;
 
 	kvm_ioevent = (struct kvm_ioeventfd) {
+		.fd		= ioevent->fd,
 		.addr		= ioevent->io_addr,
 		.len		= ioevent->io_len,
 		.datamatch	= ioevent->datamatch,

From patchwork Thu Jan 10 14:12:39 2019
X-Patchwork-Submitter: Julien Thierry
X-Patchwork-Id: 10755875
From: Julien Thierry
To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu
Cc: jean-philippe.brucker@arm.com, will.deacon@arm.com, kraxel@redhat.com, Julien Thierry
Subject: [PATCH kvmtool v2 02/13] virtio: Implement notify_status
Date: Thu, 10 Jan 2019 14:12:39 +0000
Message-Id: <1547129570-14351-3-git-send-email-julien.thierry@arm.com>
In-Reply-To: <1547129570-14351-1-git-send-email-julien.thierry@arm.com>
References: <1547129570-14351-1-git-send-email-julien.thierry@arm.com>

From: Jean-Philippe Brucker

Modern virtio requires proper status handling and reset.
A "notify_status" callback is already present in the virtio ops, but isn't implemented by any device. Instead they currently use "set_guest_feature" to reset the device and deal with endianess. This isn't sufficient for proper device reset, so add the notify_status callback to all devices that need it. To add useful hints like "start" and "stop", extend the status variable to 32-bits. Signed-off-by: Jean-Philippe Brucker [Julien T: Remove VIRTIO_CONFIG_S_NEEDS_RESET from config mask, as it is virtio v1+ macro and kvmtool only implements v0.9, this macro should not be referenced for now] Signed-off-by: Julien Thierry --- include/kvm/virtio.h | 20 +++++++++++++++++++- virtio/9p.c | 5 +++++ virtio/balloon.c | 5 +++++ virtio/blk.c | 5 +++++ virtio/console.c | 5 +++++ virtio/core.c | 23 +++++++++++++++++++++++ virtio/mmio.c | 3 +-- virtio/net.c | 10 ++++++++++ virtio/pci.c | 3 +-- virtio/scsi.c | 5 +++++ 10 files changed, 79 insertions(+), 5 deletions(-) -- 1.9.1 diff --git a/include/kvm/virtio.h b/include/kvm/virtio.h index 72290fc..f143321 100644 --- a/include/kvm/virtio.h +++ b/include/kvm/virtio.h @@ -7,6 +7,7 @@ #include #include +#include #include #include "kvm/barrier.h" @@ -27,6 +28,20 @@ #define VIRTIO_ENDIAN_HOST VIRTIO_ENDIAN_BE #endif +/* Reserved status bits */ +#define VIRTIO_CONFIG_S_MASK \ + (VIRTIO_CONFIG_S_ACKNOWLEDGE | \ + VIRTIO_CONFIG_S_DRIVER | \ + VIRTIO_CONFIG_S_DRIVER_OK | \ + VIRTIO_CONFIG_S_FEATURES_OK | \ + VIRTIO_CONFIG_S_FAILED) + +/* Kvmtool status bits */ +/* Start the device */ +#define VIRTIO__STATUS_START (1 << 8) +/* Stop the device */ +#define VIRTIO__STATUS_STOP (1 << 9) + struct virt_queue { struct vring vring; u32 pfn; @@ -162,6 +177,7 @@ struct virtio_device { struct virtio_ops *ops; u16 endian; u32 features; + u32 status; }; struct virtio_ops { @@ -178,7 +194,7 @@ struct virtio_ops { void (*notify_vq_eventfd)(struct kvm *kvm, void *dev, u32 vq, u32 efd); int (*signal_vq)(struct kvm *kvm, struct virtio_device *vdev, u32 queueid); int (*signal_config)(struct kvm *kvm, struct virtio_device *vdev); - void (*notify_status)(struct kvm *kvm, void *dev, u8 status); + void (*notify_status)(struct kvm *kvm, void *dev, u32 status); int (*init)(struct kvm *kvm, void *dev, struct virtio_device *vdev, int device_id, int subsys_id, int class); int (*exit)(struct kvm *kvm, struct virtio_device *vdev); @@ -204,5 +220,7 @@ static inline void virtio_init_device_vq(struct virtio_device *vdev, void virtio_set_guest_features(struct kvm *kvm, struct virtio_device *vdev, void *dev, u32 features); +void virtio_notify_status(struct kvm *kvm, struct virtio_device *vdev, + void *dev, u8 status); #endif /* KVM__VIRTIO_H */ diff --git a/virtio/9p.c b/virtio/9p.c index 69fdc4b..4b93b4c 100644 --- a/virtio/9p.c +++ b/virtio/9p.c @@ -1382,6 +1382,10 @@ static void set_guest_features(struct kvm *kvm, void *dev, u32 features) conf->tag_len = virtio_host_to_guest_u16(&p9dev->vdev, conf->tag_len); } +static void notify_status(struct kvm *kvm, void *dev, u32 status) +{ +} + static int init_vq(struct kvm *kvm, void *dev, u32 vq, u32 page_size, u32 align, u32 pfn) { @@ -1441,6 +1445,7 @@ struct virtio_ops p9_dev_virtio_ops = { .get_host_features = get_host_features, .set_guest_features = set_guest_features, .init_vq = init_vq, + .notify_status = notify_status, .notify_vq = notify_vq, .get_pfn_vq = get_pfn_vq, .get_size_vq = get_size_vq, diff --git a/virtio/balloon.c b/virtio/balloon.c index 9564aa3..871d6e0 100644 --- a/virtio/balloon.c +++ b/virtio/balloon.c @@ -193,6 +193,10 @@ static void 
set_guest_features(struct kvm *kvm, void *dev, u32 features) bdev->features = features; } +static void notify_status(struct kvm *kvm, void *dev, u32 status) +{ +} + static int init_vq(struct kvm *kvm, void *dev, u32 vq, u32 page_size, u32 align, u32 pfn) { @@ -244,6 +248,7 @@ struct virtio_ops bln_dev_virtio_ops = { .get_host_features = get_host_features, .set_guest_features = set_guest_features, .init_vq = init_vq, + .notify_status = notify_status, .notify_vq = notify_vq, .get_pfn_vq = get_pfn_vq, .get_size_vq = get_size_vq, diff --git a/virtio/blk.c b/virtio/blk.c index c485e4f..db9f4cc 100644 --- a/virtio/blk.c +++ b/virtio/blk.c @@ -174,6 +174,10 @@ static void set_guest_features(struct kvm *kvm, void *dev, u32 features) conf->opt_io_size = virtio_host_to_guest_u32(&bdev->vdev, conf->opt_io_size); } +static void notify_status(struct kvm *kvm, void *dev, u32 status) +{ +} + static int init_vq(struct kvm *kvm, void *dev, u32 vq, u32 page_size, u32 align, u32 pfn) { @@ -249,6 +253,7 @@ static struct virtio_ops blk_dev_virtio_ops = { .get_host_features = get_host_features, .set_guest_features = set_guest_features, .init_vq = init_vq, + .notify_status = notify_status, .notify_vq = notify_vq, .get_pfn_vq = get_pfn_vq, .get_size_vq = get_size_vq, diff --git a/virtio/console.c b/virtio/console.c index dc0a9c4..b9df5c9 100644 --- a/virtio/console.c +++ b/virtio/console.c @@ -140,6 +140,10 @@ static void set_guest_features(struct kvm *kvm, void *dev, u32 features) conf->max_nr_ports = virtio_host_to_guest_u32(&cdev->vdev, conf->max_nr_ports); } +static void notify_status(struct kvm *kvm, void *dev, u32 status) +{ +} + static int init_vq(struct kvm *kvm, void *dev, u32 vq, u32 page_size, u32 align, u32 pfn) { @@ -203,6 +207,7 @@ static struct virtio_ops con_dev_virtio_ops = { .get_host_features = get_host_features, .set_guest_features = set_guest_features, .init_vq = init_vq, + .notify_status = notify_status, .notify_vq = notify_vq, .get_pfn_vq = get_pfn_vq, .get_size_vq = get_size_vq, diff --git a/virtio/core.c b/virtio/core.c index 0e2646c..9114f27 100644 --- a/virtio/core.c +++ b/virtio/core.c @@ -214,6 +214,29 @@ void virtio_set_guest_features(struct kvm *kvm, struct virtio_device *vdev, vdev->ops->set_guest_features(kvm, dev, features); } +void virtio_notify_status(struct kvm *kvm, struct virtio_device *vdev, + void *dev, u8 status) +{ + u32 ext_status = status; + + vdev->status &= ~VIRTIO_CONFIG_S_MASK; + vdev->status |= status; + + /* Add a few hints to help devices */ + if ((status & VIRTIO_CONFIG_S_DRIVER_OK) && + !(vdev->status & VIRTIO__STATUS_START)) { + vdev->status |= VIRTIO__STATUS_START; + ext_status |= VIRTIO__STATUS_START; + + } else if (!status && (vdev->status & VIRTIO__STATUS_START)) { + vdev->status &= ~VIRTIO__STATUS_START; + ext_status |= VIRTIO__STATUS_STOP; + } + + if (vdev->ops->notify_status) + vdev->ops->notify_status(kvm, dev, ext_status); +} + int virtio_init(struct kvm *kvm, void *dev, struct virtio_device *vdev, struct virtio_ops *ops, enum virtio_trans trans, int device_id, int subsys_id, int class) diff --git a/virtio/mmio.c b/virtio/mmio.c index 8c0f1cc..7a78fef 100644 --- a/virtio/mmio.c +++ b/virtio/mmio.c @@ -162,8 +162,7 @@ static void virtio_mmio_config_out(struct kvm_cpu *vcpu, vmmio->hdr.status = ioport__read32(data); if (!vmmio->hdr.status) /* Sample endianness on reset */ vdev->endian = kvm_cpu__get_endianness(vcpu); - if (vdev->ops->notify_status) - vdev->ops->notify_status(kvm, vmmio->dev, vmmio->hdr.status); + virtio_notify_status(kvm, vdev, 
vmmio->dev, vmmio->hdr.status); break; case VIRTIO_MMIO_GUEST_FEATURES: if (vmmio->hdr.guest_features_sel == 0) { diff --git a/virtio/net.c b/virtio/net.c index f95258c..619b545 100644 --- a/virtio/net.c +++ b/virtio/net.c @@ -518,7 +518,10 @@ static void set_guest_features(struct kvm *kvm, void *dev, u32 features) conf->status = virtio_host_to_guest_u16(&ndev->vdev, conf->status); conf->max_virtqueue_pairs = virtio_host_to_guest_u16(&ndev->vdev, conf->max_virtqueue_pairs); +} +static void virtio_net_start(struct net_dev *ndev) +{ if (ndev->mode == NET_MODE_TAP) { if (!virtio_net__tap_init(ndev)) die_perror("TAP device initialized failed because"); @@ -534,6 +537,12 @@ static void set_guest_features(struct kvm *kvm, void *dev, u32 features) } } +static void notify_status(struct kvm *kvm, void *dev, u32 status) +{ + if (status & VIRTIO__STATUS_START) + virtio_net_start(dev); +} + static bool is_ctrl_vq(struct net_dev *ndev, u32 vq) { return vq == (u32)(ndev->queue_pairs * 2); @@ -683,6 +692,7 @@ static struct virtio_ops net_dev_virtio_ops = { .notify_vq = notify_vq, .notify_vq_gsi = notify_vq_gsi, .notify_vq_eventfd = notify_vq_eventfd, + .notify_status = notify_status, }; static void virtio_net__vhost_init(struct kvm *kvm, struct net_dev *ndev) diff --git a/virtio/pci.c b/virtio/pci.c index 5a5fc6e..fdeee69 100644 --- a/virtio/pci.c +++ b/virtio/pci.c @@ -285,8 +285,7 @@ static bool virtio_pci__io_out(struct ioport *ioport, struct kvm_cpu *vcpu, u16 vpci->status = ioport__read8(data); if (!vpci->status) /* Sample endianness on reset */ vdev->endian = kvm_cpu__get_endianness(vcpu); - if (vdev->ops->notify_status) - vdev->ops->notify_status(kvm, vpci->dev, vpci->status); + virtio_notify_status(kvm, vdev, vpci->dev, vpci->status); break; default: ret = virtio_pci__specific_io_out(kvm, vdev, port, data, size, offset); diff --git a/virtio/scsi.c b/virtio/scsi.c index a429ac8..788bfa2 100644 --- a/virtio/scsi.c +++ b/virtio/scsi.c @@ -50,6 +50,10 @@ static void set_guest_features(struct kvm *kvm, void *dev, u32 features) sdev->features = features; } +static void notify_status(struct kvm *kvm, void *dev, u32 status) +{ +} + static int init_vq(struct kvm *kvm, void *dev, u32 vq, u32 page_size, u32 align, u32 pfn) { @@ -171,6 +175,7 @@ static struct virtio_ops scsi_dev_virtio_ops = { .get_pfn_vq = get_pfn_vq, .get_size_vq = get_size_vq, .set_size_vq = set_size_vq, + .notify_status = notify_status, .notify_vq = notify_vq, .notify_vq_gsi = notify_vq_gsi, .notify_vq_eventfd = notify_vq_eventfd, From patchwork Thu Jan 10 14:12:40 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Julien Thierry X-Patchwork-Id: 10755877 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 1549713B4 for ; Thu, 10 Jan 2019 14:13:04 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 054AB29933 for ; Thu, 10 Jan 2019 14:13:04 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 037132994B; Thu, 10 Jan 2019 14:13:03 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by 
mail.wl.linuxfoundation.org (Postfix) with ESMTP id 75E9829933 for ; Thu, 10 Jan 2019 14:13:03 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727794AbfAJONC (ORCPT ); Thu, 10 Jan 2019 09:13:02 -0500 Received: from foss.arm.com ([217.140.101.70]:36406 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727621AbfAJONB (ORCPT ); Thu, 10 Jan 2019 09:13:01 -0500 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 256AC15BE; Thu, 10 Jan 2019 06:13:01 -0800 (PST) Received: from e112298-lin.cambridge.arm.com (usa-sjc-imap-foss1.foss.arm.com [10.72.51.249]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 15F943F5AF; Thu, 10 Jan 2019 06:12:59 -0800 (PST) From: Julien Thierry To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu Cc: jean-philippe.brucker@arm.com, will.deacon@arm.com, kraxel@redhat.com, Julien Thierry Subject: [PATCH kvmtool v2 03/13] virtio: Add get_vq_count() callback Date: Thu, 10 Jan 2019 14:12:40 +0000 Message-Id: <1547129570-14351-4-git-send-email-julien.thierry@arm.com> X-Mailer: git-send-email 1.9.1 In-Reply-To: <1547129570-14351-1-git-send-email-julien.thierry@arm.com> References: <1547129570-14351-1-git-send-email-julien.thierry@arm.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Jean-Philippe Brucker Modern virtio requires devices to report how many queues they support. Add an operation to query all devices about their capacities. Signed-off-by: Jean-Philippe Brucker Signed-off-by: Julien Thierry --- include/kvm/virtio.h | 1 + virtio/9p.c | 6 ++++++ virtio/balloon.c | 6 ++++++ virtio/blk.c | 6 ++++++ virtio/console.c | 6 ++++++ virtio/net.c | 8 ++++++++ virtio/rng.c | 6 ++++++ virtio/scsi.c | 6 ++++++ 8 files changed, 45 insertions(+) diff --git a/include/kvm/virtio.h b/include/kvm/virtio.h index f143321..cc49c9d 100644 --- a/include/kvm/virtio.h +++ b/include/kvm/virtio.h @@ -184,6 +184,7 @@ struct virtio_ops { u8 *(*get_config)(struct kvm *kvm, void *dev); u32 (*get_host_features)(struct kvm *kvm, void *dev); void (*set_guest_features)(struct kvm *kvm, void *dev, u32 features); + int (*get_vq_count)(struct kvm *kvm, void *dev); int (*init_vq)(struct kvm *kvm, void *dev, u32 vq, u32 page_size, u32 align, u32 pfn); int (*notify_vq)(struct kvm *kvm, void *dev, u32 vq); diff --git a/virtio/9p.c b/virtio/9p.c index 4b93b4c..94f7a8f 100644 --- a/virtio/9p.c +++ b/virtio/9p.c @@ -1440,6 +1440,11 @@ static int set_size_vq(struct kvm *kvm, void *dev, u32 vq, int size) return size; } +static int get_vq_count(struct kvm *kvm, void *dev) +{ + return NUM_VIRT_QUEUES; +} + struct virtio_ops p9_dev_virtio_ops = { .get_config = get_config, .get_host_features = get_host_features, @@ -1450,6 +1455,7 @@ struct virtio_ops p9_dev_virtio_ops = { .get_pfn_vq = get_pfn_vq, .get_size_vq = get_size_vq, .set_size_vq = set_size_vq, + .get_vq_count = get_vq_count, }; int virtio_9p_rootdir_parser(const struct option *opt, const char *arg, int unset) diff --git a/virtio/balloon.c b/virtio/balloon.c index 871d6e0..2c2e24a 100644 --- a/virtio/balloon.c +++ b/virtio/balloon.c @@ -243,6 +243,11 @@ static int set_size_vq(struct kvm *kvm, void *dev, u32 vq, int size) return size; } +static int get_vq_count(struct kvm *kvm, void *dev) +{ + return NUM_VIRT_QUEUES; +} + struct virtio_ops bln_dev_virtio_ops = { .get_config = get_config, .get_host_features = 
get_host_features, @@ -253,6 +258,7 @@ struct virtio_ops bln_dev_virtio_ops = { .get_pfn_vq = get_pfn_vq, .get_size_vq = get_size_vq, .set_size_vq = set_size_vq, + .get_vq_count = get_vq_count, }; int virtio_bln__init(struct kvm *kvm) diff --git a/virtio/blk.c b/virtio/blk.c index db9f4cc..6502b8c 100644 --- a/virtio/blk.c +++ b/virtio/blk.c @@ -248,10 +248,16 @@ static int set_size_vq(struct kvm *kvm, void *dev, u32 vq, int size) return size; } +static int get_vq_count(struct kvm *kvm, void *dev) +{ + return NUM_VIRT_QUEUES; +} + static struct virtio_ops blk_dev_virtio_ops = { .get_config = get_config, .get_host_features = get_host_features, .set_guest_features = set_guest_features, + .get_vq_count = get_vq_count, .init_vq = init_vq, .notify_status = notify_status, .notify_vq = notify_vq, diff --git a/virtio/console.c b/virtio/console.c index b9df5c9..c96bc11 100644 --- a/virtio/console.c +++ b/virtio/console.c @@ -202,10 +202,16 @@ static int set_size_vq(struct kvm *kvm, void *dev, u32 vq, int size) return size; } +static int get_vq_count(struct kvm *kvm, void *dev) +{ + return VIRTIO_CONSOLE_NUM_QUEUES; +} + static struct virtio_ops con_dev_virtio_ops = { .get_config = get_config, .get_host_features = get_host_features, .set_guest_features = set_guest_features, + .get_vq_count = get_vq_count, .init_vq = init_vq, .notify_status = notify_status, .notify_vq = notify_vq, diff --git a/virtio/net.c b/virtio/net.c index 619b545..3b08aea 100644 --- a/virtio/net.c +++ b/virtio/net.c @@ -681,10 +681,18 @@ static int set_size_vq(struct kvm *kvm, void *dev, u32 vq, int size) return size; } +static int get_vq_count(struct kvm *kvm, void *dev) +{ + struct net_dev *ndev = dev; + + return ndev->queue_pairs * 2 + 1; +} + static struct virtio_ops net_dev_virtio_ops = { .get_config = get_config, .get_host_features = get_host_features, .set_guest_features = set_guest_features, + .get_vq_count = get_vq_count, .init_vq = init_vq, .get_pfn_vq = get_pfn_vq, .get_size_vq = get_size_vq, diff --git a/virtio/rng.c b/virtio/rng.c index 9b9e128..fc0e320 100644 --- a/virtio/rng.c +++ b/virtio/rng.c @@ -141,6 +141,11 @@ static int set_size_vq(struct kvm *kvm, void *dev, u32 vq, int size) return size; } +static int get_vq_count(struct kvm *kvm, void *dev) +{ + return NUM_VIRT_QUEUES; +} + static struct virtio_ops rng_dev_virtio_ops = { .get_config = get_config, .get_host_features = get_host_features, @@ -150,6 +155,7 @@ static struct virtio_ops rng_dev_virtio_ops = { .get_pfn_vq = get_pfn_vq, .get_size_vq = get_size_vq, .set_size_vq = set_size_vq, + .get_vq_count = get_vq_count, }; int virtio_rng__init(struct kvm *kvm) diff --git a/virtio/scsi.c b/virtio/scsi.c index 788bfa2..e21263c 100644 --- a/virtio/scsi.c +++ b/virtio/scsi.c @@ -167,6 +167,11 @@ static int set_size_vq(struct kvm *kvm, void *dev, u32 vq, int size) return size; } +static int get_vq_count(struct kvm *kvm, void *dev) +{ + return NUM_VIRT_QUEUES; +} + static struct virtio_ops scsi_dev_virtio_ops = { .get_config = get_config, .get_host_features = get_host_features, @@ -179,6 +184,7 @@ static struct virtio_ops scsi_dev_virtio_ops = { .notify_vq = notify_vq, .notify_vq_gsi = notify_vq_gsi, .notify_vq_eventfd = notify_vq_eventfd, + .get_vq_count = get_vq_count, }; static void virtio_scsi_vhost_init(struct kvm *kvm, struct scsi_dev *sdev) From patchwork Thu Jan 10 14:12:41 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Julien Thierry X-Patchwork-Id: 10755897 Return-Path: Received: from 
From: Julien Thierry
To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu
Cc: jean-philippe.brucker@arm.com, will.deacon@arm.com, kraxel@redhat.com, Julien Thierry
Subject: [PATCH kvmtool v2 04/13] virtio: Add get_vq() callback
Date: Thu, 10 Jan 2019 14:12:41 +0000
Message-Id: <1547129570-14351-5-git-send-email-julien.thierry@arm.com>
In-Reply-To: <1547129570-14351-1-git-send-email-julien.thierry@arm.com>
References: <1547129570-14351-1-git-send-email-julien.thierry@arm.com>

From: Jean-Philippe Brucker

To ease future changes to the core, replace get_pfn_vq() with get_vq().
This way, adding new generic operations on virtqueues won't require
modifying every virtio device.
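Returning the queue object rather than just its PFN lets the core grow
new generic per-queue operations without touching every device. As a
hedged illustration, a helper like the one below becomes possible; it is
not part of this patch and only uses the virt_queue fields visible in
this series.

	static bool vq_is_mapped(struct kvm *kvm, struct virtio_device *vdev,
				 void *dev, u32 num)
	{
		struct virt_queue *vq = vdev->ops->get_vq(kvm, dev, num);

		/* A legacy queue is live once the guest has written a non-zero PFN. */
		return vq->pfn != 0;
	}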
Signed-off-by: Jean-Philippe Brucker Signed-off-by: Julien Thierry --- include/kvm/virtio.h | 2 +- virtio/9p.c | 6 +++--- virtio/balloon.c | 6 +++--- virtio/blk.c | 6 +++--- virtio/console.c | 6 +++--- virtio/mmio.c | 7 ++++--- virtio/net.c | 6 +++--- virtio/pci.c | 5 +++-- virtio/rng.c | 6 +++--- virtio/scsi.c | 6 +++--- 10 files changed, 29 insertions(+), 27 deletions(-) diff --git a/include/kvm/virtio.h b/include/kvm/virtio.h index cc49c9d..e791298 100644 --- a/include/kvm/virtio.h +++ b/include/kvm/virtio.h @@ -188,7 +188,7 @@ struct virtio_ops { int (*init_vq)(struct kvm *kvm, void *dev, u32 vq, u32 page_size, u32 align, u32 pfn); int (*notify_vq)(struct kvm *kvm, void *dev, u32 vq); - int (*get_pfn_vq)(struct kvm *kvm, void *dev, u32 vq); + struct virt_queue *(*get_vq)(struct kvm *kvm, void *dev, u32 vq); int (*get_size_vq)(struct kvm *kvm, void *dev, u32 vq); int (*set_size_vq)(struct kvm *kvm, void *dev, u32 vq, int size); void (*notify_vq_gsi)(struct kvm *kvm, void *dev, u32 vq, u32 gsi); diff --git a/virtio/9p.c b/virtio/9p.c index 94f7a8f..d9f45cf 100644 --- a/virtio/9p.c +++ b/virtio/9p.c @@ -1422,11 +1422,11 @@ static int notify_vq(struct kvm *kvm, void *dev, u32 vq) return 0; } -static int get_pfn_vq(struct kvm *kvm, void *dev, u32 vq) +static struct virt_queue *get_vq(struct kvm *kvm, void *dev, u32 vq) { struct p9_dev *p9dev = dev; - return p9dev->vqs[vq].pfn; + return &p9dev->vqs[vq]; } static int get_size_vq(struct kvm *kvm, void *dev, u32 vq) @@ -1452,7 +1452,7 @@ struct virtio_ops p9_dev_virtio_ops = { .init_vq = init_vq, .notify_status = notify_status, .notify_vq = notify_vq, - .get_pfn_vq = get_pfn_vq, + .get_vq = get_vq, .get_size_vq = get_size_vq, .set_size_vq = set_size_vq, .get_vq_count = get_vq_count, diff --git a/virtio/balloon.c b/virtio/balloon.c index 2c2e24a..15a9a46 100644 --- a/virtio/balloon.c +++ b/virtio/balloon.c @@ -225,11 +225,11 @@ static int notify_vq(struct kvm *kvm, void *dev, u32 vq) return 0; } -static int get_pfn_vq(struct kvm *kvm, void *dev, u32 vq) +static struct virt_queue *get_vq(struct kvm *kvm, void *dev, u32 vq) { struct bln_dev *bdev = dev; - return bdev->vqs[vq].pfn; + return &bdev->vqs[vq]; } static int get_size_vq(struct kvm *kvm, void *dev, u32 vq) @@ -255,7 +255,7 @@ struct virtio_ops bln_dev_virtio_ops = { .init_vq = init_vq, .notify_status = notify_status, .notify_vq = notify_vq, - .get_pfn_vq = get_pfn_vq, + .get_vq = get_vq, .get_size_vq = get_size_vq, .set_size_vq = set_size_vq, .get_vq_count = get_vq_count, diff --git a/virtio/blk.c b/virtio/blk.c index 6502b8c..6a6b3b7 100644 --- a/virtio/blk.c +++ b/virtio/blk.c @@ -229,11 +229,11 @@ static int notify_vq(struct kvm *kvm, void *dev, u32 vq) return 0; } -static int get_pfn_vq(struct kvm *kvm, void *dev, u32 vq) +static struct virt_queue *get_vq(struct kvm *kvm, void *dev, u32 vq) { struct blk_dev *bdev = dev; - return bdev->vqs[vq].pfn; + return &bdev->vqs[vq]; } static int get_size_vq(struct kvm *kvm, void *dev, u32 vq) @@ -261,7 +261,7 @@ static struct virtio_ops blk_dev_virtio_ops = { .init_vq = init_vq, .notify_status = notify_status, .notify_vq = notify_vq, - .get_pfn_vq = get_pfn_vq, + .get_vq = get_vq, .get_size_vq = get_size_vq, .set_size_vq = set_size_vq, }; diff --git a/virtio/console.c b/virtio/console.c index c96bc11..d2b312c 100644 --- a/virtio/console.c +++ b/virtio/console.c @@ -184,11 +184,11 @@ static int notify_vq(struct kvm *kvm, void *dev, u32 vq) return 0; } -static int get_pfn_vq(struct kvm *kvm, void *dev, u32 vq) +static struct virt_queue 
*get_vq(struct kvm *kvm, void *dev, u32 vq) { struct con_dev *cdev = dev; - return cdev->vqs[vq].pfn; + return &cdev->vqs[vq]; } static int get_size_vq(struct kvm *kvm, void *dev, u32 vq) @@ -215,7 +215,7 @@ static struct virtio_ops con_dev_virtio_ops = { .init_vq = init_vq, .notify_status = notify_status, .notify_vq = notify_vq, - .get_pfn_vq = get_pfn_vq, + .get_vq = get_vq, .get_size_vq = get_size_vq, .set_size_vq = set_size_vq, }; diff --git a/virtio/mmio.c b/virtio/mmio.c index 7a78fef..70f767e 100644 --- a/virtio/mmio.c +++ b/virtio/mmio.c @@ -111,6 +111,7 @@ static void virtio_mmio_config_in(struct kvm_cpu *vcpu, struct virtio_device *vdev) { struct virtio_mmio *vmmio = vdev->virtio; + struct virt_queue *vq; u32 val = 0; switch (addr) { @@ -129,9 +130,9 @@ static void virtio_mmio_config_in(struct kvm_cpu *vcpu, ioport__write32(data, val); break; case VIRTIO_MMIO_QUEUE_PFN: - val = vdev->ops->get_pfn_vq(vmmio->kvm, vmmio->dev, - vmmio->hdr.queue_sel); - ioport__write32(data, val); + vq = vdev->ops->get_vq(vmmio->kvm, vmmio->dev, + vmmio->hdr.queue_sel); + ioport__write32(data, vq->pfn); break; case VIRTIO_MMIO_QUEUE_NUM_MAX: val = vdev->ops->get_size_vq(vmmio->kvm, vmmio->dev, diff --git a/virtio/net.c b/virtio/net.c index 3b08aea..d65d04e 100644 --- a/virtio/net.c +++ b/virtio/net.c @@ -662,11 +662,11 @@ static int notify_vq(struct kvm *kvm, void *dev, u32 vq) return 0; } -static int get_pfn_vq(struct kvm *kvm, void *dev, u32 vq) +static struct virt_queue *get_vq(struct kvm *kvm, void *dev, u32 vq) { struct net_dev *ndev = dev; - return ndev->vqs[vq].pfn; + return &ndev->vqs[vq]; } static int get_size_vq(struct kvm *kvm, void *dev, u32 vq) @@ -694,7 +694,7 @@ static struct virtio_ops net_dev_virtio_ops = { .set_guest_features = set_guest_features, .get_vq_count = get_vq_count, .init_vq = init_vq, - .get_pfn_vq = get_pfn_vq, + .get_vq = get_vq, .get_size_vq = get_size_vq, .set_size_vq = set_size_vq, .notify_vq = notify_vq, diff --git a/virtio/pci.c b/virtio/pci.c index fdeee69..8add770 100644 --- a/virtio/pci.c +++ b/virtio/pci.c @@ -113,6 +113,7 @@ static bool virtio_pci__io_in(struct ioport *ioport, struct kvm_cpu *vcpu, u16 p bool ret = true; struct virtio_device *vdev; struct virtio_pci *vpci; + struct virt_queue *vq; struct kvm *kvm; u32 val; @@ -127,8 +128,8 @@ static bool virtio_pci__io_in(struct ioport *ioport, struct kvm_cpu *vcpu, u16 p ioport__write32(data, val); break; case VIRTIO_PCI_QUEUE_PFN: - val = vdev->ops->get_pfn_vq(kvm, vpci->dev, vpci->queue_selector); - ioport__write32(data, val); + vq = vdev->ops->get_vq(kvm, vpci->dev, vpci->queue_selector); + ioport__write32(data, vq->pfn); break; case VIRTIO_PCI_QUEUE_NUM: val = vdev->ops->get_size_vq(kvm, vpci->dev, vpci->queue_selector); diff --git a/virtio/rng.c b/virtio/rng.c index fc0e320..9dd757b 100644 --- a/virtio/rng.c +++ b/virtio/rng.c @@ -123,11 +123,11 @@ static int notify_vq(struct kvm *kvm, void *dev, u32 vq) return 0; } -static int get_pfn_vq(struct kvm *kvm, void *dev, u32 vq) +static struct virt_queue *get_vq(struct kvm *kvm, void *dev, u32 vq) { struct rng_dev *rdev = dev; - return rdev->vqs[vq].pfn; + return &rdev->vqs[vq]; } static int get_size_vq(struct kvm *kvm, void *dev, u32 vq) @@ -152,7 +152,7 @@ static struct virtio_ops rng_dev_virtio_ops = { .set_guest_features = set_guest_features, .init_vq = init_vq, .notify_vq = notify_vq, - .get_pfn_vq = get_pfn_vq, + .get_vq = get_vq, .get_size_vq = get_size_vq, .set_size_vq = set_size_vq, .get_vq_count = get_vq_count, diff --git a/virtio/scsi.c 
b/virtio/scsi.c index e21263c..c8400b6 100644 --- a/virtio/scsi.c +++ b/virtio/scsi.c @@ -150,11 +150,11 @@ static int notify_vq(struct kvm *kvm, void *dev, u32 vq) return 0; } -static int get_pfn_vq(struct kvm *kvm, void *dev, u32 vq) +static struct virt_queue *get_vq(struct kvm *kvm, void *dev, u32 vq) { struct scsi_dev *sdev = dev; - return sdev->vqs[vq].pfn; + return &sdev->vqs[vq]; } static int get_size_vq(struct kvm *kvm, void *dev, u32 vq) @@ -177,7 +177,7 @@ static struct virtio_ops scsi_dev_virtio_ops = { .get_host_features = get_host_features, .set_guest_features = set_guest_features, .init_vq = init_vq, - .get_pfn_vq = get_pfn_vq, + .get_vq = get_vq, .get_size_vq = get_size_vq, .set_size_vq = set_size_vq, .notify_status = notify_status, From patchwork Thu Jan 10 14:12:42 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Julien Thierry X-Patchwork-Id: 10755895 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 3F67D14DE for ; Thu, 10 Jan 2019 14:13:23 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 2F4AE29933 for ; Thu, 10 Jan 2019 14:13:23 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 23D1C29954; Thu, 10 Jan 2019 14:13:23 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 35B3829933 for ; Thu, 10 Jan 2019 14:13:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728107AbfAJONV (ORCPT ); Thu, 10 Jan 2019 09:13:21 -0500 Received: from foss.arm.com ([217.140.101.70]:36426 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727819AbfAJONE (ORCPT ); Thu, 10 Jan 2019 09:13:04 -0500 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id BF7B515BE; Thu, 10 Jan 2019 06:13:03 -0800 (PST) Received: from e112298-lin.cambridge.arm.com (usa-sjc-imap-foss1.foss.arm.com [10.72.51.249]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id B06603F5AF; Thu, 10 Jan 2019 06:13:02 -0800 (PST) From: Julien Thierry To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu Cc: jean-philippe.brucker@arm.com, will.deacon@arm.com, kraxel@redhat.com, Julien Thierry Subject: [PATCH kvmtool v2 05/13] virtio: Add exit_vq() callback Date: Thu, 10 Jan 2019 14:12:42 +0000 Message-Id: <1547129570-14351-6-git-send-email-julien.thierry@arm.com> X-Mailer: git-send-email 1.9.1 In-Reply-To: <1547129570-14351-1-git-send-email-julien.thierry@arm.com> References: <1547129570-14351-1-git-send-email-julien.thierry@arm.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Jean-Philippe Brucker Virtio allows to reset individual virtqueues. For legacy devices, it's done by writing an address of 0 into the PFN register. Modern devices have an "enable" register. Add an exit_vq() callback to all devices. 
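For orientation, here is a condensed sketch of the legacy PFN-register
handling that the transport hunks below introduce. Names are simplified
and the function is illustrative: the real patch routes the teardown
through a transport wrapper (virtio_mmio_exit_vq/virtio_pci_exit_vq)
that also removes the queue's ioeventfd.

	static void queue_pfn_write(struct kvm *kvm, struct virtio_device *vdev,
				    void *dev, u32 queue_sel, u32 pfn,
				    u32 page_size, u32 align)
	{
		if (pfn)
			/* Non-zero PFN: the guest is setting the queue up. */
			vdev->ops->init_vq(kvm, dev, queue_sel, page_size, align, pfn);
		else if (vdev->ops->exit_vq)
			/* PFN of 0: legacy queue reset, tear the queue down. */
			vdev->ops->exit_vq(kvm, dev, queue_sel);
	}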
A lot more work is required by each device to clean up their virtqueue state, and by the core to reset things like MSI routes and ioeventfds. Signed-off-by: Jean-Philippe Brucker Signed-off-by: Julien Thierry --- include/kvm/virtio.h | 5 +++++ virtio/core.c | 10 ++++++++++ virtio/mmio.c | 26 ++++++++++++++++++++------ virtio/pci.c | 23 +++++++++++++++++++---- 4 files changed, 54 insertions(+), 10 deletions(-) diff --git a/include/kvm/virtio.h b/include/kvm/virtio.h index e791298..f35c74d 100644 --- a/include/kvm/virtio.h +++ b/include/kvm/virtio.h @@ -51,6 +51,7 @@ struct virt_queue { u16 last_used_signalled; u16 endian; bool use_event_idx; + bool enabled; }; /* @@ -187,6 +188,7 @@ struct virtio_ops { int (*get_vq_count)(struct kvm *kvm, void *dev); int (*init_vq)(struct kvm *kvm, void *dev, u32 vq, u32 page_size, u32 align, u32 pfn); + void (*exit_vq)(struct kvm *kvm, void *dev, u32 vq); int (*notify_vq)(struct kvm *kvm, void *dev, u32 vq); struct virt_queue *(*get_vq)(struct kvm *kvm, void *dev, u32 vq); int (*get_size_vq)(struct kvm *kvm, void *dev, u32 vq); @@ -217,8 +219,11 @@ static inline void virtio_init_device_vq(struct virtio_device *vdev, { vq->endian = vdev->endian; vq->use_event_idx = (vdev->features & VIRTIO_RING_F_EVENT_IDX); + vq->enabled = true; } +void virtio_exit_vq(struct kvm *kvm, struct virtio_device *vdev, void *dev, + int num); void virtio_set_guest_features(struct kvm *kvm, struct virtio_device *vdev, void *dev, u32 features); void virtio_notify_status(struct kvm *kvm, struct virtio_device *vdev, diff --git a/virtio/core.c b/virtio/core.c index 9114f27..67d4114 100644 --- a/virtio/core.c +++ b/virtio/core.c @@ -166,6 +166,16 @@ u16 virt_queue__get_inout_iov(struct kvm *kvm, struct virt_queue *queue, return head; } +void virtio_exit_vq(struct kvm *kvm, struct virtio_device *vdev, + void *dev, int num) +{ + struct virt_queue *vq = vdev->ops->get_vq(kvm, dev, num); + + if (vq->enabled && vdev->ops->exit_vq) + vdev->ops->exit_vq(kvm, dev, num); + memset(vq, 0, sizeof(*vq)); +} + int virtio__get_dev_specific_field(int offset, bool msix, u32 *config_off) { if (msix) { diff --git a/virtio/mmio.c b/virtio/mmio.c index 70f767e..4b8c20e 100644 --- a/virtio/mmio.c +++ b/virtio/mmio.c @@ -79,6 +79,15 @@ int virtio_mmio_signal_vq(struct kvm *kvm, struct virtio_device *vdev, u32 vq) return 0; } +static void virtio_mmio_exit_vq(struct kvm *kvm, struct virtio_device *vdev, + int vq) +{ + struct virtio_mmio *vmmio = vdev->virtio; + + ioeventfd__del_event(vmmio->addr + VIRTIO_MMIO_QUEUE_NOTIFY, vq); + virtio_exit_vq(kvm, vdev, vmmio->dev, vq); +} + int virtio_mmio_signal_config(struct kvm *kvm, struct virtio_device *vdev) { struct virtio_mmio *vmmio = vdev->virtio; @@ -188,12 +197,17 @@ static void virtio_mmio_config_out(struct kvm_cpu *vcpu, break; case VIRTIO_MMIO_QUEUE_PFN: val = ioport__read32(data); - virtio_mmio_init_ioeventfd(vmmio->kvm, vdev, vmmio->hdr.queue_sel); - vdev->ops->init_vq(vmmio->kvm, vmmio->dev, - vmmio->hdr.queue_sel, - vmmio->hdr.guest_page_size, - vmmio->hdr.queue_align, - val); + if (val) { + virtio_mmio_init_ioeventfd(vmmio->kvm, vdev, + vmmio->hdr.queue_sel); + vdev->ops->init_vq(vmmio->kvm, vmmio->dev, + vmmio->hdr.queue_sel, + vmmio->hdr.guest_page_size, + vmmio->hdr.queue_align, + val); + } else { + virtio_mmio_exit_vq(kvm, vdev, vmmio->hdr.queue_sel); + } break; case VIRTIO_MMIO_QUEUE_NOTIFY: val = ioport__read32(data); diff --git a/virtio/pci.c b/virtio/pci.c index 8add770..2da2d3f 100644 --- a/virtio/pci.c +++ b/virtio/pci.c @@ -72,6 +72,16 @@ 
free_ioport_evt: return r; } +static void virtio_pci_exit_vq(struct kvm *kvm, struct virtio_device *vdev, + int vq) +{ + struct virtio_pci *vpci = vdev->virtio; + + ioeventfd__del_event(vpci->port_addr + VIRTIO_PCI_QUEUE_NOTIFY, vq); + ioeventfd__del_event(vpci->mmio_addr + VIRTIO_PCI_QUEUE_NOTIFY, vq); + virtio_exit_vq(kvm, vdev, vpci->dev, vq); +} + static inline bool virtio_pci__msix_enabled(struct virtio_pci *vpci) { return vpci->pci_hdr.msix.ctrl & cpu_to_le16(PCI_MSIX_FLAGS_ENABLE); @@ -270,10 +280,15 @@ static bool virtio_pci__io_out(struct ioport *ioport, struct kvm_cpu *vcpu, u16 break; case VIRTIO_PCI_QUEUE_PFN: val = ioport__read32(data); - virtio_pci__init_ioeventfd(kvm, vdev, vpci->queue_selector); - vdev->ops->init_vq(kvm, vpci->dev, vpci->queue_selector, - 1 << VIRTIO_PCI_QUEUE_ADDR_SHIFT, - VIRTIO_PCI_VRING_ALIGN, val); + if (val) { + virtio_pci__init_ioeventfd(kvm, vdev, + vpci->queue_selector); + vdev->ops->init_vq(kvm, vpci->dev, vpci->queue_selector, + 1 << VIRTIO_PCI_QUEUE_ADDR_SHIFT, + VIRTIO_PCI_VRING_ALIGN, val); + } else { + virtio_pci_exit_vq(kvm, vdev, vpci->queue_selector); + } break; case VIRTIO_PCI_QUEUE_SEL: vpci->queue_selector = ioport__read16(data); From patchwork Thu Jan 10 14:12:43 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Julien Thierry X-Patchwork-Id: 10755881 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id C571A14E5 for ; Thu, 10 Jan 2019 14:13:10 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id B3FC92995C for ; Thu, 10 Jan 2019 14:13:10 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id B260E29967; Thu, 10 Jan 2019 14:13:10 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 17EA12995F for ; Thu, 10 Jan 2019 14:13:10 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727980AbfAJONJ (ORCPT ); Thu, 10 Jan 2019 09:13:09 -0500 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:36434 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727933AbfAJONF (ORCPT ); Thu, 10 Jan 2019 09:13:05 -0500 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 184A81596; Thu, 10 Jan 2019 06:13:05 -0800 (PST) Received: from e112298-lin.cambridge.arm.com (usa-sjc-imap-foss1.foss.arm.com [10.72.51.249]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 093A03F5AF; Thu, 10 Jan 2019 06:13:03 -0800 (PST) From: Julien Thierry To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu Cc: jean-philippe.brucker@arm.com, will.deacon@arm.com, kraxel@redhat.com, Julien Thierry Subject: [PATCH kvmtool v2 06/13] virtio: Add reset() callback Date: Thu, 10 Jan 2019 14:12:43 +0000 Message-Id: <1547129570-14351-7-git-send-email-julien.thierry@arm.com> X-Mailer: git-send-email 1.9.1 In-Reply-To: <1547129570-14351-1-git-send-email-julien.thierry@arm.com> References: 
<1547129570-14351-1-git-send-email-julien.thierry@arm.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Jean-Philippe Brucker When the guest writes a status of 0, the device should be reset. Add a reset() callback for the transport layer to reset all queues and their ioeventfd. Signed-off-by: Jean-Philippe Brucker Signed-off-by: Julien Thierry --- include/kvm/virtio-mmio.h | 1 + include/kvm/virtio-pci.h | 1 + include/kvm/virtio.h | 1 + virtio/core.c | 9 +++++++++ virtio/mmio.c | 16 ++++++++++++---- virtio/pci.c | 20 +++++++++++++------- 6 files changed, 37 insertions(+), 11 deletions(-) diff --git a/include/kvm/virtio-mmio.h b/include/kvm/virtio-mmio.h index 835f421..0528947 100644 --- a/include/kvm/virtio-mmio.h +++ b/include/kvm/virtio-mmio.h @@ -54,6 +54,7 @@ struct virtio_mmio { int virtio_mmio_signal_vq(struct kvm *kvm, struct virtio_device *vdev, u32 vq); int virtio_mmio_signal_config(struct kvm *kvm, struct virtio_device *vdev); int virtio_mmio_exit(struct kvm *kvm, struct virtio_device *vdev); +int virtio_mmio_reset(struct kvm *kvm, struct virtio_device *vdev); int virtio_mmio_init(struct kvm *kvm, void *dev, struct virtio_device *vdev, int device_id, int subsys_id, int class); void virtio_mmio_assign_irq(struct device_header *dev_hdr); diff --git a/include/kvm/virtio-pci.h b/include/kvm/virtio-pci.h index b70cadd..278a259 100644 --- a/include/kvm/virtio-pci.h +++ b/include/kvm/virtio-pci.h @@ -55,6 +55,7 @@ struct virtio_pci { int virtio_pci__signal_vq(struct kvm *kvm, struct virtio_device *vdev, u32 vq); int virtio_pci__signal_config(struct kvm *kvm, struct virtio_device *vdev); int virtio_pci__exit(struct kvm *kvm, struct virtio_device *vdev); +int virtio_pci__reset(struct kvm *kvm, struct virtio_device *vdev); int virtio_pci__init(struct kvm *kvm, void *dev, struct virtio_device *vdev, int device_id, int subsys_id, int class); diff --git a/include/kvm/virtio.h b/include/kvm/virtio.h index f35c74d..19b9137 100644 --- a/include/kvm/virtio.h +++ b/include/kvm/virtio.h @@ -201,6 +201,7 @@ struct virtio_ops { int (*init)(struct kvm *kvm, void *dev, struct virtio_device *vdev, int device_id, int subsys_id, int class); int (*exit)(struct kvm *kvm, struct virtio_device *vdev); + int (*reset)(struct kvm *kvm, struct virtio_device *vdev); }; int virtio_init(struct kvm *kvm, void *dev, struct virtio_device *vdev, diff --git a/virtio/core.c b/virtio/core.c index 67d4114..e10ec36 100644 --- a/virtio/core.c +++ b/virtio/core.c @@ -241,6 +241,13 @@ void virtio_notify_status(struct kvm *kvm, struct virtio_device *vdev, } else if (!status && (vdev->status & VIRTIO__STATUS_START)) { vdev->status &= ~VIRTIO__STATUS_START; ext_status |= VIRTIO__STATUS_STOP; + + /* + * Reset virtqueues and stop all traffic now, so that the device + * can safely reset the backend in notify_status(). 
+ */ + if (ext_status & VIRTIO__STATUS_STOP) + vdev->ops->reset(kvm, vdev); } if (vdev->ops->notify_status) @@ -264,6 +271,7 @@ int virtio_init(struct kvm *kvm, void *dev, struct virtio_device *vdev, vdev->ops->signal_config = virtio_pci__signal_config; vdev->ops->init = virtio_pci__init; vdev->ops->exit = virtio_pci__exit; + vdev->ops->reset = virtio_pci__reset; vdev->ops->init(kvm, dev, vdev, device_id, subsys_id, class); break; case VIRTIO_MMIO: @@ -276,6 +284,7 @@ int virtio_init(struct kvm *kvm, void *dev, struct virtio_device *vdev, vdev->ops->signal_config = virtio_mmio_signal_config; vdev->ops->init = virtio_mmio_init; vdev->ops->exit = virtio_mmio_exit; + vdev->ops->reset = virtio_mmio_reset; vdev->ops->init(kvm, dev, vdev, device_id, subsys_id, class); break; default: diff --git a/virtio/mmio.c b/virtio/mmio.c index 4b8c20e..eff147a 100644 --- a/virtio/mmio.c +++ b/virtio/mmio.c @@ -326,15 +326,23 @@ int virtio_mmio_init(struct kvm *kvm, void *dev, struct virtio_device *vdev, return 0; } +int virtio_mmio_reset(struct kvm *kvm, struct virtio_device *vdev) +{ + int vq; + struct virtio_mmio *vmmio = vdev->virtio; + + for (vq = 0; vq < vdev->ops->get_vq_count(kvm, vmmio->dev); vq++) + virtio_mmio_exit_vq(kvm, vdev, vq); + + return 0; +} + int virtio_mmio_exit(struct kvm *kvm, struct virtio_device *vdev) { struct virtio_mmio *vmmio = vdev->virtio; - int i; + virtio_mmio_reset(kvm, vdev); kvm__deregister_mmio(kvm, vmmio->addr); - for (i = 0; i < VIRTIO_MMIO_MAX_VQ; i++) - ioeventfd__del_event(vmmio->addr + VIRTIO_MMIO_QUEUE_NOTIFY, i); - return 0; } diff --git a/virtio/pci.c b/virtio/pci.c index 2da2d3f..99653ca 100644 --- a/virtio/pci.c +++ b/virtio/pci.c @@ -77,8 +77,8 @@ static void virtio_pci_exit_vq(struct kvm *kvm, struct virtio_device *vdev, { struct virtio_pci *vpci = vdev->virtio; - ioeventfd__del_event(vpci->port_addr + VIRTIO_PCI_QUEUE_NOTIFY, vq); ioeventfd__del_event(vpci->mmio_addr + VIRTIO_PCI_QUEUE_NOTIFY, vq); + ioeventfd__del_event(vpci->port_addr + VIRTIO_PCI_QUEUE_NOTIFY, vq); virtio_exit_vq(kvm, vdev, vpci->dev, vq); } @@ -522,19 +522,25 @@ free_ioport: return r; } +int virtio_pci__reset(struct kvm *kvm, struct virtio_device *vdev) +{ + int vq; + struct virtio_pci *vpci = vdev->virtio; + + for (vq = 0; vq < vdev->ops->get_vq_count(kvm, vpci->dev); vq++) + virtio_pci_exit_vq(kvm, vdev, vq); + + return 0; +} + int virtio_pci__exit(struct kvm *kvm, struct virtio_device *vdev) { struct virtio_pci *vpci = vdev->virtio; - int i; + virtio_pci__reset(kvm, vdev); kvm__deregister_mmio(kvm, vpci->mmio_addr); kvm__deregister_mmio(kvm, vpci->msix_io_block); ioport__unregister(kvm, vpci->port_addr); - for (i = 0; i < VIRTIO_PCI_MAX_VQ; i++) { - ioeventfd__del_event(vpci->port_addr + VIRTIO_PCI_QUEUE_NOTIFY, i); - ioeventfd__del_event(vpci->mmio_addr + VIRTIO_PCI_QUEUE_NOTIFY, i); - } - return 0; } From patchwork Thu Jan 10 14:12:44 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Julien Thierry X-Patchwork-Id: 10755879 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 99B4614DE for ; Thu, 10 Jan 2019 14:13:10 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 89D682995C for ; Thu, 10 Jan 2019 14:13:10 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 75C0D29966; Thu, 10 Jan 
From: Julien Thierry
To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu
Cc: jean-philippe.brucker@arm.com, will.deacon@arm.com, kraxel@redhat.com, Julien Thierry
Subject: [PATCH kvmtool v2 07/13] net/uip: Add exit function
Date: Thu, 10 Jan 2019 14:12:44 +0000
Message-Id: <1547129570-14351-8-git-send-email-julien.thierry@arm.com>
In-Reply-To: <1547129570-14351-1-git-send-email-julien.thierry@arm.com>
References: <1547129570-14351-1-git-send-email-julien.thierry@arm.com>

From: Jean-Philippe Brucker

When resetting the virtio-net queues, the UIP state needs to be reset
as well. Stop all threads (one per TCP stream and UDP connection) and
free memory.
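The teardown pattern used throughout this patch is cancel-then-join
under the owning lock, then free what the thread owned. A minimal
standalone sketch of that pattern follows; the names are illustrative
rather than kvmtool's, and it follows the patch's convention of treating
a zero pthread_t as "not started".

	#include <pthread.h>
	#include <stdlib.h>

	struct worker {
		pthread_t	thread;	/* 0 when no thread has been started */
		void		*buf;	/* buffer owned by the thread */
		pthread_mutex_t	lock;
	};

	static void worker_exit(struct worker *w)
	{
		pthread_mutex_lock(&w->lock);
		if (w->thread) {
			/* Join after cancel so the thread is really gone before freeing. */
			pthread_cancel(w->thread);
			pthread_join(w->thread, NULL);
			w->thread = 0;
		}
		free(w->buf);
		w->buf = NULL;
		pthread_mutex_unlock(&w->lock);
	}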
Signed-off-by: Jean-Philippe Brucker Signed-off-by: Julien Thierry --- include/kvm/uip.h | 6 ++++++ net/uip/core.c | 17 +++++++++++++++++ net/uip/dhcp.c | 6 ++++++ net/uip/tcp.c | 54 ++++++++++++++++++++++++++++++++++++++++++++---------- net/uip/udp.c | 41 +++++++++++++++++++++++++++++++++++------ 5 files changed, 108 insertions(+), 16 deletions(-) diff --git a/include/kvm/uip.h b/include/kvm/uip.h index a9a8d85..efa508a 100644 --- a/include/kvm/uip.h +++ b/include/kvm/uip.h @@ -196,6 +196,7 @@ struct uip_info { struct list_head buf_head; struct mutex buf_lock; pthread_t udp_thread; + u8 *udp_buf; int udp_epollfd; int buf_free_nr; int buf_used_nr; @@ -249,6 +250,7 @@ struct uip_tcp_socket { int read_done; u32 dip, sip; u8 *payload; + u8 *buf; int fd; }; @@ -336,6 +338,9 @@ int uip_tx(struct iovec *iov, u16 out, struct uip_info *info); int uip_rx(struct iovec *iov, u16 in, struct uip_info *info); void uip_static_init(struct uip_info *info); int uip_init(struct uip_info *info); +void uip_exit(struct uip_info *info); +void uip_tcp_exit(struct uip_info *info); +void uip_udp_exit(struct uip_info *info); int uip_tx_do_ipv4_udp_dhcp(struct uip_tx_arg *arg); int uip_tx_do_ipv4_icmp(struct uip_tx_arg *arg); @@ -359,4 +364,5 @@ int uip_udp_make_pkg(struct uip_info *info, struct uip_udp_socket *sk, struct ui bool uip_udp_is_dhcp(struct uip_udp *udp); int uip_dhcp_get_dns(struct uip_info *info); +void uip_dhcp_exit(struct uip_info *info); #endif /* KVM__UIP_H */ diff --git a/net/uip/core.c b/net/uip/core.c index e860f3a..977b9b0 100644 --- a/net/uip/core.c +++ b/net/uip/core.c @@ -154,3 +154,20 @@ int uip_init(struct uip_info *info) return 0; } + +void uip_exit(struct uip_info *info) +{ + struct uip_buf *buf, *next; + + uip_udp_exit(info); + uip_tcp_exit(info); + uip_dhcp_exit(info); + + list_for_each_entry_safe(buf, next, &info->buf_head, list) { + free(buf->vnet); + free(buf->eth); + list_del(&buf->list); + free(buf); + } + uip_static_init(info); +} diff --git a/net/uip/dhcp.c b/net/uip/dhcp.c index b17d352..8f01300 100644 --- a/net/uip/dhcp.c +++ b/net/uip/dhcp.c @@ -200,3 +200,9 @@ int uip_tx_do_ipv4_udp_dhcp(struct uip_tx_arg *arg) return 0; } + +void uip_dhcp_exit(struct uip_info *info) +{ + free(info->domain_name); + info->domain_name = NULL; +} diff --git a/net/uip/tcp.c b/net/uip/tcp.c index 3c30ade..8e0ad52 100644 --- a/net/uip/tcp.c +++ b/net/uip/tcp.c @@ -18,6 +18,7 @@ static int uip_tcp_socket_close(struct uip_tcp_socket *sk, int how) list_del(&sk->list); mutex_unlock(sk->lock); + free(sk->buf); free(sk); } @@ -94,6 +95,24 @@ static struct uip_tcp_socket *uip_tcp_socket_alloc(struct uip_tx_arg *arg, u32 s return sk; } +/* Caller holds the sk lock */ +static void uip_tcp_socket_free(struct uip_tcp_socket *sk) +{ + /* + * Here we assume that the virtqueues are already inactive so we don't + * race with uip_tx_do_ipv4_tcp. We are racing with + * uip_tcp_socket_thread though, but holding the sk lock ensures that it + * cannot free data concurrently. 
+ */ + if (sk->thread) { + pthread_cancel(sk->thread); + pthread_join(sk->thread, NULL); + } + + sk->write_done = sk->read_done = 1; + uip_tcp_socket_close(sk, SHUT_RDWR); +} + static int uip_tcp_payload_send(struct uip_tcp_socket *sk, u8 flag, u16 payload_len) { struct uip_info *info; @@ -175,20 +194,16 @@ static void *uip_tcp_socket_thread(void *p) { struct uip_tcp_socket *sk; int len, left, ret; - u8 *payload, *pos; + u8 *pos; kvm__set_thread_name("uip-tcp"); sk = p; - payload = malloc(UIP_MAX_TCP_PAYLOAD); - if (!payload) - goto out; - while (1) { - pos = payload; + pos = sk->buf; - ret = read(sk->fd, payload, UIP_MAX_TCP_PAYLOAD); + ret = read(sk->fd, sk->buf, UIP_MAX_TCP_PAYLOAD); if (ret <= 0 || ret > UIP_MAX_TCP_PAYLOAD) goto out; @@ -224,7 +239,6 @@ out: sk->read_done = 1; - free(payload); pthread_exit(NULL); return NULL; @@ -232,8 +246,18 @@ out: static int uip_tcp_socket_receive(struct uip_tcp_socket *sk) { - if (sk->thread == 0) - return pthread_create(&sk->thread, NULL, uip_tcp_socket_thread, (void *)sk); + int ret; + + if (sk->thread == 0) { + sk->buf = malloc(UIP_MAX_TCP_PAYLOAD); + if (!sk->buf) + return -ENOMEM; + ret = pthread_create(&sk->thread, NULL, uip_tcp_socket_thread, + (void *)sk); + if (ret) + free(sk->buf); + return ret; + } return 0; } @@ -346,3 +370,13 @@ int uip_tx_do_ipv4_tcp(struct uip_tx_arg *arg) out: return 0; } + +void uip_tcp_exit(struct uip_info *info) +{ + struct uip_tcp_socket *sk, *next; + + mutex_lock(&info->tcp_socket_lock); + list_for_each_entry_safe(sk, next, &info->tcp_socket_head, list) + uip_tcp_socket_free(sk); + mutex_unlock(&info->tcp_socket_lock); +} diff --git a/net/uip/udp.c b/net/uip/udp.c index dd288c5..d2580d0 100644 --- a/net/uip/udp.c +++ b/net/uip/udp.c @@ -164,10 +164,7 @@ static void *uip_udp_socket_thread(void *p) kvm__set_thread_name("uip-udp"); info = p; - - do { - payload = malloc(UIP_MAX_UDP_PAYLOAD); - } while (!payload); + payload = info->udp_buf; while (1) { nfds = epoll_wait(info->udp_epollfd, events, UIP_UDP_MAX_EVENTS, -1); @@ -196,7 +193,11 @@ static void *uip_udp_socket_thread(void *p) } } - free(payload); + mutex_lock(&info->udp_socket_lock); + free(info->udp_buf); + info->udp_buf = NULL; + mutex_unlock(&info->udp_socket_lock); + pthread_exit(NULL); return NULL; } @@ -232,8 +233,36 @@ int uip_tx_do_ipv4_udp(struct uip_tx_arg *arg) if (ret) return -1; - if (!info->udp_thread) + if (!info->udp_thread) { + info->udp_buf = malloc(UIP_MAX_UDP_PAYLOAD); + if (!info->udp_buf) + return -1; + pthread_create(&info->udp_thread, NULL, uip_udp_socket_thread, (void *)info); + } return 0; } + +void uip_udp_exit(struct uip_info *info) +{ + struct uip_udp_socket *sk, *next; + + mutex_lock(&info->udp_socket_lock); + if (info->udp_thread) { + pthread_cancel(info->udp_thread); + pthread_join(info->udp_thread, NULL); + info->udp_thread = 0; + free(info->udp_buf); + } + if (info->udp_epollfd > 0) { + close(info->udp_epollfd); + info->udp_epollfd = 0; + } + + list_for_each_entry_safe(sk, next, &info->udp_socket_head, list) { + close(sk->fd); + free(sk); + } + mutex_unlock(&info->udp_socket_lock); +} From patchwork Thu Jan 10 14:12:45 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Julien Thierry X-Patchwork-Id: 10755885 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 7530813B4 for ; Thu, 10 Jan 2019 14:13:13 +0000 (UTC) Received: from 
mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 6549629961 for ; Thu, 10 Jan 2019 14:13:13 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 63B232995C; Thu, 10 Jan 2019 14:13:13 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id B782C29957 for ; Thu, 10 Jan 2019 14:13:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728024AbfAJONL (ORCPT ); Thu, 10 Jan 2019 09:13:11 -0500 Received: from foss.arm.com ([217.140.101.70]:36456 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727963AbfAJONI (ORCPT ); Thu, 10 Jan 2019 09:13:08 -0500 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id D5A141596; Thu, 10 Jan 2019 06:13:07 -0800 (PST) Received: from e112298-lin.cambridge.arm.com (usa-sjc-imap-foss1.foss.arm.com [10.72.51.249]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id A37BA3F5AF; Thu, 10 Jan 2019 06:13:06 -0800 (PST) From: Julien Thierry To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu Cc: jean-philippe.brucker@arm.com, will.deacon@arm.com, kraxel@redhat.com, Julien Thierry Subject: [PATCH kvmtool v2 08/13] virtio/net: Clean virtqueue state Date: Thu, 10 Jan 2019 14:12:45 +0000 Message-Id: <1547129570-14351-9-git-send-email-julien.thierry@arm.com> X-Mailer: git-send-email 1.9.1 In-Reply-To: <1547129570-14351-1-git-send-email-julien.thierry@arm.com> References: <1547129570-14351-1-git-send-email-julien.thierry@arm.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Jean-Philippe Brucker Currently the virtqueue state is mixed with the netdev state. Move it to a separate structure. 
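For illustration only (this sketch is not part of the patch and uses made-up names), the change follows a common refactoring pattern: several parallel arrays indexed by queue number are replaced by one array of per-queue structures, each carrying a back-pointer to its device, so a worker thread can simply be handed its own queue instead of deriving an index. A minimal standalone example of that pattern, buildable with -pthread:

#include <pthread.h>
#include <stdio.h>

#define NUM_QUEUES 3

struct demo_dev;

/* Per-queue state: the queue owns its thread, lock and condition. */
struct demo_queue {
	int			id;	/* index within the device */
	struct demo_dev		*dev;	/* back-pointer, replaces a shared index */
	pthread_t		thread;
	pthread_mutex_t		lock;
	pthread_cond_t		cond;
};

/* Device state no longer needs one array per field. */
struct demo_dev {
	struct demo_queue	queues[NUM_QUEUES];
};

static void *queue_thread(void *p)
{
	/* The thread receives its own queue, no global counter needed. */
	struct demo_queue *q = p;

	printf("queue %d of dev %p started\n", q->id, (void *)q->dev);
	return NULL;
}

int main(void)
{
	struct demo_dev dev;
	int i;

	for (i = 0; i < NUM_QUEUES; i++) {
		struct demo_queue *q = &dev.queues[i];

		q->id = i;
		q->dev = &dev;
		pthread_mutex_init(&q->lock, NULL);
		pthread_cond_init(&q->cond, NULL);
		pthread_create(&q->thread, NULL, queue_thread, q);
	}

	for (i = 0; i < NUM_QUEUES; i++)
		pthread_join(dev.queues[i].thread, NULL);

	return 0;
}
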
Signed-off-by: Jean-Philippe Brucker Signed-off-by: Julien Thierry --- virtio/net.c | 110 +++++++++++++++++++++++++++++++---------------------------- 1 file changed, 57 insertions(+), 53 deletions(-) diff --git a/virtio/net.c b/virtio/net.c index d65d04e..ef8e226 100644 --- a/virtio/net.c +++ b/virtio/net.c @@ -36,18 +36,23 @@ struct net_dev_operations { int (*tx)(struct iovec *iov, u16 in, struct net_dev *ndev); }; +struct net_dev_queue { + int id; + struct net_dev *ndev; + struct virt_queue vq; + pthread_t thread; + struct mutex lock; + pthread_cond_t cond; +}; + struct net_dev { struct mutex mutex; struct virtio_device vdev; struct list_head list; - struct virt_queue vqs[VIRTIO_NET_NUM_QUEUES * 2 + 1]; + struct net_dev_queue queues[VIRTIO_NET_NUM_QUEUES * 2 + 1]; struct virtio_net_config config; - u32 features, rx_vqs, tx_vqs, queue_pairs; - - pthread_t io_thread[VIRTIO_NET_NUM_QUEUES * 2 + 1]; - struct mutex io_lock[VIRTIO_NET_NUM_QUEUES * 2 + 1]; - pthread_cond_t io_cond[VIRTIO_NET_NUM_QUEUES * 2 + 1]; + u32 features, queue_pairs; int vhost_fd; int tap_fd; @@ -92,28 +97,22 @@ static void virtio_net_fix_rx_hdr(struct virtio_net_hdr *hdr, struct net_dev *nd static void *virtio_net_rx_thread(void *p) { struct iovec iov[VIRTIO_NET_QUEUE_SIZE]; - struct virt_queue *vq; + struct net_dev_queue *queue = p; + struct virt_queue *vq = &queue->vq; + struct net_dev *ndev = queue->ndev; struct kvm *kvm; - struct net_dev *ndev = p; u16 out, in; u16 head; int len, copied; - u32 id; - - mutex_lock(&ndev->mutex); - id = ndev->rx_vqs++ * 2; - mutex_unlock(&ndev->mutex); kvm__set_thread_name("virtio-net-rx"); kvm = ndev->kvm; - vq = &ndev->vqs[id]; - while (1) { - mutex_lock(&ndev->io_lock[id]); + mutex_lock(&queue->lock); if (!virt_queue__available(vq)) - pthread_cond_wait(&ndev->io_cond[id], &ndev->io_lock[id].mutex); - mutex_unlock(&ndev->io_lock[id]); + pthread_cond_wait(&queue->cond, &queue->lock.mutex); + mutex_unlock(&queue->lock); while (virt_queue__available(vq)) { unsigned char buffer[MAX_PACKET_SIZE + sizeof(struct virtio_net_hdr_mrg_rxbuf)]; @@ -127,7 +126,7 @@ static void *virtio_net_rx_thread(void *p) len = ndev->ops->rx(&dummy_iov, 1, ndev); if (len < 0) { pr_warning("%s: rx on vq %u failed (%d), exiting thread\n", - __func__, id, len); + __func__, queue->id, len); goto out_err; } @@ -155,7 +154,7 @@ static void *virtio_net_rx_thread(void *p) /* We should interrupt guest right now, otherwise latency is huge. 
*/ if (virtio_queue__should_signal(vq)) - ndev->vdev.ops->signal_vq(kvm, &ndev->vdev, id); + ndev->vdev.ops->signal_vq(kvm, &ndev->vdev, queue->id); } } @@ -168,28 +167,23 @@ out_err: static void *virtio_net_tx_thread(void *p) { struct iovec iov[VIRTIO_NET_QUEUE_SIZE]; - struct virt_queue *vq; + struct net_dev_queue *queue = p; + struct virt_queue *vq = &queue->vq; + struct net_dev *ndev = queue->ndev; struct kvm *kvm; - struct net_dev *ndev = p; u16 out, in; u16 head; int len; - u32 id; - - mutex_lock(&ndev->mutex); - id = ndev->tx_vqs++ * 2 + 1; - mutex_unlock(&ndev->mutex); kvm__set_thread_name("virtio-net-tx"); kvm = ndev->kvm; - vq = &ndev->vqs[id]; while (1) { - mutex_lock(&ndev->io_lock[id]); + mutex_lock(&queue->lock); if (!virt_queue__available(vq)) - pthread_cond_wait(&ndev->io_cond[id], &ndev->io_lock[id].mutex); - mutex_unlock(&ndev->io_lock[id]); + pthread_cond_wait(&queue->cond, &queue->lock.mutex); + mutex_unlock(&queue->lock); while (virt_queue__available(vq)) { struct virtio_net_hdr *hdr; @@ -199,7 +193,7 @@ static void *virtio_net_tx_thread(void *p) len = ndev->ops->tx(iov, out, ndev); if (len < 0) { pr_warning("%s: tx on vq %u failed (%d)\n", - __func__, id, errno); + __func__, queue->id, errno); goto out_err; } @@ -207,7 +201,7 @@ static void *virtio_net_tx_thread(void *p) } if (virtio_queue__should_signal(vq)) - ndev->vdev.ops->signal_vq(kvm, &ndev->vdev, id); + ndev->vdev.ops->signal_vq(kvm, &ndev->vdev, queue->id); } out_err: @@ -224,24 +218,24 @@ static virtio_net_ctrl_ack virtio_net_handle_mq(struct kvm* kvm, struct net_dev static void *virtio_net_ctrl_thread(void *p) { struct iovec iov[VIRTIO_NET_QUEUE_SIZE]; + struct net_dev_queue *queue = p; + struct virt_queue *vq = &queue->vq; + struct net_dev *ndev = queue->ndev; u16 out, in, head; - struct net_dev *ndev = p; struct kvm *kvm = ndev->kvm; - u32 id = ndev->queue_pairs * 2; - struct virt_queue *vq = &ndev->vqs[id]; struct virtio_net_ctrl_hdr *ctrl; virtio_net_ctrl_ack *ack; kvm__set_thread_name("virtio-net-ctrl"); while (1) { - mutex_lock(&ndev->io_lock[id]); + mutex_lock(&queue->lock); if (!virt_queue__available(vq)) - pthread_cond_wait(&ndev->io_cond[id], &ndev->io_lock[id].mutex); - mutex_unlock(&ndev->io_lock[id]); + pthread_cond_wait(&queue->cond, &queue->lock.mutex); + mutex_unlock(&queue->lock); while (virt_queue__available(vq)) { - head = virt_queue__get_iov(&ndev->vqs[id], iov, &out, &in, kvm); + head = virt_queue__get_iov(vq, iov, &out, &in, kvm); ctrl = iov[0].iov_base; ack = iov[out].iov_base; @@ -253,11 +247,11 @@ static void *virtio_net_ctrl_thread(void *p) *ack = VIRTIO_NET_ERR; break; } - virt_queue__set_used_elem(&ndev->vqs[id], head, iov[out].iov_len); + virt_queue__set_used_elem(vq, head, iov[out].iov_len); } - if (virtio_queue__should_signal(&ndev->vqs[id])) - ndev->vdev.ops->signal_vq(kvm, &ndev->vdev, id); + if (virtio_queue__should_signal(vq)) + ndev->vdev.ops->signal_vq(kvm, &ndev->vdev, queue->id); } pthread_exit(NULL); @@ -267,14 +261,16 @@ static void *virtio_net_ctrl_thread(void *p) static void virtio_net_handle_callback(struct kvm *kvm, struct net_dev *ndev, int queue) { + struct net_dev_queue *net_queue = &ndev->queues[queue]; + if ((u32)queue >= (ndev->queue_pairs * 2 + 1)) { pr_warning("Unknown queue index %u", queue); return; } - mutex_lock(&ndev->io_lock[queue]); - pthread_cond_signal(&ndev->io_cond[queue]); - mutex_unlock(&ndev->io_lock[queue]); + mutex_lock(&net_queue->lock); + pthread_cond_signal(&net_queue->cond); + mutex_unlock(&net_queue->lock); } static int 
virtio_net_request_tap(struct net_dev *ndev, struct ifreq *ifr, @@ -552,6 +548,7 @@ static int init_vq(struct kvm *kvm, void *dev, u32 vq, u32 page_size, u32 align, u32 pfn) { struct vhost_vring_state state = { .index = vq }; + struct net_dev_queue *net_queue; struct vhost_vring_addr addr; struct net_dev *ndev = dev; struct virt_queue *queue; @@ -560,24 +557,30 @@ static int init_vq(struct kvm *kvm, void *dev, u32 vq, u32 page_size, u32 align, compat__remove_message(compat_id); - queue = &ndev->vqs[vq]; + net_queue = &ndev->queues[vq]; + net_queue->id = vq; + net_queue->ndev = ndev; + queue = &net_queue->vq; queue->pfn = pfn; p = virtio_get_vq(kvm, queue->pfn, page_size); vring_init(&queue->vring, VIRTIO_NET_QUEUE_SIZE, p, align); virtio_init_device_vq(&ndev->vdev, queue); - mutex_init(&ndev->io_lock[vq]); - pthread_cond_init(&ndev->io_cond[vq], NULL); + mutex_init(&net_queue->lock); + pthread_cond_init(&net_queue->cond, NULL); if (is_ctrl_vq(ndev, vq)) { - pthread_create(&ndev->io_thread[vq], NULL, virtio_net_ctrl_thread, ndev); + pthread_create(&net_queue->thread, NULL, virtio_net_ctrl_thread, + net_queue); return 0; } else if (ndev->vhost_fd == 0 ) { if (vq & 1) - pthread_create(&ndev->io_thread[vq], NULL, virtio_net_tx_thread, ndev); + pthread_create(&net_queue->thread, NULL, + virtio_net_tx_thread, net_queue); else - pthread_create(&ndev->io_thread[vq], NULL, virtio_net_rx_thread, ndev); + pthread_create(&net_queue->thread, NULL, + virtio_net_rx_thread, net_queue); return 0; } @@ -611,6 +614,7 @@ static int init_vq(struct kvm *kvm, void *dev, u32 vq, u32 page_size, u32 align, static void notify_vq_gsi(struct kvm *kvm, void *dev, u32 vq, u32 gsi) { struct net_dev *ndev = dev; + struct net_dev_queue *queue = &ndev->queues[vq]; struct vhost_vring_file file; int r; @@ -666,7 +670,7 @@ static struct virt_queue *get_vq(struct kvm *kvm, void *dev, u32 vq) { struct net_dev *ndev = dev; - return &ndev->vqs[vq]; + return &ndev->queues[vq].vq; } static int get_size_vq(struct kvm *kvm, void *dev, u32 vq) From patchwork Thu Jan 10 14:12:46 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Julien Thierry X-Patchwork-Id: 10755883 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 71F3913B4 for ; Thu, 10 Jan 2019 14:13:12 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 61CC82994B for ; Thu, 10 Jan 2019 14:13:12 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 5FFC529959; Thu, 10 Jan 2019 14:13:12 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id E9E8B29961 for ; Thu, 10 Jan 2019 14:13:11 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728019AbfAJONL (ORCPT ); Thu, 10 Jan 2019 09:13:11 -0500 Received: from foss.arm.com ([217.140.101.70]:36466 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727983AbfAJONJ (ORCPT ); Thu, 10 Jan 2019 09:13:09 -0500 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by 
usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 51BDA15BE; Thu, 10 Jan 2019 06:13:09 -0800 (PST) Received: from e112298-lin.cambridge.arm.com (usa-sjc-imap-foss1.foss.arm.com [10.72.51.249]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 1F80C3F5AF; Thu, 10 Jan 2019 06:13:07 -0800 (PST) From: Julien Thierry To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu Cc: jean-philippe.brucker@arm.com, will.deacon@arm.com, kraxel@redhat.com, Julien Thierry Subject: [PATCH kvmtool v2 09/13] virtio/net: Implement device and virtqueue reset Date: Thu, 10 Jan 2019 14:12:46 +0000 Message-Id: <1547129570-14351-10-git-send-email-julien.thierry@arm.com> X-Mailer: git-send-email 1.9.1 In-Reply-To: <1547129570-14351-1-git-send-email-julien.thierry@arm.com> References: <1547129570-14351-1-git-send-email-julien.thierry@arm.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Jean-Philippe Brucker On exit_vq(), clean all resources allocated for the queue. When the device is reset, clean the backend. Signed-off-by: Jean-Philippe Brucker Signed-off-by: Julien Thierry --- virtio/net.c | 63 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 63 insertions(+) diff --git a/virtio/net.c b/virtio/net.c index ef8e226..a05f4cd 100644 --- a/virtio/net.c +++ b/virtio/net.c @@ -43,6 +43,8 @@ struct net_dev_queue { pthread_t thread; struct mutex lock; pthread_cond_t cond; + int gsi; + int irqfd; }; struct net_dev { @@ -361,6 +363,23 @@ fail: return 0; } +static void virtio_net__tap_exit(struct net_dev *ndev) +{ + int sock; + struct ifreq ifr; + + if (ndev->params->tapif) + return; + + sock = socket(AF_INET, SOCK_STREAM, 0); + strncpy(ifr.ifr_name, ndev->tap_name, sizeof(ndev->tap_name)); + ioctl(sock, SIOCGIFFLAGS, &ifr); + ifr.ifr_flags &= ~(IFF_UP | IFF_RUNNING); + if (ioctl(sock, SIOCSIFFLAGS, &ifr) < 0) + pr_warning("Could not bring tap device down"); + close(sock); +} + static bool virtio_net__tap_create(struct net_dev *ndev) { int offload; @@ -533,10 +552,21 @@ static void virtio_net_start(struct net_dev *ndev) } } +static void virtio_net_stop(struct net_dev *ndev) +{ + /* Undo whatever start() did */ + if (ndev->mode == NET_MODE_TAP) + virtio_net__tap_exit(ndev); + else + uip_exit(&ndev->info); +} + static void notify_status(struct kvm *kvm, void *dev, u32 status) { if (status & VIRTIO__STATUS_START) virtio_net_start(dev); + else if (status & VIRTIO__STATUS_STOP) + virtio_net_stop(dev); } static bool is_ctrl_vq(struct net_dev *ndev, u32 vq) @@ -611,6 +641,35 @@ static int init_vq(struct kvm *kvm, void *dev, u32 vq, u32 page_size, u32 align, return 0; } +static void exit_vq(struct kvm *kvm, void *dev, u32 vq) +{ + struct net_dev *ndev = dev; + struct net_dev_queue *queue = &ndev->queues[vq]; + + if (!is_ctrl_vq(ndev, vq) && queue->gsi) { + irq__del_irqfd(kvm, queue->gsi, queue->irqfd); + close(queue->irqfd); + queue->gsi = queue->irqfd = 0; + } + + /* + * TODO: vhost reset owner. It's the only way to cleanly stop vhost, but + * we can't restart it at the moment. + */ + if (ndev->vhost_fd && !is_ctrl_vq(ndev, vq)) { + pr_warning("Cannot reset VHOST queue"); + ioctl(ndev->vhost_fd, VHOST_RESET_OWNER); + return; + } + + /* + * Threads are waiting on cancellation points (readv or + * pthread_cond_wait) and should stop gracefully.
+ */ + pthread_cancel(queue->thread); + pthread_join(queue->thread, NULL); +} + static void notify_vq_gsi(struct kvm *kvm, void *dev, u32 vq, u32 gsi) { struct net_dev *ndev = dev; @@ -630,6 +689,9 @@ static void notify_vq_gsi(struct kvm *kvm, void *dev, u32 vq, u32 gsi) if (r < 0) die_perror("KVM_IRQFD failed"); + queue->irqfd = file.fd; + queue->gsi = gsi; + r = ioctl(ndev->vhost_fd, VHOST_SET_VRING_CALL, &file); if (r < 0) die_perror("VHOST_SET_VRING_CALL failed"); @@ -698,6 +760,7 @@ static struct virtio_ops net_dev_virtio_ops = { .set_guest_features = set_guest_features, .get_vq_count = get_vq_count, .init_vq = init_vq, + .exit_vq = exit_vq, .get_vq = get_vq, .get_size_vq = get_size_vq, .set_size_vq = set_size_vq, From patchwork Thu Jan 10 14:12:47 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Julien Thierry X-Patchwork-Id: 10755887 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 2E9A114DE for ; Thu, 10 Jan 2019 14:13:15 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 1E8AB29949 for ; Thu, 10 Jan 2019 14:13:15 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 1D1D12994D; Thu, 10 Jan 2019 14:13:15 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id A773429955 for ; Thu, 10 Jan 2019 14:13:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728069AbfAJONN (ORCPT ); Thu, 10 Jan 2019 09:13:13 -0500 Received: from foss.arm.com ([217.140.101.70]:36474 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728017AbfAJONL (ORCPT ); Thu, 10 Jan 2019 09:13:11 -0500 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id C1FF21596; Thu, 10 Jan 2019 06:13:10 -0800 (PST) Received: from e112298-lin.cambridge.arm.com (usa-sjc-imap-foss1.foss.arm.com [10.72.51.249]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 8FE733F5AF; Thu, 10 Jan 2019 06:13:09 -0800 (PST) From: Julien Thierry To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu Cc: jean-philippe.brucker@arm.com, will.deacon@arm.com, kraxel@redhat.com, Julien Thierry Subject: [PATCH kvmtool v2 10/13] virtio/blk: Reset virtqueue Date: Thu, 10 Jan 2019 14:12:47 +0000 Message-Id: <1547129570-14351-11-git-send-email-julien.thierry@arm.com> X-Mailer: git-send-email 1.9.1 In-Reply-To: <1547129570-14351-1-git-send-email-julien.thierry@arm.com> References: <1547129570-14351-1-git-send-email-julien.thierry@arm.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Jean-Philippe Brucker Move pthread creation to init_vq, and kill the thread in exit_vq. Initialize the virtqueue states at runtime. All in-flight I/O is canceled with the virtqueue pthreads, except for AIO threads, but after reading the code I'm not sure if AIO has ever worked anyway. 
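For background (an illustrative sketch, not the kvmtool code; every name below is made up), the lifecycle adopted here is the usual eventfd-kicked worker pattern: the eventfd and the I/O thread are created when the queue is set up, the thread blocks in read() on the eventfd, and teardown cancels the thread at that cancellation point before joining it and closing the fd. A minimal version assuming only POSIX threads and Linux eventfd:

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/eventfd.h>
#include <unistd.h>

static int io_efd;
static pthread_t io_thread;

static void *io_worker(void *arg)
{
	uint64_t data;

	while (1) {
		/* read() on the eventfd is a cancellation point. */
		if (read(io_efd, &data, sizeof(data)) < 0)
			continue;
		printf("kicked %llu time(s)\n", (unsigned long long)data);
	}
	return NULL;
}

static int queue_init(void)
{
	io_efd = eventfd(0, 0);
	if (io_efd < 0)
		return -1;
	if (pthread_create(&io_thread, NULL, io_worker, NULL)) {
		close(io_efd);
		return -1;
	}
	return 0;
}

static void queue_exit(void)
{
	/* Stop the worker at its next read(), then reclaim the fd. */
	pthread_cancel(io_thread);
	pthread_join(io_thread, NULL);
	close(io_efd);
}

int main(void)
{
	uint64_t one = 1;

	if (queue_init())
		return 1;
	if (write(io_efd, &one, sizeof(one)) < 0)	/* kick the worker once */
		return 1;
	usleep(10000);
	queue_exit();
	return 0;
}

The ordering shown here (cancel and join before closing the fd) is one reasonable choice; the patch below closes the eventfd first, which also works because the worker simply retries until the cancellation is delivered at the next read().
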
Signed-off-by: Jean-Philippe Brucker Signed-off-by: Julien Thierry --- virtio/blk.c | 67 ++++++++++++++++++++++++++++++++++++++++-------------------- 1 file changed, 45 insertions(+), 22 deletions(-) diff --git a/virtio/blk.c b/virtio/blk.c index 6a6b3b7..a57df2e 100644 --- a/virtio/blk.c +++ b/virtio/blk.c @@ -178,9 +178,29 @@ static void notify_status(struct kvm *kvm, void *dev, u32 status) { } +static void *virtio_blk_thread(void *dev) +{ + struct blk_dev *bdev = dev; + u64 data; + int r; + + kvm__set_thread_name("virtio-blk-io"); + + while (1) { + r = read(bdev->io_efd, &data, sizeof(u64)); + if (r < 0) + continue; + virtio_blk_do_io(bdev->kvm, &bdev->vqs[0], bdev); + } + + pthread_exit(NULL); + return NULL; +} + static int init_vq(struct kvm *kvm, void *dev, u32 vq, u32 page_size, u32 align, u32 pfn) { + unsigned int i; struct blk_dev *bdev = dev; struct virt_queue *queue; void *p; @@ -194,26 +214,37 @@ static int init_vq(struct kvm *kvm, void *dev, u32 vq, u32 page_size, u32 align, vring_init(&queue->vring, VIRTIO_BLK_QUEUE_SIZE, p, align); virtio_init_device_vq(&bdev->vdev, queue); + if (vq != 0) + return 0; + + for (i = 0; i < ARRAY_SIZE(bdev->reqs); i++) { + bdev->reqs[i] = (struct blk_dev_req) { + .bdev = bdev, + .kvm = kvm, + }; + } + + mutex_init(&bdev->mutex); + bdev->io_efd = eventfd(0, 0); + if (bdev->io_efd < 0) + return -errno; + + if (pthread_create(&bdev->io_thread, NULL, virtio_blk_thread, bdev)) + return -errno; + return 0; } -static void *virtio_blk_thread(void *dev) +static void exit_vq(struct kvm *kvm, void *dev, u32 vq) { struct blk_dev *bdev = dev; - u64 data; - int r; - kvm__set_thread_name("virtio-blk-io"); + if (vq != 0) + return; - while (1) { - r = read(bdev->io_efd, &data, sizeof(u64)); - if (r < 0) - continue; - virtio_blk_do_io(bdev->kvm, &bdev->vqs[0], bdev); - } - - pthread_exit(NULL); - return NULL; + close(bdev->io_efd); + pthread_cancel(bdev->io_thread); + pthread_join(bdev->io_thread, NULL); } static int notify_vq(struct kvm *kvm, void *dev, u32 vq) @@ -259,6 +290,7 @@ static struct virtio_ops blk_dev_virtio_ops = { .set_guest_features = set_guest_features, .get_vq_count = get_vq_count, .init_vq = init_vq, + .exit_vq = exit_vq, .notify_status = notify_status, .notify_vq = notify_vq, .get_vq = get_vq, @@ -269,7 +301,6 @@ static struct virtio_ops blk_dev_virtio_ops = { static int virtio_blk__init_one(struct kvm *kvm, struct disk_image *disk) { struct blk_dev *bdev; - unsigned int i; if (!disk) return -EINVAL; @@ -279,13 +310,11 @@ static int virtio_blk__init_one(struct kvm *kvm, struct disk_image *disk) return -ENOMEM; *bdev = (struct blk_dev) { - .mutex = MUTEX_INITIALIZER, .disk = disk, .blk_config = (struct virtio_blk_config) { .capacity = disk->size / SECTOR_SIZE, .seg_max = DISK_SEG_MAX, }, - .io_efd = eventfd(0, 0), .kvm = kvm, }; @@ -295,14 +324,8 @@ static int virtio_blk__init_one(struct kvm *kvm, struct disk_image *disk) list_add_tail(&bdev->list, &bdevs); - for (i = 0; i < ARRAY_SIZE(bdev->reqs); i++) { - bdev->reqs[i].bdev = bdev; - bdev->reqs[i].kvm = kvm; - } - disk_image__set_callback(bdev->disk, virtio_blk_complete); - pthread_create(&bdev->io_thread, NULL, virtio_blk_thread, bdev); if (compat_id == -1) compat_id = virtio_compat_add_message("virtio-blk", "CONFIG_VIRTIO_BLK"); From patchwork Thu Jan 10 14:12:48 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Julien Thierry X-Patchwork-Id: 10755893 Return-Path: Received: from mail.wl.linuxfoundation.org 
(pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id AC78C14DE for ; Thu, 10 Jan 2019 14:13:19 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 9B8CF29941 for ; Thu, 10 Jan 2019 14:13:19 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 8F7662994D; Thu, 10 Jan 2019 14:13:19 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 29BA42994B for ; Thu, 10 Jan 2019 14:13:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728057AbfAJONN (ORCPT ); Thu, 10 Jan 2019 09:13:13 -0500 Received: from foss.arm.com ([217.140.101.70]:36488 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727984AbfAJONM (ORCPT ); Thu, 10 Jan 2019 09:13:12 -0500 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 4745A15BE; Thu, 10 Jan 2019 06:13:12 -0800 (PST) Received: from e112298-lin.cambridge.arm.com (usa-sjc-imap-foss1.foss.arm.com [10.72.51.249]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 0BCB23F5AF; Thu, 10 Jan 2019 06:13:10 -0800 (PST) From: Julien Thierry To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu Cc: jean-philippe.brucker@arm.com, will.deacon@arm.com, kraxel@redhat.com, Julien Thierry Subject: [PATCH kvmtool v2 11/13] threadpool: Add cancel() function Date: Thu, 10 Jan 2019 14:12:48 +0000 Message-Id: <1547129570-14351-12-git-send-email-julien.thierry@arm.com> X-Mailer: git-send-email 1.9.1 In-Reply-To: <1547129570-14351-1-git-send-email-julien.thierry@arm.com> References: <1547129570-14351-1-git-send-email-julien.thierry@arm.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Jean-Philippe Brucker When resetting a virtqueue, it is often necessary to make sure that the associated threadpool job isn't running anymore. Add a function to cancel a job. A threadpool job has three states: idle, queued and running. A job is queued when it is in the job list. It is running when it is out of the list, but its signal count is greater than zero. It is idle when it is both out of the list and its signal count is zero. The cancel() function simply waits for the job to be idle. It is up to the caller to make sure that the job isn't queued concurrently.
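To make the state machine concrete (an illustrative model only, not the kvmtool code), a job can be described by two pieces of state guarded by one mutex: whether it sits on the job list, and its signal count. cancel() then loops until the job is neither queued nor running:

#include <pthread.h>
#include <stdbool.h>

/* Illustrative model of a threadpool job's bookkeeping. */
struct job {
	bool	on_list;	/* queued: waiting to be picked up   */
	int	signalcount;	/* > 0 while a worker is running it  */
};

static pthread_mutex_t job_mutex = PTHREAD_MUTEX_INITIALIZER;

/*
 * Wait until the job is idle: not queued and not running. The caller
 * guarantees that nobody re-queues the job while this runs.
 */
static void job_cancel(struct job *job)
{
	bool running;

	do {
		pthread_mutex_lock(&job_mutex);
		if (!job->on_list) {
			/* Not queued: either running or already idle. */
			running = job->signalcount > 0;
		} else {
			/* Still queued: drop it before a worker picks it up. */
			job->on_list = false;
			job->signalcount = 0;
			running = false;
		}
		pthread_mutex_unlock(&job_mutex);
	} while (running);	/* spin, as the real helper does */
}

int main(void)
{
	struct job j = { .on_list = false, .signalcount = 0 };

	job_cancel(&j);	/* returns immediately: the job is idle */
	return 0;
}

The actual implementation, which operates on the job list and job_mutex of util/threadpool.c, is in the diff that follows.
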
Signed-off-by: Jean-Philippe Brucker Signed-off-by: Julien Thierry --- include/kvm/threadpool.h | 2 ++ util/threadpool.c | 25 ++++++++++++++++++++++++- 2 files changed, 26 insertions(+), 1 deletion(-) diff --git a/include/kvm/threadpool.h b/include/kvm/threadpool.h index bacb243..880487a 100644 --- a/include/kvm/threadpool.h +++ b/include/kvm/threadpool.h @@ -28,11 +28,13 @@ static inline void thread_pool__init_job(struct thread_pool__job *job, struct kv .data = data, .mutex = MUTEX_INITIALIZER, }; + INIT_LIST_HEAD(&job->queue); } int thread_pool__init(struct kvm *kvm); int thread_pool__exit(struct kvm *kvm); void thread_pool__do_job(struct thread_pool__job *job); +void thread_pool__cancel_job(struct thread_pool__job *job); #endif diff --git a/util/threadpool.c b/util/threadpool.c index e64aa26..1dc3bf7 100644 --- a/util/threadpool.c +++ b/util/threadpool.c @@ -25,7 +25,7 @@ static struct thread_pool__job *thread_pool__job_pop_locked(void) return NULL; job = list_first_entry(&head, struct thread_pool__job, queue); - list_del(&job->queue); + list_del_init(&job->queue); return job; } @@ -173,3 +173,26 @@ void thread_pool__do_job(struct thread_pool__job *job) pthread_cond_signal(&job_cond); mutex_unlock(&job_mutex); } + +void thread_pool__cancel_job(struct thread_pool__job *job) +{ + bool running; + + /* + * If the job is queued but not running, remove it. Otherwise, wait for + * the signalcount to drop to 0, indicating that it has finished + * running. We assume that nobody is queueing this job - + * thread_pool__do_job() isn't called - while this function is running. + */ + do { + mutex_lock(&job_mutex); + if (list_empty(&job->queue)) { + running = job->signalcount > 0; + } else { + list_del_init(&job->queue); + job->signalcount = 0; + running = false; + } + mutex_unlock(&job_mutex); + } while (running); +} From patchwork Thu Jan 10 14:12:49 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Julien Thierry X-Patchwork-Id: 10755889 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 5512314DE for ; Thu, 10 Jan 2019 14:13:16 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 4601029958 for ; Thu, 10 Jan 2019 14:13:16 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 446FB29955; Thu, 10 Jan 2019 14:13:16 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id E64BE29967 for ; Thu, 10 Jan 2019 14:13:15 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728081AbfAJONP (ORCPT ); Thu, 10 Jan 2019 09:13:15 -0500 Received: from foss.arm.com ([217.140.101.70]:36500 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728017AbfAJONO (ORCPT ); Thu, 10 Jan 2019 09:13:14 -0500 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id C15EE1596; Thu, 10 Jan 2019 06:13:13 -0800 (PST) Received: from e112298-lin.cambridge.arm.com (usa-sjc-imap-foss1.foss.arm.com 
[10.72.51.249]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 854E93F5AF; Thu, 10 Jan 2019 06:13:12 -0800 (PST) From: Julien Thierry To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu Cc: jean-philippe.brucker@arm.com, will.deacon@arm.com, kraxel@redhat.com, Julien Thierry Subject: [PATCH kvmtool v2 12/13] virtio/p9: Implement reset Date: Thu, 10 Jan 2019 14:12:49 +0000 Message-Id: <1547129570-14351-13-git-send-email-julien.thierry@arm.com> X-Mailer: git-send-email 1.9.1 In-Reply-To: <1547129570-14351-1-git-send-email-julien.thierry@arm.com> References: <1547129570-14351-1-git-send-email-julien.thierry@arm.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Jean-Philippe Brucker The p9 reset cancels all running jobs and closes any open fid. Signed-off-by: Jean-Philippe Brucker Signed-off-by: Julien Thierry --- virtio/9p.c | 16 ++++++++++++++++ 1 file changed, 16 insertions(+) diff --git a/virtio/9p.c b/virtio/9p.c index d9f45cf..6bae403 100644 --- a/virtio/9p.c +++ b/virtio/9p.c @@ -1384,6 +1384,14 @@ static void set_guest_features(struct kvm *kvm, void *dev, u32 features) static void notify_status(struct kvm *kvm, void *dev, u32 status) { + struct p9_dev *p9dev = dev; + struct p9_fid *pfid, *next; + + if (!(status & VIRTIO__STATUS_STOP)) + return; + + rbtree_postorder_for_each_entry_safe(pfid, next, &p9dev->fids, node) + close_fid(p9dev, pfid->fid); } static int init_vq(struct kvm *kvm, void *dev, u32 vq, u32 page_size, u32 align, @@ -1413,6 +1421,13 @@ static int init_vq(struct kvm *kvm, void *dev, u32 vq, u32 page_size, u32 align, return 0; } +static void exit_vq(struct kvm *kvm, void *dev, u32 vq) +{ + struct p9_dev *p9dev = dev; + + thread_pool__cancel_job(&p9dev->jobs[vq].job_id); +} + static int notify_vq(struct kvm *kvm, void *dev, u32 vq) { struct p9_dev *p9dev = dev; @@ -1450,6 +1465,7 @@ struct virtio_ops p9_dev_virtio_ops = { .get_host_features = get_host_features, .set_guest_features = set_guest_features, .init_vq = init_vq, + .exit_vq = exit_vq, .notify_status = notify_status, .notify_vq = notify_vq, .get_vq = get_vq, From patchwork Thu Jan 10 14:12:50 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Julien Thierry X-Patchwork-Id: 10755891 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 7BF3113B4 for ; Thu, 10 Jan 2019 14:13:18 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 6C7502994D for ; Thu, 10 Jan 2019 14:13:18 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 6B02D29962; Thu, 10 Jan 2019 14:13:18 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 0824629956 for ; Thu, 10 Jan 2019 14:13:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727963AbfAJONQ (ORCPT ); Thu, 10 Jan 2019 09:13:16 -0500 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:36512 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by 
vger.kernel.org with ESMTP id S1728102AbfAJONP (ORCPT ); Thu, 10 Jan 2019 09:13:15 -0500 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 4772415BE; Thu, 10 Jan 2019 06:13:15 -0800 (PST) Received: from e112298-lin.cambridge.arm.com (usa-sjc-imap-foss1.foss.arm.com [10.72.51.249]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 0B1CE3F5AF; Thu, 10 Jan 2019 06:13:13 -0800 (PST) From: Julien Thierry To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu Cc: jean-philippe.brucker@arm.com, will.deacon@arm.com, kraxel@redhat.com, Julien Thierry Subject: [PATCH kvmtool v2 13/13] virtio/console: Implement reset Date: Thu, 10 Jan 2019 14:12:50 +0000 Message-Id: <1547129570-14351-14-git-send-email-julien.thierry@arm.com> X-Mailer: git-send-email 1.9.1 In-Reply-To: <1547129570-14351-1-git-send-email-julien.thierry@arm.com> References: <1547129570-14351-1-git-send-email-julien.thierry@arm.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Jean-Philippe Brucker The virtio-console reset cancels all running jobs. Unfortunately we don't have a good way to prevent the term polling thread from getting in the way, read invalid data during reset and cause a segfault. To prevent this, move all handling of the Rx queue in the threadpool job. Signed-off-by: Jean-Philippe Brucker Signed-off-by: Julien Thierry --- virtio/console.c | 32 ++++++++++++++++++++------------ 1 file changed, 20 insertions(+), 12 deletions(-) diff --git a/virtio/console.c b/virtio/console.c index d2b312c..f1be025 100644 --- a/virtio/console.c +++ b/virtio/console.c @@ -35,8 +35,6 @@ struct con_dev { struct virt_queue vqs[VIRTIO_CONSOLE_NUM_QUEUES]; struct virtio_console_config config; u32 features; - - pthread_cond_t poll_cond; int vq_ready; struct thread_pool__job jobs[VIRTIO_CONSOLE_NUM_QUEUES]; @@ -44,7 +42,6 @@ struct con_dev { static struct con_dev cdev = { .mutex = MUTEX_INITIALIZER, - .poll_cond = PTHREAD_COND_INITIALIZER, .vq_ready = 0, @@ -68,16 +65,10 @@ static void virtio_console__inject_interrupt_callback(struct kvm *kvm, void *par u16 head; int len; - if (kvm->cfg.active_console != CONSOLE_VIRTIO) - return; - mutex_lock(&cdev.mutex); vq = param; - if (!cdev.vq_ready) - pthread_cond_wait(&cdev.poll_cond, &cdev.mutex.mutex); - if (term_readable(0) && virt_queue__available(vq)) { head = virt_queue__get_iov(vq, iov, &out, &in, kvm); len = term_getc_iov(kvm, iov, in, 0); @@ -90,8 +81,13 @@ static void virtio_console__inject_interrupt_callback(struct kvm *kvm, void *par void virtio_console__inject_interrupt(struct kvm *kvm) { - virtio_console__inject_interrupt_callback(kvm, - &cdev.vqs[VIRTIO_CONSOLE_RX_QUEUE]); + if (kvm->cfg.active_console != CONSOLE_VIRTIO) + return; + + mutex_lock(&cdev.mutex); + if (cdev.vq_ready) + thread_pool__do_job(&cdev.jobs[VIRTIO_CONSOLE_RX_QUEUE]); + mutex_unlock(&cdev.mutex); } static void virtio_console_handle_callback(struct kvm *kvm, void *param) @@ -168,13 +164,24 @@ static int init_vq(struct kvm *kvm, void *dev, u32 vq, u32 page_size, u32 align, /* Tell the waiting poll thread that we're ready to go */ mutex_lock(&cdev.mutex); cdev.vq_ready = 1; - pthread_cond_signal(&cdev.poll_cond); mutex_unlock(&cdev.mutex); } return 0; } +static void exit_vq(struct kvm *kvm, void *dev, u32 vq) +{ + if (vq == VIRTIO_CONSOLE_RX_QUEUE) { + mutex_lock(&cdev.mutex); + cdev.vq_ready = 0; + mutex_unlock(&cdev.mutex); + 
thread_pool__cancel_job(&cdev.jobs[vq]); + } else if (vq == VIRTIO_CONSOLE_TX_QUEUE) { + thread_pool__cancel_job(&cdev.jobs[vq]); + } +} + static int notify_vq(struct kvm *kvm, void *dev, u32 vq) { struct con_dev *cdev = dev; @@ -213,6 +220,7 @@ static struct virtio_ops con_dev_virtio_ops = { .set_guest_features = set_guest_features, .get_vq_count = get_vq_count, .init_vq = init_vq, + .exit_vq = exit_vq, .notify_status = notify_status, .notify_vq = notify_vq, .get_vq = get_vq,