From patchwork Wed Jan 5 00:58:59 2022
X-Patchwork-Id: 12703942
From: "Longpeng(Mike)" <longpeng2@huawei.com>
To: qemu-devel@nongnu.org
Subject: [RFC 09/10] vdpa-dev: implement the set_status interface
Date: Wed, 5 Jan 2022 08:58:59 +0800
Message-ID: <20220105005900.860-10-longpeng2@huawei.com>
In-Reply-To: <20220105005900.860-1-longpeng2@huawei.com>
References: <20220105005900.860-1-longpeng2@huawei.com>

Implement the .set_status interface: start the vhost-vdpa backend when the
guest driver sets DRIVER_OK while the VM is running, and stop it otherwise.
Signed-off-by: Longpeng(Mike) <longpeng2@huawei.com>
---
 hw/virtio/vdpa-dev.c | 100 ++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 99 insertions(+), 1 deletion(-)

diff --git a/hw/virtio/vdpa-dev.c b/hw/virtio/vdpa-dev.c
index 32b3117c4b..64649bfb5a 100644
--- a/hw/virtio/vdpa-dev.c
+++ b/hw/virtio/vdpa-dev.c
@@ -194,9 +194,107 @@ static uint64_t vhost_vdpa_device_get_features(VirtIODevice *vdev,
     return backend_features;
 }
 
+static int vhost_vdpa_device_start(VirtIODevice *vdev, Error **errp)
+{
+    VhostVdpaDevice *s = VHOST_VDPA_DEVICE(vdev);
+    BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(vdev)));
+    VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
+    int i, ret;
+
+    if (!k->set_guest_notifiers) {
+        error_setg(errp, "binding does not support guest notifiers");
+        return -ENOSYS;
+    }
+
+    ret = vhost_dev_enable_notifiers(&s->dev, vdev);
+    if (ret < 0) {
+        error_setg_errno(errp, -ret, "Error enabling host notifiers");
+        return ret;
+    }
+
+    ret = k->set_guest_notifiers(qbus->parent, s->dev.nvqs, true);
+    if (ret < 0) {
+        error_setg_errno(errp, -ret, "Error binding guest notifier");
+        goto err_host_notifiers;
+    }
+
+    s->dev.acked_features = vdev->guest_features;
+
+    ret = vhost_dev_start(&s->dev, vdev);
+    if (ret < 0) {
+        error_setg_errno(errp, -ret, "Error starting vhost");
+        goto err_guest_notifiers;
+    }
+    s->started = true;
+
+    /*
+     * guest_notifier_mask/pending not used yet, so just unmask
+     * everything here. virtio-pci will do the right thing by
+     * enabling/disabling irqfd.
+     */
+    for (i = 0; i < s->dev.nvqs; i++) {
+        vhost_virtqueue_mask(&s->dev, vdev, i, false);
+    }
+
+    return ret;
+
+err_guest_notifiers:
+    k->set_guest_notifiers(qbus->parent, s->dev.nvqs, false);
+err_host_notifiers:
+    vhost_dev_disable_notifiers(&s->dev, vdev);
+    return ret;
+}
+
+static void vhost_vdpa_device_stop(VirtIODevice *vdev)
+{
+    VhostVdpaDevice *s = VHOST_VDPA_DEVICE(vdev);
+    BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(vdev)));
+    VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
+    int ret;
+
+    if (!s->started) {
+        return;
+    }
+    s->started = false;
+
+    if (!k->set_guest_notifiers) {
+        return;
+    }
+
+    vhost_dev_stop(&s->dev, vdev);
+
+    ret = k->set_guest_notifiers(qbus->parent, s->dev.nvqs, false);
+    if (ret < 0) {
+        error_report("vhost guest notifier cleanup failed: %d", ret);
+        return;
+    }
+
+    vhost_dev_disable_notifiers(&s->dev, vdev);
+}
+
 static void vhost_vdpa_device_set_status(VirtIODevice *vdev, uint8_t status)
 {
-    return;
+    VhostVdpaDevice *s = VHOST_VDPA_DEVICE(vdev);
+    bool should_start = virtio_device_started(vdev, status);
+    Error *local_err = NULL;
+    int ret;
+
+    if (!vdev->vm_running) {
+        should_start = false;
+    }
+
+    if (s->started == should_start) {
+        return;
+    }
+
+    if (should_start) {
+        ret = vhost_vdpa_device_start(vdev, &local_err);
+        if (ret < 0) {
+            error_reportf_err(local_err, "vhost-vdpa-device: start failed: ");
+        }
+    } else {
+        vhost_vdpa_device_stop(vdev);
+    }
 }
 
 static Property vhost_vdpa_device_properties[] = {
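
For context, not part of the patch above: the new vhost_vdpa_device_set_status()
only takes effect once it is installed as the VirtioDeviceClass .set_status hook.
A minimal sketch of that wiring, assuming the class_init function introduced
earlier in this series is named vhost_vdpa_device_class_init:

/*
 * Sketch only (not from this patch): how the callback is assumed to be
 * registered in the device's class_init; the real hookup lives elsewhere
 * in the vdpa-dev series.
 */
static void vhost_vdpa_device_class_init(ObjectClass *klass, void *data)
{
    VirtioDeviceClass *vdc = VIRTIO_DEVICE_CLASS(klass);

    /*
     * QEMU's virtio core invokes this callback whenever the guest writes
     * the device status byte, e.g. on DRIVER_OK or on device reset.
     */
    vdc->set_status = vhost_vdpa_device_set_status;
}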