From patchwork Thu Oct 14 00:48:51 2021
X-Patchwork-Submitter: "Longpeng(Mike)" <longpeng2@huawei.com>
X-Patchwork-Id: 12557595
From: "Longpeng(Mike)" <longpeng2@huawei.com>
Subject: [PATCH v4 5/6] Revert "vfio: Avoid disabling and enabling vectors
 repeatedly in VFIO migration"
Date: Thu, 14 Oct 2021 08:48:51 +0800
Message-ID: <20211014004852.1293-6-longpeng2@huawei.com>
In-Reply-To: <20211014004852.1293-1-longpeng2@huawei.com>
References: <20211014004852.1293-1-longpeng2@huawei.com>
X-Mailer: git-send-email 2.25.0.windows.1
Cc: "Longpeng(Mike)" <longpeng2@huawei.com>, arei.gonglei@huawei.com,
 qemu-devel@nongnu.org, kvm@vger.kernel.org
Commit ecebe53fe993 ("vfio: Avoid disabling and enabling vectors
repeatedly in VFIO migration") avoids inefficiently disabling and
enabling vectors repeatedly and lets the unmasked vectors be enabled
one by one.

However, we now want to batch multiple routes and defer the commit,
committing only once outside the loop that sets the vector notifiers,
so we can no longer enable the vectors one by one inside that loop.

Revert that commit; the next patch takes a different approach that not
only avoids disabling/enabling vectors repeatedly, but also satisfies
our requirement to defer the commit.

Signed-off-by: Longpeng(Mike) <longpeng2@huawei.com>
---
 hw/vfio/pci.c | 20 +++-----------------
 1 file changed, 3 insertions(+), 17 deletions(-)

diff --git a/hw/vfio/pci.c b/hw/vfio/pci.c
index aeeb6cd..0bd832b 100644
--- a/hw/vfio/pci.c
+++ b/hw/vfio/pci.c
@@ -569,9 +569,6 @@ static void vfio_msix_vector_release(PCIDevice *pdev, unsigned int nr)
 
 static void vfio_msix_enable(VFIOPCIDevice *vdev)
 {
-    PCIDevice *pdev = &vdev->pdev;
-    unsigned int nr, max_vec = 0;
-
     vfio_disable_interrupts(vdev);
 
     vdev->msi_vectors = g_new0(VFIOMSIVector, vdev->msix->entries);
@@ -590,22 +587,11 @@ static void vfio_msix_enable(VFIOPCIDevice *vdev)
      * triggering to userspace, then immediately release the vector, leaving
      * the physical device with no vectors enabled, but MSI-X enabled, just
      * like the guest view.
-     * If there are already unmasked vectors (in migration resume phase and
-     * some guest startups) which will be enabled soon, we can allocate all
-     * of them here to avoid inefficiently disabling and enabling vectors
-     * repeatedly later.
      */
-    if (!pdev->msix_function_masked) {
-        for (nr = 0; nr < msix_nr_vectors_allocated(pdev); nr++) {
-            if (!msix_is_masked(pdev, nr)) {
-                max_vec = nr;
-            }
-        }
-    }
-    vfio_msix_vector_do_use(pdev, max_vec, NULL, NULL);
-    vfio_msix_vector_release(pdev, max_vec);
+    vfio_msix_vector_do_use(&vdev->pdev, 0, NULL, NULL);
+    vfio_msix_vector_release(&vdev->pdev, 0);
 
-    if (msix_set_vector_notifiers(pdev, vfio_msix_vector_use,
+    if (msix_set_vector_notifiers(&vdev->pdev, vfio_msix_vector_use,
                                   vfio_msix_vector_release, NULL)) {
         error_report("vfio: msix_set_vector_notifiers failed");
     }
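
[Editor's note] The "batch the routes and defer the commit" idea the cover
text refers to can be illustrated with a small standalone C sketch. This is
not the code of the next patch and not QEMU/KVM API: route_table, route_add
and route_table_commit are hypothetical names chosen only to contrast issuing
the expensive commit inside the per-vector loop with issuing it once after
the loop, which is the pattern this revert prepares for.

/*
 * Illustrative sketch only -- not QEMU code. All types and helpers are
 * hypothetical stand-ins for "queue an MSI route" and "commit the routing
 * table", to show why committing once after the loop is cheaper.
 */
#include <stdio.h>

#define MAX_VECTORS 8

typedef struct {
    int routes[MAX_VECTORS];
    int nr_routes;
    int commits;            /* how many (expensive) commits were issued */
} route_table;

/* Queue a route; nothing is pushed to the "hypervisor" yet. */
static void route_add(route_table *rt, int vector)
{
    rt->routes[rt->nr_routes++] = vector;
}

/* The expensive step: push the whole table in one go. */
static void route_table_commit(route_table *rt)
{
    rt->commits++;
    printf("commit #%d: %d route(s) active\n", rt->commits, rt->nr_routes);
}

int main(void)
{
    route_table eager = {0}, deferred = {0};

    /* Commit inside the loop: one expensive commit per vector. */
    for (int nr = 0; nr < MAX_VECTORS; nr++) {
        route_add(&eager, nr);
        route_table_commit(&eager);
    }

    /* Deferred style: queue all routes, commit once outside the loop. */
    for (int nr = 0; nr < MAX_VECTORS; nr++) {
        route_add(&deferred, nr);
    }
    route_table_commit(&deferred);

    printf("eager: %d commits, deferred: %d commit(s)\n",
           eager.commits, deferred.commits);
    return 0;
}

With N vectors the first loop performs N commits while the second performs
one; in the real series the per-route commit is the costly step being
deferred out of the vector-notifier loop.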