From patchwork Wed Mar 2 17:29:03 2022
X-Patchwork-Submitter: Shameerali Kolothum Thodi
X-Patchwork-Id: 12766431
X-Mailing-List: kvm@vger.kernel.org
From: Shameer Kolothum
Subject: [PATCH v7 10/10] hisi_acc_vfio_pci: Use its own PCI reset_done error handler
Date: Wed, 2 Mar 2022 17:29:03 +0000
Message-ID: <20220302172903.1995-11-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20220302172903.1995-1-shameerali.kolothum.thodi@huawei.com>
References: <20220302172903.1995-1-shameerali.kolothum.thodi@huawei.com>

Register a private handler for pci_error_handlers.reset_done and update
the migration state accordingly.

Taking state_mutex directly from reset_done could deadlock, as the
higher VFIO layers hold locks across reset that also interact with
mm_lock. Instead, reset_done sets a 'deferred_reset' flag under
reset_lock; if state_mutex cannot be taken with trylock, the cleanup is
deferred to the unlock path of the context currently holding the mutex.

Signed-off-by: Shameer Kolothum
---
 .../vfio/pci/hisilicon/hisi_acc_vfio_pci.c | 61 +++++++++++++++++--
 .../vfio/pci/hisilicon/hisi_acc_vfio_pci.h |  4 +-
 2 files changed, 59 insertions(+), 6 deletions(-)

diff --git a/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c b/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c
index f733aa0b1a5b..906690e57020 100644
--- a/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c
+++ b/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c
@@ -515,6 +515,27 @@ static void hisi_acc_vf_disable_fds(struct hisi_acc_vf_core_device *hisi_acc_vde
 	}
 }
 
+/*
+ * This function is called in all state_mutex unlock cases to
+ * handle a 'deferred_reset' if it exists.
+ */
+static void
+hisi_acc_vf_state_mutex_unlock(struct hisi_acc_vf_core_device *hisi_acc_vdev)
+{
+again:
+	spin_lock(&hisi_acc_vdev->reset_lock);
+	if (hisi_acc_vdev->deferred_reset) {
+		hisi_acc_vdev->deferred_reset = false;
+		spin_unlock(&hisi_acc_vdev->reset_lock);
+		hisi_acc_vdev->vf_qm_state = QM_NOT_READY;
+		hisi_acc_vdev->mig_state = VFIO_DEVICE_STATE_RUNNING;
+		hisi_acc_vf_disable_fds(hisi_acc_vdev);
+		goto again;
+	}
+	mutex_unlock(&hisi_acc_vdev->state_mutex);
+	spin_unlock(&hisi_acc_vdev->reset_lock);
+}
+
 static int hisi_acc_vf_load_state(struct hisi_acc_vf_core_device *hisi_acc_vdev,
 				  struct hisi_acc_vf_migration_file *migf)
 {
@@ -750,7 +771,7 @@ static long hisi_acc_vf_save_unl_ioctl(struct file *filp,
 
 	mutex_lock(&hisi_acc_vdev->state_mutex);
 	if (hisi_acc_vdev->mig_state != VFIO_DEVICE_STATE_PRE_COPY) {
-		mutex_unlock(&hisi_acc_vdev->state_mutex);
+		hisi_acc_vf_state_mutex_unlock(hisi_acc_vdev);
 		return -EINVAL;
 	}
 
@@ -772,7 +793,7 @@ static long hisi_acc_vf_save_unl_ioctl(struct file *filp,
 	ret = copy_to_user((void __user *)arg, &info, minsz) ? -EFAULT : 0;
 out:
 	mutex_unlock(&migf->lock);
-	mutex_unlock(&hisi_acc_vdev->state_mutex);
+	hisi_acc_vf_state_mutex_unlock(hisi_acc_vdev);
 	return ret;
 }
 
@@ -967,7 +988,7 @@ hisi_acc_vfio_pci_set_device_state(struct vfio_device *vdev,
 			break;
 		}
 	}
-	mutex_unlock(&hisi_acc_vdev->state_mutex);
+	hisi_acc_vf_state_mutex_unlock(hisi_acc_vdev);
 	return res;
 }
 
@@ -980,10 +1001,35 @@ hisi_acc_vfio_pci_get_device_state(struct vfio_device *vdev,
 
 	mutex_lock(&hisi_acc_vdev->state_mutex);
 	*curr_state = hisi_acc_vdev->mig_state;
-	mutex_unlock(&hisi_acc_vdev->state_mutex);
+	hisi_acc_vf_state_mutex_unlock(hisi_acc_vdev);
 	return 0;
 }
 
+static void hisi_acc_vf_pci_aer_reset_done(struct pci_dev *pdev)
+{
+	struct hisi_acc_vf_core_device *hisi_acc_vdev = dev_get_drvdata(&pdev->dev);
+
+	if (hisi_acc_vdev->core_device.vdev.migration_flags !=
+			VFIO_MIGRATION_STOP_COPY)
+		return;
+
+	/*
+	 * As the higher VFIO layers are holding locks across reset and using
+	 * those same locks with the mm_lock we need to prevent ABBA deadlock
+	 * with the state_mutex and mm_lock.
+	 * In case the state_mutex was taken already we defer the cleanup work
+	 * to the unlock flow of the other running context.
+	 */
+	spin_lock(&hisi_acc_vdev->reset_lock);
+	hisi_acc_vdev->deferred_reset = true;
+	if (!mutex_trylock(&hisi_acc_vdev->state_mutex)) {
+		spin_unlock(&hisi_acc_vdev->reset_lock);
+		return;
+	}
+	spin_unlock(&hisi_acc_vdev->reset_lock);
+	hisi_acc_vf_state_mutex_unlock(hisi_acc_vdev);
+}
+
 static int hisi_acc_vf_qm_init(struct hisi_acc_vf_core_device *hisi_acc_vdev)
 {
 	struct vfio_pci_core_device *vdev = &hisi_acc_vdev->core_device;
@@ -1302,12 +1348,17 @@ static const struct pci_device_id hisi_acc_vfio_pci_table[] = {
 
 MODULE_DEVICE_TABLE(pci, hisi_acc_vfio_pci_table);
 
+static const struct pci_error_handlers hisi_acc_vf_err_handlers = {
+	.reset_done = hisi_acc_vf_pci_aer_reset_done,
+	.error_detected = vfio_pci_core_aer_err_detected,
+};
+
 static struct pci_driver hisi_acc_vfio_pci_driver = {
 	.name = KBUILD_MODNAME,
 	.id_table = hisi_acc_vfio_pci_table,
 	.probe = hisi_acc_vfio_pci_probe,
 	.remove = hisi_acc_vfio_pci_remove,
-	.err_handler = &vfio_pci_core_err_handlers,
+	.err_handler = &hisi_acc_vf_err_handlers,
 };
 
 module_pci_driver(hisi_acc_vfio_pci_driver);
diff --git a/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.h b/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.h
index 51bc7e92a776..6c18f7c74f34 100644
--- a/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.h
+++ b/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.h
@@ -96,6 +96,7 @@ struct hisi_acc_vf_migration_file {
 struct hisi_acc_vf_core_device {
 	struct vfio_pci_core_device core_device;
 	u8 match_done:1;
+	u8 deferred_reset:1;
 	/* for migration state */
 	struct mutex state_mutex;
 	enum vfio_device_mig_state mig_state;
@@ -105,7 +106,8 @@ struct hisi_acc_vf_core_device {
 	struct hisi_qm vf_qm;
 	u32 vf_qm_state;
 	int vf_id;
-
+	/* for reset handler */
+	spinlock_t reset_lock;
 	struct hisi_acc_vf_migration_file *resuming_migf;
 	struct hisi_acc_vf_migration_file *saving_migf;
 };
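
For reference, the deferred-reset handshake used in this patch can be
shown in isolation. Below is a minimal, self-contained userspace sketch
(pthread mutexes standing in for the kernel spinlock/mutex; the struct
and function names are hypothetical and not part of the patch):

#include <pthread.h>
#include <stdbool.h>

struct dev {
	pthread_mutex_t state_mutex;	/* serializes migration state */
	pthread_mutex_t reset_lock;	/* guards deferred_reset */
	bool deferred_reset;
	int mig_state;
};

static struct dev g_dev = {
	.state_mutex = PTHREAD_MUTEX_INITIALIZER,
	.reset_lock = PTHREAD_MUTEX_INITIALIZER,
};

/* placeholder for hisi_acc_vf_disable_fds(): tear down migration files */
static void disable_fds(struct dev *d)
{
	(void)d;
}

/* every state_mutex unlock goes through here, as in the patch */
static void state_mutex_unlock(struct dev *d)
{
again:
	pthread_mutex_lock(&d->reset_lock);
	if (d->deferred_reset) {
		d->deferred_reset = false;
		pthread_mutex_unlock(&d->reset_lock);
		d->mig_state = 0;	/* i.e. back to RUNNING */
		disable_fds(d);
		goto again;		/* re-check: a reset may race in */
	}
	pthread_mutex_unlock(&d->state_mutex);
	pthread_mutex_unlock(&d->reset_lock);
}

/* reset_done analogue: must never block on state_mutex */
static void reset_done(struct dev *d)
{
	pthread_mutex_lock(&d->reset_lock);
	d->deferred_reset = true;
	if (pthread_mutex_trylock(&d->state_mutex) != 0) {
		/*
		 * Another context holds state_mutex; it will observe
		 * deferred_reset in state_mutex_unlock() and clean up.
		 */
		pthread_mutex_unlock(&d->reset_lock);
		return;
	}
	pthread_mutex_unlock(&d->reset_lock);
	state_mutex_unlock(d);	/* we own state_mutex: clean up now */
}

The property being mirrored: either the resetting context wins the
trylock and performs the cleanup itself, or the flag is guaranteed to be
seen by whichever context already holds state_mutex when it unlocks, so
the reset handler never sleeps on state_mutex and the cleanup is never
lost.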