From patchwork Wed Oct 26 21:29:17 2016
X-Patchwork-Submitter: Kirti Wankhede
X-Patchwork-Id: 9398431
From: Kirti Wankhede
Subject: [PATCH v10 10/19] vfio iommu: Add blocking notifier to notify DMA_UNMAP
Date: Thu, 27 Oct 2016 02:59:17 +0530
Message-ID: <1477517366-27871-11-git-send-email-kwankhede@nvidia.com>
X-Mailer: git-send-email 2.7.0
In-Reply-To: <1477517366-27871-1-git-send-email-kwankhede@nvidia.com>
References: <1477517366-27871-1-git-send-email-kwankhede@nvidia.com>
X-Mailing-List: kvm@vger.kernel.org

Added a blocking notifier to the IOMMU TYPE1 driver to notify vendor
drivers about DMA_UNMAP. Exported two APIs, vfio_register_notifier() and
vfio_unregister_notifier(). Vendor drivers should register a notifier
using these APIs and handle the VFIO_IOMMU_NOTIFY_DMA_UNMAP action to
invalidate their mappings.
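For illustration only (not part of this patch), a minimal sketch of how a
vendor driver could consume these APIs. The names my_vendor_state,
my_vendor_open(), my_vendor_release() and the dev field are hypothetical
placeholders; the callback data is the pfn being unmapped, as passed by
vfio_notifier_call_chain() below, and vfio_unpin_pages() is the API added
earlier in this series.

#include <linux/device.h>
#include <linux/notifier.h>
#include <linux/vfio.h>

struct my_vendor_state {
	struct device		*dev;	/* device passed to vfio_pin_pages() */
	struct notifier_block	nb;
};

static int my_vendor_dma_unmap_cb(struct notifier_block *nb,
				  unsigned long action, void *data)
{
	struct my_vendor_state *state =
		container_of(nb, struct my_vendor_state, nb);

	if (action == VFIO_IOMMU_NOTIFY_DMA_UNMAP) {
		/* data points to the pfn being unmapped */
		unsigned long pfn = *(unsigned long *)data;

		/*
		 * Invalidate any vendor-private mapping of this pfn, then
		 * unpin it so its vfio_pfn is dropped from pfn_list.
		 */
		vfio_unpin_pages(state->dev, &pfn, 1);
	}

	return NOTIFY_OK;
}

static int my_vendor_open(struct my_vendor_state *state)
{
	state->nb.notifier_call = my_vendor_dma_unmap_cb;
	return vfio_register_notifier(state->dev, &state->nb);
}

static void my_vendor_release(struct my_vendor_state *state)
{
	vfio_unregister_notifier(state->dev, &state->nb);
}

The callback is expected to unpin before returning: vfio_notifier_call_chain()
re-walks pfn_list until no pinned pfn in the unmapped range remains.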
Signed-off-by: Kirti Wankhede
Signed-off-by: Neo Jia
Change-Id: I5910d0024d6be87f3e8d3e0ca0eaeaaa0b17f271
---
 drivers/vfio/vfio.c             | 73 +++++++++++++++++++++++++++++++++
 drivers/vfio/vfio_iommu_type1.c | 89 ++++++++++++++++++++++++++++++++++++-----
 include/linux/vfio.h            | 11 +++++
 3 files changed, 163 insertions(+), 10 deletions(-)

diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
index 28b50ca14c52..ff05ac6b1e90 100644
--- a/drivers/vfio/vfio.c
+++ b/drivers/vfio/vfio.c
@@ -1891,6 +1891,79 @@ err_unpin_pages:
 }
 EXPORT_SYMBOL(vfio_unpin_pages);
 
+int vfio_register_notifier(struct device *dev, struct notifier_block *nb)
+{
+	struct vfio_container *container;
+	struct vfio_group *group;
+	struct vfio_iommu_driver *driver;
+	ssize_t ret;
+
+	if (!dev || !nb)
+		return -EINVAL;
+
+	group = vfio_group_get_from_dev(dev);
+	if (IS_ERR(group))
+		return PTR_ERR(group);
+
+	ret = vfio_group_add_container_user(group);
+	if (ret)
+		goto err_register_nb;
+
+	container = group->container;
+	down_read(&container->group_lock);
+
+	driver = container->iommu_driver;
+	if (likely(driver && driver->ops->register_notifier))
+		ret = driver->ops->register_notifier(container->iommu_data, nb);
+	else
+		ret = -EINVAL;
+
+	up_read(&container->group_lock);
+	vfio_group_try_dissolve_container(group);
+
+err_register_nb:
+	vfio_group_put(group);
+	return ret;
+}
+EXPORT_SYMBOL(vfio_register_notifier);
+
+int vfio_unregister_notifier(struct device *dev, struct notifier_block *nb)
+{
+	struct vfio_container *container;
+	struct vfio_group *group;
+	struct vfio_iommu_driver *driver;
+	ssize_t ret;
+
+	if (!dev || !nb)
+		return -EINVAL;
+
+	group = vfio_group_get_from_dev(dev);
+	if (IS_ERR(group))
+		return PTR_ERR(group);
+
+	ret = vfio_group_add_container_user(group);
+	if (ret)
+		goto err_unregister_nb;
+
+	container = group->container;
+	down_read(&container->group_lock);
+
+	driver = container->iommu_driver;
+	if (likely(driver && driver->ops->unregister_notifier))
+		ret = driver->ops->unregister_notifier(container->iommu_data,
+						       nb);
+	else
+		ret = -EINVAL;
+
+	up_read(&container->group_lock);
+	vfio_group_try_dissolve_container(group);
+
+err_unregister_nb:
+	vfio_group_put(group);
+	return ret;
+}
+EXPORT_SYMBOL(vfio_unregister_notifier);
+
 /**
  * Module/class support
  */
diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 5add11a147e1..a4bd331ac0fd 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -37,6 +37,7 @@
 #include
 #include
 #include
+#include <linux/notifier.h>
 
 #define DRIVER_VERSION  "0.2"
 #define DRIVER_AUTHOR   "Alex Williamson <alex.williamson@redhat.com>"
@@ -59,6 +60,7 @@ struct vfio_iommu {
 	struct vfio_domain	*external_domain; /* domain for external user */
 	struct mutex		lock;
 	struct rb_root		dma_list;
+	struct blocking_notifier_head notifier;
 	bool			v2;
 	bool			nesting;
 };
@@ -549,7 +551,8 @@ static long vfio_iommu_type1_pin_pages(void *iommu_data,
 
 	mutex_lock(&iommu->lock);
 
-	if (!iommu->external_domain) {
+	/* Fail if notifier list is empty */
+	if ((!iommu->external_domain) || (!iommu->notifier.head)) {
 		ret = -EINVAL;
 		goto pin_done;
 	}
@@ -768,6 +771,50 @@ static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu)
 	return bitmap;
 }
 
+/*
+ * This function finds pfn in domain->external_addr_space->pfn_list for given
+ * iova range. If pfn exist, notify pfn to registered notifier list. On
+ * receiving notifier callback, vendor driver should invalidate the mapping and
+ * call vfio_unpin_pages() to unpin this pfn. With that vfio_pfn for this pfn
+ * gets removed from rb tree of pfn_list. That re-arranges rb tree, so while
+ * searching for next vfio_pfn in rb tree, start search from first node again.
+ * If any vendor driver doesn't unpin that pfn, vfio_pfn would not get removed
+ * from rb tree and so in next search vfio_pfn would be same as previous
+ * vfio_pfn. In that case, exit from loop.
+ */
+static void vfio_notifier_call_chain(struct vfio_iommu *iommu,
+				     struct vfio_iommu_type1_dma_unmap *unmap)
+{
+	struct vfio_domain *domain = iommu->external_domain;
+	struct rb_node *n;
+	struct vfio_pfn *vpfn = NULL, *prev_vpfn;
+
+	do {
+		prev_vpfn = vpfn;
+		mutex_lock(&domain->external_addr_space->pfn_list_lock);
+
+		n = rb_first(&domain->external_addr_space->pfn_list);
+
+		for (; n; n = rb_next(n), vpfn = NULL) {
+			vpfn = rb_entry(n, struct vfio_pfn, node);
+
+			if ((vpfn->iova >= unmap->iova) &&
+			    (vpfn->iova < unmap->iova + unmap->size))
+				break;
+		}
+
+		mutex_unlock(&domain->external_addr_space->pfn_list_lock);
+
+		/* Notify any listeners about DMA_UNMAP */
+		if (vpfn)
+			blocking_notifier_call_chain(&iommu->notifier,
+						     VFIO_IOMMU_NOTIFY_DMA_UNMAP,
+						     &vpfn->pfn);
+	} while (vpfn && (prev_vpfn != vpfn));
+
+	WARN_ON(vpfn);
+}
+
 static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
 			     struct vfio_iommu_type1_dma_unmap *unmap)
 {
@@ -844,6 +891,9 @@ unlock:
 	/* Report how much was unmapped */
 	unmap->size = unmapped;
 
+	if (unmapped && iommu->external_domain)
+		vfio_notifier_call_chain(iommu, unmap);
+
 	return ret;
 }
 
@@ -1418,6 +1468,7 @@ static void *vfio_iommu_type1_open(unsigned long arg)
 	INIT_LIST_HEAD(&iommu->domain_list);
 	iommu->dma_list = RB_ROOT;
 	mutex_init(&iommu->lock);
+	BLOCKING_INIT_NOTIFIER_HEAD(&iommu->notifier);
 
 	return iommu;
 }
@@ -1555,16 +1606,34 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
 	return -ENOTTY;
 }
 
+static int vfio_iommu_type1_register_notifier(void *iommu_data,
+					      struct notifier_block *nb)
+{
+	struct vfio_iommu *iommu = iommu_data;
+
+	return blocking_notifier_chain_register(&iommu->notifier, nb);
+}
+
+static int vfio_iommu_type1_unregister_notifier(void *iommu_data,
+						struct notifier_block *nb)
+{
+	struct vfio_iommu *iommu = iommu_data;
+
+	return blocking_notifier_chain_unregister(&iommu->notifier, nb);
+}
+
 static const struct vfio_iommu_driver_ops vfio_iommu_driver_ops_type1 = {
-	.name		= "vfio-iommu-type1",
-	.owner		= THIS_MODULE,
-	.open		= vfio_iommu_type1_open,
-	.release	= vfio_iommu_type1_release,
-	.ioctl		= vfio_iommu_type1_ioctl,
-	.attach_group	= vfio_iommu_type1_attach_group,
-	.detach_group	= vfio_iommu_type1_detach_group,
-	.pin_pages	= vfio_iommu_type1_pin_pages,
-	.unpin_pages	= vfio_iommu_type1_unpin_pages,
+	.name			= "vfio-iommu-type1",
+	.owner			= THIS_MODULE,
+	.open			= vfio_iommu_type1_open,
+	.release		= vfio_iommu_type1_release,
+	.ioctl			= vfio_iommu_type1_ioctl,
+	.attach_group		= vfio_iommu_type1_attach_group,
+	.detach_group		= vfio_iommu_type1_detach_group,
+	.pin_pages		= vfio_iommu_type1_pin_pages,
+	.unpin_pages		= vfio_iommu_type1_unpin_pages,
+	.register_notifier	= vfio_iommu_type1_register_notifier,
+	.unregister_notifier	= vfio_iommu_type1_unregister_notifier,
 };
 
 static int __init vfio_iommu_type1_init(void)
diff --git a/include/linux/vfio.h b/include/linux/vfio.h
index 0609a2052846..4c91ce8bfaeb 100644
--- a/include/linux/vfio.h
+++ b/include/linux/vfio.h
@@ -80,6 +80,10 @@ struct vfio_iommu_driver_ops {
 				       unsigned long *phys_pfn);
 	long		(*unpin_pages)(void *iommu_data, unsigned long *pfn,
 				       long npage);
+	int		(*register_notifier)(void *iommu_data,
+					     struct notifier_block *nb);
+	int		(*unregister_notifier)(void *iommu_data,
+					       struct notifier_block *nb);
 };
 
 extern int vfio_register_iommu_driver(const struct vfio_iommu_driver_ops *ops);
@@ -137,6 +141,13 @@ extern long vfio_pin_pages(struct device *dev, unsigned long *user_pfn,
 extern long vfio_unpin_pages(struct device *dev, unsigned long *pfn,
 			     long npage);
 
+#define VFIO_IOMMU_NOTIFY_DMA_UNMAP	1
+
+extern int vfio_register_notifier(struct device *dev,
+				  struct notifier_block *nb);
+
+extern int vfio_unregister_notifier(struct device *dev,
+				    struct notifier_block *nb);
 /*
  * IRQfd - generic
  */