From patchwork Tue Mar 8 11:09:17 2016
X-Patchwork-Submitter: Quan Xu
X-Patchwork-Id: 8531831
From: Quan Xu <quan.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Tue, 8 Mar 2016 19:09:17 +0800
Message-Id: <1457435357-34073-3-git-send-email-quan.xu@intel.com>
X-Mailer: git-send-email 1.8.3.2
In-Reply-To: <1457435357-34073-1-git-send-email-quan.xu@intel.com>
References: <1457435357-34073-1-git-send-email-quan.xu@intel.com>
Cc: Kevin Tian, Keir Fraser, Jan Beulich, Andrew Cooper, Dario Faggioli,
    Aravind Gopalakrishnan, Suravee Suthikulpanit, Quan Xu, Feng Wu
Subject: [Xen-devel] [PATCH 2/2] IOMMU/spinlock: Make the pcidevs_lock a recursive one

Signed-off-by: Quan Xu <quan.xu@intel.com>
CC: Keir Fraser
CC: Jan Beulich
CC: Andrew Cooper
CC: Suravee Suthikulpanit
CC: Aravind Gopalakrishnan
CC: Feng Wu
CC: Kevin Tian
CC: Dario Faggioli
---
 xen/arch/x86/domctl.c                       |  8 +--
 xen/arch/x86/hvm/vmsi.c                     |  4 +-
 xen/arch/x86/irq.c                          |  8 +--
 xen/arch/x86/msi.c                          | 16 ++---
 xen/arch/x86/pci.c                          |  4 +-
 xen/arch/x86/physdev.c                      | 16 ++---
 xen/common/sysctl.c                         |  4 +-
 xen/drivers/passthrough/amd/iommu_init.c    |  8 +--
 xen/drivers/passthrough/amd/iommu_map.c     |  2 +-
 xen/drivers/passthrough/amd/pci_amd_iommu.c |  4 +-
 xen/drivers/passthrough/pci.c               | 96 +++++++++++++++++------------
 xen/drivers/passthrough/vtd/iommu.c         | 14 ++---
 xen/drivers/video/vga.c                     |  4 +-
 xen/include/xen/pci.h                       |  5 +-
 14 files changed, 108 insertions(+), 85 deletions(-)

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index bf62a88..21cc161 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -427,9 +427,9 @@ long arch_do_domctl(
         ret = -ESRCH;
         if ( iommu_enabled )
         {
-            spin_lock(&pcidevs_lock);
+            pcidevs_lock();
             ret = pt_irq_create_bind(d, bind);
-            spin_unlock(&pcidevs_lock);
+            pcidevs_unlock();
         }
         if ( ret < 0 )
             printk(XENLOG_G_ERR "pt_irq_create_bind failed (%ld) for dom%d\n",
@@ -452,9 +452,9 @@ long arch_do_domctl(

         if ( iommu_enabled )
         {
-            spin_lock(&pcidevs_lock);
+            pcidevs_lock();
             ret = pt_irq_destroy_bind(d, bind);
-            spin_unlock(&pcidevs_lock);
+            pcidevs_unlock();
         }
         if ( ret < 0 )
             printk(XENLOG_G_ERR "pt_irq_destroy_bind failed (%ld) for dom%d\n",
diff --git a/xen/arch/x86/hvm/vmsi.c b/xen/arch/x86/hvm/vmsi.c
index ac838a9..8e0817b 100644
--- a/xen/arch/x86/hvm/vmsi.c
+++ b/xen/arch/x86/hvm/vmsi.c
@@ -388,7 +388,7 @@ int msixtbl_pt_register(struct domain *d, struct pirq *pirq, uint64_t gtable)
     struct msixtbl_entry *entry, *new_entry;
     int r = -EINVAL;

-    ASSERT(spin_is_locked(&pcidevs_lock));
+    ASSERT(pcidevs_is_locked());
     ASSERT(spin_is_locked(&d->event_lock));

     /*
@@ -443,7 +443,7 @@ void msixtbl_pt_unregister(struct domain *d, struct pirq *pirq)
     struct pci_dev *pdev;
     struct msixtbl_entry *entry;

-    ASSERT(spin_is_locked(&pcidevs_lock));
+    ASSERT(pcidevs_is_locked());
     ASSERT(spin_is_locked(&d->event_lock));

     irq_desc = pirq_spin_lock_irq_desc(pirq, NULL);
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index bf2e822..68bdf19 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -1955,7 +1955,7 @@ int map_domain_pirq(
     struct pci_dev *pdev;
     unsigned int nr = 0;

-    ASSERT(spin_is_locked(&pcidevs_lock));
+    ASSERT(pcidevs_is_locked());

     ret = -ENODEV;
     if ( !cpu_has_apic )
@@ -2100,7 +2100,7 @@ int unmap_domain_pirq(struct domain *d, int pirq)
     if ( (pirq < 0) || (pirq >= d->nr_pirqs) )
         return -EINVAL;

-    ASSERT(spin_is_locked(&pcidevs_lock));
+    ASSERT(pcidevs_is_locked());
     ASSERT(spin_is_locked(&d->event_lock));

     info = pirq_info(d, pirq);
@@ -2226,7 +2226,7 @@ void free_domain_pirqs(struct domain *d)
 {
     int i;

-    spin_lock(&pcidevs_lock);
+    pcidevs_lock();
     spin_lock(&d->event_lock);

     for ( i = 0; i < d->nr_pirqs; i++ )
@@ -2234,7 +2234,7 @@ void free_domain_pirqs(struct domain *d)
             unmap_domain_pirq(d, i);

     spin_unlock(&d->event_lock);
-    spin_unlock(&pcidevs_lock);
+    pcidevs_unlock();
 }

 static void dump_irqs(unsigned char key)
diff --git a/xen/arch/x86/msi.c b/xen/arch/x86/msi.c
index 3dbb84d..6e5e33e 100644
--- a/xen/arch/x86/msi.c
+++ b/xen/arch/x86/msi.c
@@ -694,7 +694,7 @@ static int msi_capability_init(struct pci_dev *dev,
     u8 slot = PCI_SLOT(dev->devfn);
     u8 func = PCI_FUNC(dev->devfn);

-    ASSERT(spin_is_locked(&pcidevs_lock));
+    ASSERT(pcidevs_is_locked());
     pos = pci_find_cap_offset(seg, bus, slot, func, PCI_CAP_ID_MSI);
     if ( !pos )
         return -ENODEV;
@@ -852,7 +852,7 @@ static int msix_capability_init(struct pci_dev *dev,
     u8 func = PCI_FUNC(dev->devfn);
     bool_t maskall = msix->host_maskall;

-    ASSERT(spin_is_locked(&pcidevs_lock));
+    ASSERT(pcidevs_is_locked());

     control = pci_conf_read16(seg, bus, slot, func, msix_control_reg(pos));
     /*
@@ -1042,7 +1042,7 @@ static int __pci_enable_msi(struct msi_info *msi, struct msi_desc **desc)
     struct pci_dev *pdev;
     struct msi_desc *old_desc;

-    ASSERT(spin_is_locked(&pcidevs_lock));
+    ASSERT(pcidevs_is_locked());
     pdev = pci_get_pdev(msi->seg, msi->bus, msi->devfn);
     if ( !pdev )
         return -ENODEV;
@@ -1103,7 +1103,7 @@ static int __pci_enable_msix(struct msi_info *msi, struct msi_desc **desc)
     u8 func = PCI_FUNC(msi->devfn);
     struct msi_desc *old_desc;

-    ASSERT(spin_is_locked(&pcidevs_lock));
+    ASSERT(pcidevs_is_locked());
     pdev = pci_get_pdev(msi->seg, msi->bus, msi->devfn);
     pos = pci_find_cap_offset(msi->seg, msi->bus, slot, func, PCI_CAP_ID_MSIX);
     if ( !pdev || !pos )
@@ -1205,7 +1205,7 @@ int pci_prepare_msix(u16 seg, u8 bus, u8 devfn, bool_t off)
     if ( !pos )
         return -ENODEV;

-    spin_lock(&pcidevs_lock);
+    pcidevs_lock();
     pdev = pci_get_pdev(seg, bus, devfn);
     if ( !pdev )
         rc = -ENODEV;
@@ -1224,7 +1224,7 @@ int pci_prepare_msix(u16 seg, u8 bus, u8 devfn, bool_t off)
         rc = msix_capability_init(pdev, pos, NULL, NULL,
                                  multi_msix_capable(control));
     }
-    spin_unlock(&pcidevs_lock);
+    pcidevs_unlock();

     return rc;
 }
@@ -1235,7 +1235,7 @@ int pci_prepare_msix(u16 seg, u8 bus, u8 devfn, bool_t off)
  */
 int pci_enable_msi(struct msi_info *msi, struct msi_desc **desc)
 {
-    ASSERT(spin_is_locked(&pcidevs_lock));
+    ASSERT(pcidevs_is_locked());

     if ( !use_msi )
         return -EPERM;
@@ -1351,7 +1351,7 @@ int pci_restore_msi_state(struct pci_dev *pdev)
     unsigned int type = 0, pos = 0;
     u16 control = 0;

-    ASSERT(spin_is_locked(&pcidevs_lock));
+    ASSERT(pcidevs_is_locked());

     if ( !use_msi )
         return -EOPNOTSUPP;
diff --git a/xen/arch/x86/pci.c b/xen/arch/x86/pci.c
index 5bcecbb..892278d 100644
--- a/xen/arch/x86/pci.c
+++ b/xen/arch/x86/pci.c
@@ -82,13 +82,13 @@ int pci_conf_write_intercept(unsigned int seg, unsigned int bdf,
     if ( reg < 64 || reg >= 256 )
         return 0;

-    spin_lock(&pcidevs_lock);
+    pcidevs_lock();

     pdev = pci_get_pdev(seg, PCI_BUS(bdf), PCI_DEVFN2(bdf));
     if ( pdev )
         rc = pci_msi_conf_write_intercept(pdev, reg, size, data);

-    spin_unlock(&pcidevs_lock);
+    pcidevs_unlock();

     return rc;
 }
diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
index 57b7800..1cb9b58 100644
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -167,7 +167,7 @@ int physdev_map_pirq(domid_t domid, int type, int *index, int *pirq_p,
         goto free_domain;
     }

-    spin_lock(&pcidevs_lock);
+    pcidevs_lock();
     /* Verify or get pirq. */
     spin_lock(&d->event_lock);
     pirq = domain_irq_to_pirq(d, irq);
@@ -237,7 +237,7 @@ int physdev_map_pirq(domid_t domid, int type, int *index, int *pirq_p,
  done:
     spin_unlock(&d->event_lock);
-    spin_unlock(&pcidevs_lock);
+    pcidevs_unlock();
     if ( ret != 0 )
         switch ( type )
         {
@@ -275,11 +275,11 @@ int physdev_unmap_pirq(domid_t domid, int pirq)
         goto free_domain;
     }

-    spin_lock(&pcidevs_lock);
+    pcidevs_lock();
     spin_lock(&d->event_lock);
     ret = unmap_domain_pirq(d, pirq);
     spin_unlock(&d->event_lock);
-    spin_unlock(&pcidevs_lock);
+    pcidevs_unlock();

 free_domain:
     rcu_unlock_domain(d);
@@ -689,10 +689,10 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( copy_from_guest(&restore_msi, arg, 1) != 0 )
             break;

-        spin_lock(&pcidevs_lock);
+        pcidevs_lock();
         pdev = pci_get_pdev(0, restore_msi.bus, restore_msi.devfn);
         ret = pdev ? pci_restore_msi_state(pdev) : -ENODEV;
-        spin_unlock(&pcidevs_lock);
+        pcidevs_unlock();
         break;
     }

@@ -704,10 +704,10 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( copy_from_guest(&dev, arg, 1) != 0 )
             break;

-        spin_lock(&pcidevs_lock);
+        pcidevs_lock();
         pdev = pci_get_pdev(dev.seg, dev.bus, dev.devfn);
         ret = pdev ? pci_restore_msi_state(pdev) : -ENODEV;
-        spin_unlock(&pcidevs_lock);
+        pcidevs_unlock();
         break;
     }
diff --git a/xen/common/sysctl.c b/xen/common/sysctl.c
index 85e853f..401edb9 100644
--- a/xen/common/sysctl.c
+++ b/xen/common/sysctl.c
@@ -426,7 +426,7 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
                 break;
             }

-            spin_lock(&pcidevs_lock);
+            pcidevs_lock();
             pdev = pci_get_pdev(dev.seg, dev.bus, dev.devfn);
             if ( !pdev )
                 node = XEN_INVALID_DEV;
@@ -434,7 +434,7 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
                 node = XEN_INVALID_NODE_ID;
             else
                 node = pdev->node;
-            spin_unlock(&pcidevs_lock);
+            pcidevs_unlock();

             if ( copy_to_guest_offset(ti->nodes, i, &node, 1) )
             {
diff --git a/xen/drivers/passthrough/amd/iommu_init.c b/xen/drivers/passthrough/amd/iommu_init.c
index a400497..4536106 100644
--- a/xen/drivers/passthrough/amd/iommu_init.c
+++ b/xen/drivers/passthrough/amd/iommu_init.c
@@ -673,9 +673,9 @@ void parse_ppr_log_entry(struct amd_iommu *iommu, u32 entry[])
     bus = PCI_BUS(device_id);
     devfn = PCI_DEVFN2(device_id);

-    spin_lock(&pcidevs_lock);
+    pcidevs_lock();
     pdev = pci_get_real_pdev(iommu->seg, bus, devfn);
-    spin_unlock(&pcidevs_lock);
+    pcidevs_unlock();

     if ( pdev )
         guest_iommu_add_ppr_log(pdev->domain, entry);
@@ -787,10 +787,10 @@ static bool_t __init set_iommu_interrupt_handler(struct amd_iommu *iommu)
         return 0;
     }

-    spin_lock(&pcidevs_lock);
+    pcidevs_lock();
     iommu->msi.dev = pci_get_pdev(iommu->seg, PCI_BUS(iommu->bdf),
                                   PCI_DEVFN2(iommu->bdf));
-    spin_unlock(&pcidevs_lock);
+    pcidevs_unlock();
     if ( !iommu->msi.dev )
     {
         AMD_IOMMU_DEBUG("IOMMU: no pdev for %04x:%02x:%02x.%u\n",
diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index 78862c9..9d91dfc 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -593,7 +593,7 @@ static int update_paging_mode(struct domain *d, unsigned long gfn)
         hd->arch.paging_mode = level;
         hd->arch.root_table = new_root;

-        if ( !spin_is_locked(&pcidevs_lock) )
+        if ( !pcidevs_is_locked() )
             AMD_IOMMU_DEBUG("%s Try to access pdev_list "
                             "without aquiring pcidevs_lock.\n", __func__);
diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index c1c0b6b..dc3bfab 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -158,7 +158,7 @@ static void amd_iommu_setup_domain_device(

     spin_unlock_irqrestore(&iommu->lock, flags);

-    ASSERT(spin_is_locked(&pcidevs_lock));
+    ASSERT(pcidevs_is_locked());

     if ( pci_ats_device(iommu->seg, bus, pdev->devfn) &&
          !pci_ats_enabled(iommu->seg, bus, pdev->devfn) )
@@ -345,7 +345,7 @@ void amd_iommu_disable_domain_device(struct domain *domain,
     }
     spin_unlock_irqrestore(&iommu->lock, flags);

-    ASSERT(spin_is_locked(&pcidevs_lock));
+    ASSERT(pcidevs_is_locked());

     if ( devfn == pdev->devfn &&
          pci_ats_device(iommu->seg, bus, devfn) &&
diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index 27b3ca7..539518c 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -48,7 +48,7 @@ struct pci_seg {
     } bus2bridge[MAX_BUSES];
 };

-spinlock_t pcidevs_lock = SPIN_LOCK_UNLOCKED;
+static spinlock_t _pcidevs_lock = SPIN_LOCK_UNLOCKED;
 static struct radix_tree_root pci_segments;

 static inline struct pci_seg *get_pseg(u16 seg)
@@ -103,6 +103,26 @@ static int pci_segments_iterate(
     return rc;
 }

+void pcidevs_lock(void)
+{
+    spin_lock_recursive(&_pcidevs_lock);
+}
+
+void pcidevs_unlock(void)
+{
+    spin_unlock_recursive(&_pcidevs_lock);
+}
+
+int pcidevs_is_locked(void)
+{
+    return spin_is_locked(&_pcidevs_lock);
+}
+
+int pcidevs_trylock(void)
+{
+    return spin_trylock_recursive(&_pcidevs_lock);
+}
+
 void __init pt_pci_init(void)
 {
     radix_tree_init(&pci_segments);
@@ -412,14 +432,14 @@ int __init pci_hide_device(int bus, int devfn)
     struct pci_dev *pdev;
     int rc = -ENOMEM;

-    spin_lock(&pcidevs_lock);
+    pcidevs_lock();
     pdev = alloc_pdev(get_pseg(0), bus, devfn);
     if ( pdev )
     {
         _pci_hide_device(pdev);
         rc = 0;
     }
-    spin_unlock(&pcidevs_lock);
+    pcidevs_unlock();

     return rc;
 }
@@ -456,7 +476,7 @@ struct pci_dev *pci_get_pdev(int seg, int bus, int devfn)
     struct pci_seg *pseg = get_pseg(seg);
     struct pci_dev *pdev = NULL;

-    ASSERT(spin_is_locked(&pcidevs_lock));
+    ASSERT(pcidevs_is_locked());
     ASSERT(seg != -1 || bus == -1);
     ASSERT(bus != -1 || devfn == -1);

@@ -581,9 +601,9 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn,
         pdev_type = "extended function";
     else if (info->is_virtfn)
     {
-        spin_lock(&pcidevs_lock);
+        pcidevs_lock();
         pdev = pci_get_pdev(seg, info->physfn.bus, info->physfn.devfn);
-        spin_unlock(&pcidevs_lock);
+        pcidevs_unlock();
         if ( !pdev )
             pci_add_device(seg, info->physfn.bus, info->physfn.devfn,
                            NULL, node);
@@ -601,7 +621,7 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn,

     ret = -ENOMEM;
-    spin_lock(&pcidevs_lock);
+    pcidevs_lock();
     pseg = alloc_pseg(seg);
     if ( !pseg )
         goto out;
@@ -703,7 +723,7 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn,
         pci_enable_acs(pdev);

 out:
-    spin_unlock(&pcidevs_lock);
+    pcidevs_unlock();
     if ( !ret )
     {
         printk(XENLOG_DEBUG "PCI add %s %04x:%02x:%02x.%u\n", pdev_type,
@@ -735,7 +755,7 @@ int pci_remove_device(u16 seg, u8 bus, u8 devfn)
     if ( !pseg )
         return -ENODEV;

-    spin_lock(&pcidevs_lock);
+    pcidevs_lock();
     list_for_each_entry ( pdev, &pseg->alldevs_list, alldevs_list )
         if ( pdev->bus == bus && pdev->devfn == devfn )
         {
@@ -749,7 +769,7 @@ int pci_remove_device(u16 seg, u8 bus, u8 devfn)
             break;
         }

-    spin_unlock(&pcidevs_lock);
+    pcidevs_unlock();
     return ret;
 }
@@ -807,11 +827,11 @@ int pci_release_devices(struct domain *d)
     u8 bus, devfn;
     int ret;

-    spin_lock(&pcidevs_lock);
+    pcidevs_lock();
     ret = pci_clean_dpci_irqs(d);
     if ( ret )
     {
-        spin_unlock(&pcidevs_lock);
+        pcidevs_unlock();
         return ret;
     }
     while ( (pdev = pci_get_pdev_by_domain(d, -1, -1, -1)) )
@@ -823,7 +843,7 @@ int pci_release_devices(struct domain *d)
                    d->domain_id, pdev->seg, bus,
                    PCI_SLOT(devfn), PCI_FUNC(devfn));
     }
-    spin_unlock(&pcidevs_lock);
+    pcidevs_unlock();

     return 0;
 }
@@ -920,7 +940,7 @@ void pci_check_disable_device(u16 seg, u8 bus, u8 devfn)
     s_time_t now = NOW();
     u16 cword;

-    spin_lock(&pcidevs_lock);
+    pcidevs_lock();
     pdev = pci_get_real_pdev(seg, bus, devfn);
     if ( pdev )
     {
@@ -931,7 +951,7 @@ void pci_check_disable_device(u16 seg, u8 bus, u8 devfn)
         if ( ++pdev->fault.count < PT_FAULT_THRESHOLD )
             pdev = NULL;
     }
-    spin_unlock(&pcidevs_lock);
+    pcidevs_unlock();

     if ( !pdev )
         return;
@@ -988,9 +1008,9 @@ int __init scan_pci_devices(void)
 {
     int ret;

-    spin_lock(&pcidevs_lock);
+    pcidevs_lock();
     ret = pci_segments_iterate(_scan_pci_devices, NULL);
-    spin_unlock(&pcidevs_lock);
+    pcidevs_unlock();

     return ret;
 }
@@ -1054,17 +1074,17 @@ static int __hwdom_init _setup_hwdom_pci_devices(struct pci_seg *pseg, void *arg

             if ( iommu_verbose )
             {
-                spin_unlock(&pcidevs_lock);
+                pcidevs_unlock();
                 process_pending_softirqs();
-                spin_lock(&pcidevs_lock);
+                pcidevs_lock();
             }
         }

         if ( !iommu_verbose )
         {
-            spin_unlock(&pcidevs_lock);
+            pcidevs_unlock();
             process_pending_softirqs();
-            spin_lock(&pcidevs_lock);
+            pcidevs_lock();
         }
     }

@@ -1076,9 +1096,9 @@ void __hwdom_init setup_hwdom_pci_devices(
 {
     struct setup_hwdom ctxt = { .d = d, .handler = handler };

-    spin_lock(&pcidevs_lock);
+    pcidevs_lock();
     pci_segments_iterate(_setup_hwdom_pci_devices, &ctxt);
-    spin_unlock(&pcidevs_lock);
+    pcidevs_unlock();
 }

 #ifdef CONFIG_ACPI
@@ -1206,9 +1226,9 @@ static int _dump_pci_devices(struct pci_seg *pseg, void *arg)
 static void dump_pci_devices(unsigned char ch)
 {
     printk("==== PCI devices ====\n");
-    spin_lock(&pcidevs_lock);
+    pcidevs_lock();
     pci_segments_iterate(_dump_pci_devices, NULL);
-    spin_unlock(&pcidevs_lock);
+    pcidevs_unlock();
 }

 struct keyhandler dump_pci_devices_keyhandler = {
@@ -1248,7 +1268,7 @@ int iommu_add_device(struct pci_dev *pdev)
     if ( !pdev->domain )
         return -EINVAL;

-    ASSERT(spin_is_locked(&pcidevs_lock));
+    ASSERT(pcidevs_is_locked());

     hd = domain_hvm_iommu(pdev->domain);
     if ( !iommu_enabled || !hd->platform_ops )
@@ -1277,7 +1297,7 @@ int iommu_enable_device(struct pci_dev *pdev)
     if ( !pdev->domain )
         return -EINVAL;

-    ASSERT(spin_is_locked(&pcidevs_lock));
+    ASSERT(pcidevs_is_locked());

     hd = domain_hvm_iommu(pdev->domain);
     if ( !iommu_enabled || !hd->platform_ops ||
@@ -1326,9 +1346,9 @@ static int device_assigned(u16 seg, u8 bus, u8 devfn)
 {
     struct pci_dev *pdev;

-    spin_lock(&pcidevs_lock);
+    pcidevs_lock();
     pdev = pci_get_pdev_by_domain(hardware_domain, seg, bus, devfn);
-    spin_unlock(&pcidevs_lock);
+    pcidevs_unlock();

     return pdev ? 0 : -EBUSY;
 }
@@ -1350,13 +1370,13 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
                             p2m_get_hostp2m(d)->global_logdirty)) )
         return -EXDEV;

-    if ( !spin_trylock(&pcidevs_lock) )
+    if ( !pcidevs_trylock() )
         return -ERESTART;

     rc = iommu_construct(d);
     if ( rc )
     {
-        spin_unlock(&pcidevs_lock);
+        pcidevs_unlock();
         return rc;
     }

@@ -1387,7 +1407,7 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
  done:
     if ( !has_arch_pdevs(d) && need_iommu(d) )
         iommu_teardown(d);
-    spin_unlock(&pcidevs_lock);
+    pcidevs_unlock();

     return rc;
 }
@@ -1402,7 +1422,7 @@ int deassign_device(struct domain *d, u16 seg, u8 bus, u8 devfn)
     if ( !iommu_enabled || !hd->platform_ops )
         return -EINVAL;

-    ASSERT(spin_is_locked(&pcidevs_lock));
+    ASSERT(pcidevs_is_locked());
     pdev = pci_get_pdev_by_domain(d, seg, bus, devfn);
     if ( !pdev )
         return -ENODEV;
@@ -1457,7 +1477,7 @@ static int iommu_get_device_group(

     group_id = ops->get_device_group_id(seg, bus, devfn);

-    spin_lock(&pcidevs_lock);
+    pcidevs_lock();
     for_each_pdev( d, pdev )
     {
         if ( (pdev->seg != seg) ||
@@ -1476,14 +1496,14 @@ static int iommu_get_device_group(

             if ( unlikely(copy_to_guest_offset(buf, i, &bdf, 1)) )
             {
-                spin_unlock(&pcidevs_lock);
+                pcidevs_unlock();
                 return -1;
             }

             i++;
         }
     }

-    spin_unlock(&pcidevs_lock);
+    pcidevs_unlock();

     return i;
 }
@@ -1611,9 +1631,9 @@ int iommu_do_pci_domctl(
         bus = PCI_BUS(machine_sbdf);
         devfn = PCI_DEVFN2(machine_sbdf);

-        spin_lock(&pcidevs_lock);
+        pcidevs_lock();
         ret = deassign_device(d, seg, bus, devfn);
-        spin_unlock(&pcidevs_lock);
+        pcidevs_unlock();
         if ( ret )
             printk(XENLOG_G_ERR
                    "deassign %04x:%02x:%02x.%u from dom%d failed (%d)\n",
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index dd13865..30f007e 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1286,7 +1286,7 @@ int domain_context_mapping_one(
     u16 seg = iommu->intel->drhd->segment;
     int agaw;

-    ASSERT(spin_is_locked(&pcidevs_lock));
+    ASSERT(pcidevs_is_locked());
     spin_lock(&iommu->lock);
     maddr = bus_to_context_maddr(iommu, bus);
     context_entries = (struct context_entry *)map_vtd_domain_page(maddr);
@@ -1428,7 +1428,7 @@ static int domain_context_mapping(
     if ( !drhd )
         return -ENODEV;

-    ASSERT(spin_is_locked(&pcidevs_lock));
+    ASSERT(pcidevs_is_locked());

     switch ( pdev->type )
     {
@@ -1510,7 +1510,7 @@ int domain_context_unmap_one(
     u64 maddr;
     int iommu_domid;

-    ASSERT(spin_is_locked(&pcidevs_lock));
+    ASSERT(pcidevs_is_locked());
     spin_lock(&iommu->lock);

     maddr = bus_to_context_maddr(iommu, bus);
@@ -1820,7 +1820,7 @@ static int rmrr_identity_mapping(struct domain *d, bool_t map,
     struct mapped_rmrr *mrmrr;
     struct hvm_iommu *hd = domain_hvm_iommu(d);

-    ASSERT(spin_is_locked(&pcidevs_lock));
+    ASSERT(pcidevs_is_locked());
     ASSERT(rmrr->base_address < rmrr->end_address);

     /*
@@ -1885,7 +1885,7 @@ static int intel_iommu_add_device(u8 devfn, struct pci_dev *pdev)
     u16 bdf;
     int ret, i;

-    ASSERT(spin_is_locked(&pcidevs_lock));
+    ASSERT(pcidevs_is_locked());

     if ( !pdev->domain )
         return -EINVAL;
@@ -2113,7 +2113,7 @@ static void __hwdom_init setup_hwdom_rmrr(struct domain *d)
     u16 bdf;
     int ret, i;

-    spin_lock(&pcidevs_lock);
+    pcidevs_lock();
     for_each_rmrr_device ( rmrr, bdf, i )
     {
         /*
@@ -2127,7 +2127,7 @@ static void __hwdom_init setup_hwdom_rmrr(struct domain *d)
             dprintk(XENLOG_ERR VTDPREFIX,
                     "IOMMU: mapping reserved region failed\n");
     }
-    spin_unlock(&pcidevs_lock);
+    pcidevs_unlock();
 }

 int __init intel_vtd_setup(void)
diff --git a/xen/drivers/video/vga.c b/xen/drivers/video/vga.c
index 40e5963..61c6b13 100644
--- a/xen/drivers/video/vga.c
+++ b/xen/drivers/video/vga.c
@@ -117,9 +117,9 @@ void __init video_endboot(void)
             const struct pci_dev *pdev;
             u8 b = bus, df = devfn, sb;

-            spin_lock(&pcidevs_lock);
+            pcidevs_lock();
             pdev = pci_get_pdev(0, bus, devfn);
-            spin_unlock(&pcidevs_lock);
+            pcidevs_unlock();

             if ( !pdev ||
                  pci_conf_read16(0, bus, PCI_SLOT(devfn),
                                  PCI_FUNC(devfn),
diff --git a/xen/include/xen/pci.h b/xen/include/xen/pci.h
index a5aef55..b87571d 100644
--- a/xen/include/xen/pci.h
+++ b/xen/include/xen/pci.h
@@ -94,7 +94,10 @@ struct pci_dev {
  * interrupt handling related (the mask bit register).
  */

-extern spinlock_t pcidevs_lock;
+void pcidevs_lock(void);
+void pcidevs_unlock(void);
+int pcidevs_is_locked(void);
+int pcidevs_trylock(void);

 bool_t pci_known_segment(u16 seg);
 bool_t pci_device_detect(u16 seg, u8 bus, u8 dev, u8 func);