From patchwork Wed Sep 4 13:27:29 2019
From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
Cc: Kevin Tian
Date: Wed, 4 Sep 2019 15:27:29 +0200
In-Reply-To: <050de29e-5a10-8b4a-44f1-0241f4b33ee2@suse.com>
References: <050de29e-5a10-8b4a-44f1-0241f4b33ee2@suse.com>
Subject: [Xen-devel] [PATCH 1/3] VT-d: tidy _to_() functions

Drop iommu_to_drhd() altogether - there's no need for a loop here, the
corresponding DRHD is a field in struct intel_iommu. Constify
drhd_to_rhsa()'s parameter and adjust style.

Signed-off-by: Jan Beulich
Reviewed-by: Kevin Tian

--- a/xen/drivers/passthrough/vtd/dmar.c
+++ b/xen/drivers/passthrough/vtd/dmar.c
@@ -128,7 +128,7 @@ static int acpi_ioapic_device_match(
     return 0;
 }
 
-struct acpi_drhd_unit * ioapic_to_drhd(unsigned int apic_id)
+struct acpi_drhd_unit *ioapic_to_drhd(unsigned int apic_id)
 {
     struct acpi_drhd_unit *drhd;
     list_for_each_entry( drhd, &acpi_drhd_units, list )
@@ -137,21 +137,7 @@ struct acpi_drhd_unit * ioapic_to_drhd(u
     return NULL;
 }
 
-struct acpi_drhd_unit * iommu_to_drhd(struct iommu *iommu)
-{
-    struct acpi_drhd_unit *drhd;
-
-    if ( iommu == NULL )
-        return NULL;
-
-    list_for_each_entry( drhd, &acpi_drhd_units, list )
-        if ( drhd->iommu == iommu )
-            return drhd;
-
-    return NULL;
-}
-
-struct iommu * ioapic_to_iommu(unsigned int apic_id)
+struct iommu *ioapic_to_iommu(unsigned int apic_id)
 {
     struct acpi_drhd_unit *drhd;
 
@@ -265,7 +251,7 @@ struct acpi_atsr_unit *acpi_find_matched
     return all_ports;
 }
 
-struct acpi_rhsa_unit * drhd_to_rhsa(struct acpi_drhd_unit *drhd)
+struct acpi_rhsa_unit *drhd_to_rhsa(const struct acpi_drhd_unit *drhd)
 {
     struct acpi_rhsa_unit *rhsa;
--- a/xen/drivers/passthrough/vtd/extern.h
+++ b/xen/drivers/passthrough/vtd/extern.h
@@ -52,12 +52,11 @@ int iommu_flush_iec_global(struct iommu
 int iommu_flush_iec_index(struct iommu *iommu, u8 im, u16 iidx);
 void clear_fault_bits(struct iommu *iommu);
 
-struct iommu * ioapic_to_iommu(unsigned int apic_id);
-struct iommu * hpet_to_iommu(unsigned int hpet_id);
-struct acpi_drhd_unit * ioapic_to_drhd(unsigned int apic_id);
-struct acpi_drhd_unit * hpet_to_drhd(unsigned int hpet_id);
-struct acpi_drhd_unit * iommu_to_drhd(struct iommu *iommu);
-struct acpi_rhsa_unit * drhd_to_rhsa(struct acpi_drhd_unit *drhd);
+struct iommu *ioapic_to_iommu(unsigned int apic_id);
+struct iommu *hpet_to_iommu(unsigned int hpet_id);
+struct acpi_drhd_unit *ioapic_to_drhd(unsigned int apic_id);
+struct acpi_drhd_unit *hpet_to_drhd(unsigned int hpet_id);
+struct acpi_rhsa_unit *drhd_to_rhsa(const struct acpi_drhd_unit *drhd);
 
 struct acpi_drhd_unit * find_ats_dev_drhd(struct iommu *iommu);
--- a/xen/drivers/passthrough/vtd/intremap.c
+++ b/xen/drivers/passthrough/vtd/intremap.c
@@ -760,7 +760,6 @@ int __init intel_setup_hpet_msi(struct m
 
 int enable_intremap(struct iommu *iommu, int eim)
 {
-    struct acpi_drhd_unit *drhd;
     struct ir_ctrl *ir_ctrl;
     u32 sts, gcmd;
     unsigned long flags;
@@ -796,8 +795,8 @@ int enable_intremap(struct iommu *iommu,
 
     if ( ir_ctrl->iremap_maddr == 0 )
     {
-        drhd = iommu_to_drhd(iommu);
-        ir_ctrl->iremap_maddr = alloc_pgtable_maddr(drhd, IREMAP_ARCH_PAGE_NR);
+        ir_ctrl->iremap_maddr = alloc_pgtable_maddr(iommu->intel->drhd,
+                                                    IREMAP_ARCH_PAGE_NR);
         if ( ir_ctrl->iremap_maddr == 0 )
         {
             dprintk(XENLOG_WARNING VTDPREFIX,
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -224,7 +224,6 @@ void free_pgtable_maddr(u64 maddr)
 
 /* context entry handling */
 static u64 bus_to_context_maddr(struct iommu *iommu, u8 bus)
 {
-    struct acpi_drhd_unit *drhd;
     struct root_entry *root, *root_entries;
     u64 maddr;
 
@@ -233,8 +232,7 @@ static u64 bus_to_context_maddr(struct i
     root = &root_entries[bus];
     if ( !root_present(*root) )
     {
-        drhd = iommu_to_drhd(iommu);
-        maddr = alloc_pgtable_maddr(drhd, 1);
+        maddr = alloc_pgtable_maddr(iommu->intel->drhd, 1);
         if ( maddr == 0 )
         {
             unmap_vtd_domain_page(root_entries);
--- a/xen/drivers/passthrough/vtd/qinval.c
+++ b/xen/drivers/passthrough/vtd/qinval.c
@@ -397,7 +397,6 @@ static int __must_check flush_iotlb_qi(v
 
 int enable_qinval(struct iommu *iommu)
 {
-    struct acpi_drhd_unit *drhd;
     struct qi_ctrl *qi_ctrl;
     struct iommu_flush *flush;
     u32 sts;
@@ -416,8 +415,8 @@ int enable_qinval(struct iommu *iommu)
 
     if ( qi_ctrl->qinval_maddr == 0 )
     {
-        drhd = iommu_to_drhd(iommu);
-        qi_ctrl->qinval_maddr = alloc_pgtable_maddr(drhd, QINVAL_ARCH_PAGE_NR);
+        qi_ctrl->qinval_maddr = alloc_pgtable_maddr(iommu->intel->drhd,
+                                                    QINVAL_ARCH_PAGE_NR);
         if ( qi_ctrl->qinval_maddr == 0 )
        {
            dprintk(XENLOG_WARNING VTDPREFIX,

From patchwork Wed Sep 4 13:27:50 2019
From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
Cc: Kevin Tian
Message-ID: <53d3e1ed-93c4-56de-dbb8-2517feaa93bb@suse.com>
Date: Wed, 4 Sep 2019 15:27:50 +0200
In-Reply-To: <050de29e-5a10-8b4a-44f1-0241f4b33ee2@suse.com>
References: <050de29e-5a10-8b4a-44f1-0241f4b33ee2@suse.com>
Subject: [Xen-devel] [PATCH 2/3] VT-d: avoid PCI device lookup

The two uses of pci_get_pdev_by_domain() lack proper locking, but are
also only used to get hold of a NUMA node ID. Calculate and store the
node ID earlier on and remove the lookups (in lieu of fixing the
locking).

While doing this it became apparent that iommu_alloc()'s use of
alloc_pgtable_maddr() would occur before RHSAs would have been parsed:
iommu_alloc() gets called from the DRHD parsing routine, which - on
spec conforming platforms - happens strictly before RHSA parsing. Defer
the allocation until after all ACPI table parsing has finished,
establishing the node ID there first.

Suggested-by: Kevin Tian
Signed-off-by: Jan Beulich
Reviewed-by: Kevin Tian

--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -151,6 +151,10 @@ int iommu_domain_init(struct domain *d)
     struct domain_iommu *hd = dom_iommu(d);
     int ret = 0;
 
+#ifdef CONFIG_NUMA
+    hd->node = NUMA_NO_NODE;
+#endif
+
     ret = arch_iommu_domain_init(d);
     if ( ret )
         return ret;
--- a/xen/drivers/passthrough/vtd/dmar.c
+++ b/xen/drivers/passthrough/vtd/dmar.c
@@ -965,6 +965,7 @@ int __init acpi_dmar_init(void)
 {
     acpi_physical_address dmar_addr;
     acpi_native_uint dmar_len;
+    const struct acpi_drhd_unit *drhd;
     int ret;
 
     if ( ACPI_SUCCESS(acpi_get_table_phys(ACPI_SIG_DMAR, 0,
@@ -978,6 +979,21 @@ int __init acpi_dmar_init(void)
 
     ret = parse_dmar_table(acpi_parse_dmar);
 
+    for_each_drhd_unit ( drhd )
+    {
+        const struct acpi_rhsa_unit *rhsa = drhd_to_rhsa(drhd);
+        struct iommu *iommu = drhd->iommu;
+
+        if ( ret )
+            break;
+
+        if ( rhsa )
+            iommu->intel->node = pxm_to_node(rhsa->proximity_domain);
+
+        if ( !(iommu->root_maddr = alloc_pgtable_maddr(1, iommu->intel->node)) )
+            ret = -ENOMEM;
+    }
+
     if ( !ret )
     {
         iommu_init_ops = &intel_iommu_init_ops;
--- a/xen/drivers/passthrough/vtd/extern.h
+++ b/xen/drivers/passthrough/vtd/extern.h
@@ -73,7 +73,7 @@ unsigned int get_cache_line_size(void);
 void cacheline_flush(char *);
 void flush_all_cache(void);
 
-u64 alloc_pgtable_maddr(struct acpi_drhd_unit *drhd, unsigned long npages);
+uint64_t alloc_pgtable_maddr(unsigned long npages, nodeid_t node);
 void free_pgtable_maddr(u64 maddr);
 void *map_vtd_domain_page(u64 maddr);
 void unmap_vtd_domain_page(void *va);
--- a/xen/drivers/passthrough/vtd/intremap.c
+++ b/xen/drivers/passthrough/vtd/intremap.c
@@ -795,8 +795,8 @@ int enable_intremap(struct iommu *iommu,
 
     if ( ir_ctrl->iremap_maddr == 0 )
     {
-        ir_ctrl->iremap_maddr = alloc_pgtable_maddr(iommu->intel->drhd,
-                                                    IREMAP_ARCH_PAGE_NR);
+        ir_ctrl->iremap_maddr = alloc_pgtable_maddr(IREMAP_ARCH_PAGE_NR,
+                                                    iommu->intel->node);
         if ( ir_ctrl->iremap_maddr == 0 )
         {
             dprintk(XENLOG_WARNING VTDPREFIX,
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -184,18 +184,12 @@ void iommu_flush_cache_page(void *addr,
 }
 
 /* Allocate page table, return its machine address */
-u64 alloc_pgtable_maddr(struct acpi_drhd_unit *drhd, unsigned long npages)
+uint64_t alloc_pgtable_maddr(unsigned long npages, nodeid_t node)
 {
-    struct acpi_rhsa_unit *rhsa;
     struct page_info *pg, *cur_pg;
     u64 *vaddr;
-    nodeid_t node = NUMA_NO_NODE;
     unsigned int i;
 
-    rhsa = drhd_to_rhsa(drhd);
-    if ( rhsa )
-        node = pxm_to_node(rhsa->proximity_domain);
-
     pg = alloc_domheap_pages(NULL, get_order_from_pages(npages),
                              (node == NUMA_NO_NODE) ? 0 : MEMF_node(node));
     if ( !pg )
@@ -232,7 +226,7 @@ static u64 bus_to_context_maddr(struct i
     root = &root_entries[bus];
     if ( !root_present(*root) )
     {
-        maddr = alloc_pgtable_maddr(iommu->intel->drhd, 1);
+        maddr = alloc_pgtable_maddr(1, iommu->intel->node);
         if ( maddr == 0 )
         {
             unmap_vtd_domain_page(root_entries);
@@ -249,8 +243,6 @@ static u64 bus_to_context_maddr(struct i
 
 static u64 addr_to_dma_page_maddr(struct domain *domain, u64 addr, int alloc)
 {
-    struct acpi_drhd_unit *drhd;
-    struct pci_dev *pdev;
     struct domain_iommu *hd = dom_iommu(domain);
     int addr_width = agaw_to_width(hd->arch.agaw);
     struct dma_pte *parent, *pte = NULL;
@@ -260,17 +252,10 @@ static u64 addr_to_dma_page_maddr(struct
     addr &= (((u64)1) << addr_width) - 1;
 
     ASSERT(spin_is_locked(&hd->arch.mapping_lock));
-    if ( hd->arch.pgd_maddr == 0 )
-    {
-        /*
-         * just get any passthrough device in the domainr - assume user
-         * assigns only devices from same node to a given guest.
-         */
-        pdev = pci_get_pdev_by_domain(domain, -1, -1, -1);
-        drhd = acpi_find_matched_drhd_unit(pdev);
-        if ( !alloc || ((hd->arch.pgd_maddr = alloc_pgtable_maddr(drhd, 1)) == 0) )
-            goto out;
-    }
+    if ( !hd->arch.pgd_maddr &&
+         (!alloc ||
+          ((hd->arch.pgd_maddr = alloc_pgtable_maddr(1, hd->node)) == 0)) )
+        goto out;
 
     parent = (struct dma_pte *)map_vtd_domain_page(hd->arch.pgd_maddr);
     while ( level > 1 )
@@ -284,9 +269,7 @@ static u64 addr_to_dma_page_maddr(struct
             if ( !alloc )
                 break;
 
-            pdev = pci_get_pdev_by_domain(domain, -1, -1, -1);
-            drhd = acpi_find_matched_drhd_unit(pdev);
-            pte_maddr = alloc_pgtable_maddr(drhd, 1);
+            pte_maddr = alloc_pgtable_maddr(1, hd->node);
             if ( !pte_maddr )
                 break;
@@ -1190,11 +1173,9 @@ int __init iommu_alloc(struct acpi_drhd_
         return -ENOMEM;
     }
     iommu->intel->drhd = drhd;
+    iommu->intel->node = NUMA_NO_NODE;
     drhd->iommu = iommu;
 
-    if ( !(iommu->root_maddr = alloc_pgtable_maddr(drhd, 1)) )
-        return -ENOMEM;
-
     iommu->reg = ioremap(drhd->address, PAGE_SIZE);
     if ( !iommu->reg )
         return -ENOMEM;
@@ -1488,6 +1469,17 @@ static int domain_context_mapping(struct
     if ( !drhd )
         return -ENODEV;
 
+    /*
+     * Generally we assume only devices from one node to get assigned to a
+     * given guest. But even if not, by replacing the prior value here we
+     * guarantee that at least some basic allocations for the device being
+     * added will get done against its node. Any further allocations for
+     * this or other devices may be penalized then, but some would also be
+     * if we left other than NUMA_NO_NODE untouched here.
+     */
+    if ( drhd->iommu->intel->node != NUMA_NO_NODE )
+        dom_iommu(domain)->node = drhd->iommu->intel->node;
+
     ASSERT(pcidevs_locked());
 
     switch ( pdev->type )
--- a/xen/drivers/passthrough/vtd/iommu.h
+++ b/xen/drivers/passthrough/vtd/iommu.h
@@ -530,6 +530,7 @@ struct intel_iommu {
     struct ir_ctrl ir_ctrl;
     struct iommu_flush flush;
     struct acpi_drhd_unit *drhd;
+    nodeid_t node;
 };
 
 struct iommu {
--- a/xen/drivers/passthrough/vtd/qinval.c
+++ b/xen/drivers/passthrough/vtd/qinval.c
@@ -415,8 +415,8 @@ int enable_qinval(struct iommu *iommu)
 
     if ( qi_ctrl->qinval_maddr == 0 )
     {
-        qi_ctrl->qinval_maddr = alloc_pgtable_maddr(iommu->intel->drhd,
-                                                    QINVAL_ARCH_PAGE_NR);
+        qi_ctrl->qinval_maddr = alloc_pgtable_maddr(QINVAL_ARCH_PAGE_NR,
+                                                    iommu->intel->node);
         if ( qi_ctrl->qinval_maddr == 0 )
         {
             dprintk(XENLOG_WARNING VTDPREFIX,
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -266,6 +266,11 @@ struct domain_iommu {
     struct list_head dt_devices;
 #endif
 
+#ifdef CONFIG_NUMA
+    /* NUMA node to do IOMMU related allocations against. */
+    nodeid_t node;
+#endif
+
     /* Features supported by the IOMMU */
     DECLARE_BITMAP(features, IOMMU_FEAT_count);

From patchwork Wed Sep 4 13:28:10 2019
From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
References:
<050de29e-5a10-8b4a-44f1-0241f4b33ee2@suse.com>
Cc: Kevin Tian
Message-ID: <959e3395-4637-6e9b-74dc-9982acf10dec@suse.com>
Date: Wed, 4 Sep 2019 15:28:10 +0200
In-Reply-To: <050de29e-5a10-8b4a-44f1-0241f4b33ee2@suse.com>
Subject: [Xen-devel] [PATCH 3/3] VT-d/ATS: tidy device_in_domain()

Use appropriate types. Drop unnecessary casts. Check for failures which
can (at least in theory, because of non-obvious breakage elsewhere)
occur, instead of ones which really can't (map_domain_page() won't
return NULL).

Signed-off-by: Jan Beulich
Reviewed-by: Kevin Tian

--- a/xen/drivers/passthrough/vtd/x86/ats.c
+++ b/xen/drivers/passthrough/vtd/x86/ats.c
@@ -71,23 +71,25 @@ int ats_device(const struct pci_dev *pde
     return pos;
 }
 
-static int device_in_domain(const struct iommu *iommu,
-                            const struct pci_dev *pdev, u16 did)
+static bool device_in_domain(const struct iommu *iommu,
+                             const struct pci_dev *pdev, uint16_t did)
 {
-    struct root_entry *root_entry = NULL;
+    struct root_entry *root_entry;
     struct context_entry *ctxt_entry = NULL;
-    int tt, found = 0;
+    unsigned int tt;
+    bool found = false;
 
-    root_entry = (struct root_entry *) map_vtd_domain_page(iommu->root_maddr);
-    if ( !root_entry || !root_present(root_entry[pdev->bus]) )
-        goto out;
-
-    ctxt_entry = (struct context_entry *)
-        map_vtd_domain_page(root_entry[pdev->bus].val);
+    if ( unlikely(!iommu->root_maddr) )
+    {
+        ASSERT_UNREACHABLE();
+        return false;
+    }
 
-    if ( ctxt_entry == NULL )
+    root_entry = map_vtd_domain_page(iommu->root_maddr);
+    if ( !root_present(root_entry[pdev->bus]) )
         goto out;
 
+    ctxt_entry = map_vtd_domain_page(root_entry[pdev->bus].val);
     if ( context_domain_id(ctxt_entry[pdev->devfn]) != did )
         goto out;
@@ -95,7 +97,7 @@ static int device_in_domain(const struct
     if ( tt != CONTEXT_TT_DEV_IOTLB )
         goto out;
 
-    found = 1;
+    found = true;
 
 out:
     if ( root_entry )
         unmap_vtd_domain_page(root_entry);