From patchwork Wed Apr 19 15:11:26 2017
X-Patchwork-Submitter: Roger Pau Monné
X-Patchwork-Id: 9687779
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Date: Wed, 19 Apr 2017 16:11:26 +0100
Message-ID: <20170419151128.87416-2-roger.pau@citrix.com>
X-Mailer: git-send-email 2.11.0 (Apple Git-81)
In-Reply-To: <20170419151128.87416-1-roger.pau@citrix.com>
References: <20170419151128.87416-1-roger.pau@citrix.com>
Cc: boris.ostrovsky@oracle.com, Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v2 1/3] x86/physdev: factor out the code to allocate and map a pirq

Move the code to allocate and map a domain pirq (either GSI or MSI) into
the x86 irq code base, so that it can be used outside of the physdev ops.
This change shouldn't affect the functionality of the already existing
physdev ops.

Signed-off-by: Roger Pau Monné
---
Jan Beulich
Andrew Cooper
---
Changes since v1:
 - New in this version.
---
 xen/arch/x86/irq.c        | 175 ++++++++++++++++++++++++++++++++++++++++++++++
 xen/arch/x86/physdev.c    | 124 ++------------------------------
 xen/include/asm-x86/irq.h |   5 ++
 3 files changed, 185 insertions(+), 119 deletions(-)

diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 676ba5216f..8f6a91994c 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -2537,3 +2537,178 @@ bool_t hvm_domain_use_pirq(const struct domain *d, const struct pirq *pirq)
     return is_hvm_domain(d) && pirq &&
            pirq->arch.hvm.emuirq != IRQ_UNBOUND;
 }
+
+int allocate_and_map_gsi_pirq(struct domain *d, int *index, int *pirq_p)
+{
+    int irq, pirq, ret;
+
+    if ( *index < 0 || *index >= nr_irqs_gsi )
+    {
+        dprintk(XENLOG_G_ERR, "dom%d: map invalid irq %d\n", d->domain_id,
+                *index);
+        return -EINVAL;
+    }
+
+    irq = domain_pirq_to_irq(current->domain, *index);
+    if ( irq <= 0 )
+    {
+        if ( is_hardware_domain(current->domain) )
+            irq = *index;
+        else {
+            dprintk(XENLOG_G_ERR, "dom%d: map pirq with incorrect irq!\n",
+                    d->domain_id);
+            return -EINVAL;
+        }
+    }
+
+    pcidevs_lock();
+    /* Verify or get pirq. */
+    spin_lock(&d->event_lock);
+    pirq = domain_irq_to_pirq(d, irq);
+
+    if ( *pirq_p < 0 )
+    {
+        if ( pirq )
+        {
+            dprintk(XENLOG_G_ERR, "dom%d: %d:%d already mapped to %d\n",
+                    d->domain_id, *index, *pirq_p, pirq);
+            if ( pirq < 0 )
+            {
+                ret = -EBUSY;
+                goto done;
+            }
+        }
+        else
+        {
+            pirq = get_free_pirq(d, MAP_PIRQ_TYPE_GSI);
+            if ( pirq < 0 )
+            {
+                dprintk(XENLOG_G_ERR, "dom%d: no free pirq\n", d->domain_id);
+                ret = pirq;
+                goto done;
+            }
+        }
+    }
+    else
+    {
+        if ( pirq && pirq != *pirq_p )
+        {
+            dprintk(XENLOG_G_ERR, "dom%d: pirq %d conflicts with irq %d\n",
+                    d->domain_id, *index, *pirq_p);
+            ret = -EEXIST;
+            goto done;
+        }
+        else
+            pirq = *pirq_p;
+    }
+
+    ret = map_domain_pirq(d, pirq, irq, MAP_PIRQ_TYPE_GSI, NULL);
+    if ( ret == 0 )
+        *pirq_p = pirq;
+
+ done:
+    spin_unlock(&d->event_lock);
+    pcidevs_unlock();
+    return ret;
+}
+
+int allocate_and_map_msi_pirq(struct domain *d, int *index, int *pirq_p,
+                              struct msi_info *msi)
+{
+    int irq, pirq, ret, type;
+
+    irq = *index;
+    if ( irq == -1 || msi->entry_nr > 1 )
+        irq = create_irq(NUMA_NO_NODE);
+
+    if ( irq < nr_irqs_gsi || irq >= nr_irqs )
+    {
+        dprintk(XENLOG_G_ERR, "dom%d: can't create irq for msi!\n",
+                d->domain_id);
+        ret = -EINVAL;
+        goto free_irq;
+    }
+
+    msi->irq = irq;
+    type = (msi->entry_nr > 1 && !msi->table_base) ? MAP_PIRQ_TYPE_MULTI_MSI
+                                                   : MAP_PIRQ_TYPE_MSI;
+
+    pcidevs_lock();
+    /* Verify or get pirq. */
+    spin_lock(&d->event_lock);
+    pirq = domain_irq_to_pirq(d, irq);
+    if ( *pirq_p < 0 )
+    {
+        if ( pirq )
+        {
+            dprintk(XENLOG_G_ERR, "dom%d: %d:%d already mapped to %d\n",
+                    d->domain_id, *index, *pirq_p, pirq);
+            if ( pirq < 0 )
+            {
+                ret = -EBUSY;
+                goto done;
+            }
+        }
+        else if ( msi->entry_nr > 1 && !msi->table_base )
+        {
+            if ( msi->entry_nr <= 0 || msi->entry_nr > 32 )
+                ret = -EDOM;
+            else if ( msi->entry_nr != 1 && !iommu_intremap )
+                ret = -EOPNOTSUPP;
+            else
+            {
+                while ( msi->entry_nr & (msi->entry_nr - 1) )
+                    msi->entry_nr += msi->entry_nr & -msi->entry_nr;
+                pirq = get_free_pirqs(d, msi->entry_nr);
+                ret = 0;
+                if ( pirq < 0 )
+                {
+                    while ( (msi->entry_nr >>= 1) > 1 )
+                        if ( get_free_pirqs(d, msi->entry_nr) > 0 )
+                            break;
+                    dprintk(XENLOG_G_ERR, "dom%d: no block of %d free pirqs\n",
+                            d->domain_id, msi->entry_nr << 1);
+                    ret = pirq;
+                }
+            }
+            if ( ret < 0 )
+                goto done;
+        }
+        else
+        {
+            pirq = get_free_pirq(d, type);
+            if ( pirq < 0 )
+            {
+                dprintk(XENLOG_G_ERR, "dom%d: no free pirq\n", d->domain_id);
+                ret = pirq;
+                goto done;
+            }
+        }
+    }
+    else
+    {
+        if ( pirq && pirq != *pirq_p )
+        {
+            dprintk(XENLOG_G_ERR, "dom%d: pirq %d conflicts with irq %d\n",
+                    d->domain_id, *index, *pirq_p);
+            ret = -EEXIST;
+            goto done;
+        }
+        else
+            pirq = *pirq_p;
+    }
+
+    ret = map_domain_pirq(d, pirq, irq, type, msi);
+    if ( ret == 0 )
+        *pirq_p = pirq;
+
+ done:
+    spin_unlock(&d->event_lock);
+    pcidevs_unlock();
+ free_irq:
+    if ( ret != 0 &&
+         ((msi->entry_nr > 1 && !msi->table_base) || *index == -1) )
+        destroy_irq(irq);
+    return ret;
+}
+
diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
index eec4a41231..b5eded4be9 100644
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -92,8 +92,7 @@ int physdev_map_pirq(domid_t domid, int type, int *index, int *pirq_p,
                      struct msi_info *msi)
 {
     struct domain *d = current->domain;
-    int pirq, irq, ret = 0;
-    void *map_data = NULL;
+    int ret;
 
     if ( domid == DOMID_SELF && is_hvm_domain(d) && has_pirq(d) )
     {
@@ -119,135 +118,22 @@ int physdev_map_pirq(domid_t domid, int type, int *index, int *pirq_p,
     switch ( type )
     {
     case MAP_PIRQ_TYPE_GSI:
-        if ( *index < 0 || *index >= nr_irqs_gsi )
-        {
-            dprintk(XENLOG_G_ERR, "dom%d: map invalid irq %d\n",
-                    d->domain_id, *index);
-            ret = -EINVAL;
-            goto free_domain;
-        }
-
-        irq = domain_pirq_to_irq(current->domain, *index);
-        if ( irq <= 0 )
-        {
-            if ( is_hardware_domain(current->domain) )
-                irq = *index;
-            else {
-                dprintk(XENLOG_G_ERR, "dom%d: map pirq with incorrect irq!\n",
-                        d->domain_id);
-                ret = -EINVAL;
-                goto free_domain;
-            }
-        }
+        ret = allocate_and_map_gsi_pirq(d, index, pirq_p);
         break;
-
     case MAP_PIRQ_TYPE_MSI:
         if ( !msi->table_base )
             msi->entry_nr = 1;
-        irq = *index;
-        if ( irq == -1 )
+        /* fallthrough */
     case MAP_PIRQ_TYPE_MULTI_MSI:
-            irq = create_irq(NUMA_NO_NODE);
-
-        if ( irq < nr_irqs_gsi || irq >= nr_irqs )
-        {
-            dprintk(XENLOG_G_ERR, "dom%d: can't create irq for msi!\n",
-                    d->domain_id);
-            ret = -EINVAL;
-            goto free_domain;
-        }
-
-        msi->irq = irq;
-        map_data = msi;
+        ret = allocate_and_map_msi_pirq(d, index, pirq_p, msi);
         break;
-
     default:
         dprintk(XENLOG_G_ERR, "dom%d: wrong map_pirq type %x\n",
                 d->domain_id, type);
         ret = -EINVAL;
-        goto free_domain;
-    }
-
-    pcidevs_lock();
-    /* Verify or get pirq. */
-    spin_lock(&d->event_lock);
-    pirq = domain_irq_to_pirq(d, irq);
-    if ( *pirq_p < 0 )
-    {
-        if ( pirq )
-        {
-            dprintk(XENLOG_G_ERR, "dom%d: %d:%d already mapped to %d\n",
-                    d->domain_id, *index, *pirq_p, pirq);
-            if ( pirq < 0 )
-            {
-                ret = -EBUSY;
-                goto done;
-            }
-        }
-        else if ( type == MAP_PIRQ_TYPE_MULTI_MSI )
-        {
-            if ( msi->entry_nr <= 0 || msi->entry_nr > 32 )
-                ret = -EDOM;
-            else if ( msi->entry_nr != 1 && !iommu_intremap )
-                ret = -EOPNOTSUPP;
-            else
-            {
-                while ( msi->entry_nr & (msi->entry_nr - 1) )
-                    msi->entry_nr += msi->entry_nr & -msi->entry_nr;
-                pirq = get_free_pirqs(d, msi->entry_nr);
-                if ( pirq < 0 )
-                {
-                    while ( (msi->entry_nr >>= 1) > 1 )
-                        if ( get_free_pirqs(d, msi->entry_nr) > 0 )
-                            break;
-                    dprintk(XENLOG_G_ERR, "dom%d: no block of %d free pirqs\n",
-                            d->domain_id, msi->entry_nr << 1);
-                    ret = pirq;
-                }
-            }
-            if ( ret < 0 )
-                goto done;
-        }
-        else
-        {
-            pirq = get_free_pirq(d, type);
-            if ( pirq < 0 )
-            {
-                dprintk(XENLOG_G_ERR, "dom%d: no free pirq\n", d->domain_id);
-                ret = pirq;
-                goto done;
-            }
-        }
-    }
-    else
-    {
-        if ( pirq && pirq != *pirq_p )
-        {
-            dprintk(XENLOG_G_ERR, "dom%d: pirq %d conflicts with irq %d\n",
-                    d->domain_id, *index, *pirq_p);
-            ret = -EEXIST;
-            goto done;
-        }
-        else
-            pirq = *pirq_p;
+        break;
     }
 
-    ret = map_domain_pirq(d, pirq, irq, type, map_data);
-    if ( ret == 0 )
-        *pirq_p = pirq;
-
- done:
-    spin_unlock(&d->event_lock);
-    pcidevs_unlock();
-    if ( ret != 0 )
-        switch ( type )
-        {
-        case MAP_PIRQ_TYPE_MSI:
-            if ( *index == -1 )
-        case MAP_PIRQ_TYPE_MULTI_MSI:
-                destroy_irq(irq);
-            break;
-        }
  free_domain:
     rcu_unlock_domain(d);
     return ret;
diff --git a/xen/include/asm-x86/irq.h b/xen/include/asm-x86/irq.h
index ef625ebb13..2b1701e67b 100644
--- a/xen/include/asm-x86/irq.h
+++ b/xen/include/asm-x86/irq.h
@@ -200,4 +200,9 @@ bool_t cpu_has_pending_apic_eoi(void);
 
 static inline void arch_move_irqs(struct vcpu *v) { }
 
+struct msi_info;
+int allocate_and_map_gsi_pirq(struct domain *d, int *index, int *pirq_p);
+int allocate_and_map_msi_pirq(struct domain *d, int *index, int *pirq_p,
+                              struct msi_info *msi);
+
 #endif /* _ASM_HW_IRQ_H */
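
For context only, and not part of the patch itself: a minimal sketch of how a
caller outside of the physdev ops might use the new GSI helper. The wrapper
map_gsi_example() and its error handling are hypothetical; only
allocate_and_map_gsi_pirq() is introduced by this series.

/* Illustrative sketch: map a GSI into d and let the helper pick a pirq. */
static int map_gsi_example(struct domain *d, unsigned int gsi)
{
    int index = gsi; /* GSI to map, must be below nr_irqs_gsi */
    int pirq = -1;   /* negative value asks for a free pirq to be allocated */
    int rc = allocate_and_map_gsi_pirq(d, &index, &pirq);

    if ( rc )
        return rc;

    /* On success pirq holds the pirq the GSI is now bound to in d. */
    return pirq;
}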