From patchwork Tue Oct 28 08:26:25 2014
X-Patchwork-Submitter: Jiang Liu
X-Patchwork-Id: 5172871
From: Jiang Liu
To: Benjamin Herrenschmidt, Thomas Gleixner, Ingo Molnar, "H. Peter Anvin",
 "Rafael J. Wysocki", Bjorn Helgaas, Randy Dunlap, Yinghai Lu,
 Borislav Petkov, Grant Likely, Marc Zyngier, Yingjoe Chen,
 x86@kernel.org, Matthias Brugger, Jiang Liu
Cc: Tony Luck, Konrad Rzeszutek Wilk, Greg Kroah-Hartman, Joerg Roedel,
 linux-kernel@vger.kernel.org, linux-acpi@vger.kernel.org,
 linux-pci@vger.kernel.org, Andrew Morton,
 linux-arm-kernel@lists.infradead.org
Subject: [Patch Part2 v3 07/24] x86, uv: Use new irqdomain interfaces to
 allocate/free IRQ
Date: Tue, 28 Oct 2014 16:26:25 +0800
Message-Id: <1414484803-10311-8-git-send-email-jiang.liu@linux.intel.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1414484803-10311-1-git-send-email-jiang.liu@linux.intel.com>
References: <1414484803-10311-1-git-send-email-jiang.liu@linux.intel.com>

Use the new irqdomain interfaces to allocate and free IRQs, so that
GENERIC_IRQ_LEGACY_ALLOC_HWIRQ can be killed later.
Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
---
 arch/x86/platform/uv/uv_irq.c | 27 +++++++++++----------------
 1 file changed, 11 insertions(+), 16 deletions(-)

diff --git a/arch/x86/platform/uv/uv_irq.c b/arch/x86/platform/uv/uv_irq.c
index 0ce673645432..474912d03f40 100644
--- a/arch/x86/platform/uv/uv_irq.c
+++ b/arch/x86/platform/uv/uv_irq.c
@@ -12,6 +12,7 @@
 #include <linux/module.h>
 #include <linux/rbtree.h>
 #include <linux/slab.h>
+#include <linux/irqdomain.h>
 #include <linux/irq.h>
 
 #include <asm/apic.h>
@@ -130,24 +131,14 @@ static int
 arch_enable_uv_irq(char *irq_name, unsigned int irq, int cpu, int mmr_blade,
 		   unsigned long mmr_offset, int limit)
 {
-	const struct cpumask *eligible_cpu = cpumask_of(cpu);
 	struct irq_cfg *cfg = irq_cfg(irq);
 	unsigned long mmr_value;
 	struct uv_IO_APIC_route_entry *entry;
-	int mmr_pnode, err;
-	unsigned int dest;
+	int mmr_pnode;
 
 	BUILD_BUG_ON(sizeof(struct uv_IO_APIC_route_entry) !=
 			sizeof(unsigned long));
 
-	err = assign_irq_vector(irq, cfg, eligible_cpu);
-	if (err != 0)
-		return err;
-
-	err = apic->cpu_mask_to_apicid_and(eligible_cpu, eligible_cpu, &dest);
-	if (err != 0)
-		return err;
-
 	if (limit == UV_AFFINITY_CPU)
 		irq_set_status_flags(irq, IRQ_NO_BALANCING);
 	else
@@ -164,7 +155,7 @@ arch_enable_uv_irq(char *irq_name, unsigned int irq, int cpu, int mmr_blade,
 	entry->polarity = 0;
 	entry->trigger = 0;
 	entry->mask = 0;
-	entry->dest = dest;
+	entry->dest = cfg->dest_apicid;
 
 	mmr_pnode = uv_blade_to_pnode(mmr_blade);
 	uv_write_global_mmr64(mmr_pnode, mmr_offset, mmr_value);
@@ -238,9 +229,13 @@ uv_set_irq_affinity(struct irq_data *data, const struct cpumask *mask,
 int uv_setup_irq(char *irq_name, int cpu, int mmr_blade,
 		 unsigned long mmr_offset, int limit)
 {
-	int ret, irq = irq_alloc_hwirq(uv_blade_to_memory_nid(mmr_blade));
+	int ret, irq;
+	struct irq_alloc_info info;
 
-	if (!irq)
+	init_irq_alloc_info(&info, cpumask_of(cpu));
+	irq = irq_domain_alloc_irqs(NULL, 1, uv_blade_to_memory_nid(mmr_blade),
+				    &info);
+	if (irq <= 0)
 		return -EBUSY;
 
 	ret = arch_enable_uv_irq(irq_name, irq, cpu, mmr_blade, mmr_offset,
@@ -248,7 +243,7 @@ int uv_setup_irq(char *irq_name, int cpu, int mmr_blade,
 	if (ret == irq)
 		uv_set_irq_2_mmr_info(irq, mmr_offset, mmr_blade);
 	else
-		irq_free_hwirq(irq);
+		irq_domain_free_irqs(irq, 1);
 
 	return ret;
 }
@@ -283,6 +278,6 @@ void uv_teardown_irq(unsigned int irq)
 			n = n->rb_right;
 	}
 	spin_unlock_irqrestore(&uv_irq_lock, irqflags);
-	irq_free_hwirq(irq);
+	irq_domain_free_irqs(irq, 1);
 }
 EXPORT_SYMBOL_GPL(uv_teardown_irq);
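
For reference, the allocation/free pattern this patch switches to can be
condensed as in the sketch below. This is not code from the patch: the
example_alloc_uv_irq()/example_free_uv_irq() names are hypothetical, and it
assumes the x86-specific struct irq_alloc_info / init_irq_alloc_info()
helpers introduced earlier in this series, so it only compiles inside a
kernel tree rather than as a standalone program.

/*
 * Sketch of the new irqdomain-based allocation pattern (illustrative only;
 * relies on kernel headers and on helpers added earlier in this series).
 */
#include <linux/irqdomain.h>	/* irq_domain_alloc_irqs(), irq_domain_free_irqs() */
#include <linux/cpumask.h>	/* cpumask_of() */
#include <asm/hw_irq.h>		/* struct irq_alloc_info, init_irq_alloc_info() */

/* Hypothetical helper mirroring the allocation step in uv_setup_irq(). */
static int example_alloc_uv_irq(int cpu, int nid)
{
	struct irq_alloc_info info;
	int irq;

	/* Describe the desired CPU affinity for the allocation ... */
	init_irq_alloc_info(&info, cpumask_of(cpu));

	/*
	 * ... then ask the default (NULL) irqdomain for one virtual IRQ on
	 * the given NUMA node.  This is the call that replaces the legacy
	 * irq_alloc_hwirq(nid) in the patch above.
	 */
	irq = irq_domain_alloc_irqs(NULL, 1, nid, &info);
	if (irq <= 0)
		return -EBUSY;

	return irq;
}

/* Tear-down mirrors uv_teardown_irq(): free the single IRQ allocated above. */
static void example_free_uv_irq(int irq)
{
	irq_domain_free_irqs(irq, 1);	/* replaces irq_free_hwirq(irq) */
}

Passing a NULL domain to irq_domain_alloc_irqs() selects the default
irqdomain, which is why the converted call sites do not need to look up a
domain explicitly.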