From patchwork Tue Nov 4 12:01:44 2014
X-Patchwork-Submitter: Jiang Liu
X-Patchwork-Id: 5226661
From: Jiang Liu
To: Benjamin Herrenschmidt, Thomas Gleixner, Ingo Molnar,
 "H. Peter Anvin", "Rafael J. Wysocki", Bjorn Helgaas, Randy Dunlap,
 Yinghai Lu, Borislav Petkov, Grant Likely, Marc Zyngier, Yingjoe Chen,
 x86@kernel.org, Matthias Brugger, Jiang Liu
Cc: Tony Luck, Konrad Rzeszutek Wilk, Greg Kroah-Hartman, Joerg Roedel,
 linux-kernel@vger.kernel.org, linux-acpi@vger.kernel.org,
 linux-pci@vger.kernel.org, Andrew Morton,
 linux-arm-kernel@lists.infradead.org
Subject: [Patch Part2 v4 10/31] x86, uv: Use new irqdomain interfaces to allocate/free IRQ
Date: Tue, 4 Nov 2014 20:01:44 +0800
Message-Id: <1415102525-9898-11-git-send-email-jiang.liu@linux.intel.com>
In-Reply-To: <1415102525-9898-1-git-send-email-jiang.liu@linux.intel.com>
References: <1415102525-9898-1-git-send-email-jiang.liu@linux.intel.com>

Use the new irqdomain interfaces to allocate and free IRQs, so that
GENERIC_IRQ_LEGACY_ALLOC_HWIRQ can be killed later.
Signed-off-by: Jiang Liu
---
 arch/x86/platform/uv/uv_irq.c | 27 +++++++++++----------------
 1 file changed, 11 insertions(+), 16 deletions(-)

diff --git a/arch/x86/platform/uv/uv_irq.c b/arch/x86/platform/uv/uv_irq.c
index 0ce673645432..474912d03f40 100644
--- a/arch/x86/platform/uv/uv_irq.c
+++ b/arch/x86/platform/uv/uv_irq.c
@@ -12,6 +12,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
@@ -130,24 +131,14 @@ static int
 arch_enable_uv_irq(char *irq_name, unsigned int irq, int cpu, int mmr_blade,
 		       unsigned long mmr_offset, int limit)
 {
-	const struct cpumask *eligible_cpu = cpumask_of(cpu);
 	struct irq_cfg *cfg = irq_cfg(irq);
 	unsigned long mmr_value;
 	struct uv_IO_APIC_route_entry *entry;
-	int mmr_pnode, err;
-	unsigned int dest;
+	int mmr_pnode;
 
 	BUILD_BUG_ON(sizeof(struct uv_IO_APIC_route_entry) !=
 			sizeof(unsigned long));
 
-	err = assign_irq_vector(irq, cfg, eligible_cpu);
-	if (err != 0)
-		return err;
-
-	err = apic->cpu_mask_to_apicid_and(eligible_cpu, eligible_cpu, &dest);
-	if (err != 0)
-		return err;
-
 	if (limit == UV_AFFINITY_CPU)
 		irq_set_status_flags(irq, IRQ_NO_BALANCING);
 	else
@@ -164,7 +155,7 @@ arch_enable_uv_irq(char *irq_name, unsigned int irq, int cpu, int mmr_blade,
 	entry->polarity = 0;
 	entry->trigger = 0;
 	entry->mask = 0;
-	entry->dest = dest;
+	entry->dest = cfg->dest_apicid;
 
 	mmr_pnode = uv_blade_to_pnode(mmr_blade);
 	uv_write_global_mmr64(mmr_pnode, mmr_offset, mmr_value);
@@ -238,9 +229,13 @@ uv_set_irq_affinity(struct irq_data *data, const struct cpumask *mask,
 int uv_setup_irq(char *irq_name, int cpu, int mmr_blade,
 		 unsigned long mmr_offset, int limit)
 {
-	int ret, irq = irq_alloc_hwirq(uv_blade_to_memory_nid(mmr_blade));
+	int ret, irq;
+	struct irq_alloc_info info;
 
-	if (!irq)
+	init_irq_alloc_info(&info, cpumask_of(cpu));
+	irq = irq_domain_alloc_irqs(NULL, 1, uv_blade_to_memory_nid(mmr_blade),
+				    &info);
+	if (irq <= 0)
 		return -EBUSY;
 
 	ret = arch_enable_uv_irq(irq_name, irq, cpu, mmr_blade, mmr_offset,
@@ -248,7 +243,7 @@ int uv_setup_irq(char *irq_name, int cpu, int mmr_blade,
 	if (ret == irq)
 		uv_set_irq_2_mmr_info(irq, mmr_offset, mmr_blade);
 	else
-		irq_free_hwirq(irq);
+		irq_domain_free_irqs(irq, 1);
 
 	return ret;
 }
@@ -283,6 +278,6 @@ void uv_teardown_irq(unsigned int irq)
 			n = n->rb_right;
 	}
 	spin_unlock_irqrestore(&uv_irq_lock, irqflags);
-	irq_free_hwirq(irq);
+	irq_domain_free_irqs(irq, 1);
 }
 EXPORT_SYMBOL_GPL(uv_teardown_irq);
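
For reference, the core of this change is the switch from the legacy hwirq
allocator to the generic irqdomain allocation path. Below is a minimal
sketch of the two calling patterns, assuming the x86 vector domain is
registered as the default (NULL) domain and that struct irq_alloc_info and
init_irq_alloc_info() are provided by earlier patches in this series; the
uv_example_* helpers are hypothetical and exist only to illustrate the API
usage, they are not part of the patch.

/*
 * Sketch only: contrasts the legacy hwirq allocator with the irqdomain
 * path used above. The uv_example_* helpers are hypothetical; struct
 * irq_alloc_info and init_irq_alloc_info() are assumed to come from
 * earlier patches in this series (asm/hw_irq.h on x86).
 */
#include <linux/cpumask.h>
#include <linux/errno.h>
#include <linux/irq.h>
#include <linux/irqdomain.h>
#include <asm/hw_irq.h>

/* Legacy path (needs GENERIC_IRQ_LEGACY_ALLOC_HWIRQ): bare hwirq on a node. */
static int uv_example_alloc_legacy(int nid)
{
	unsigned int irq = irq_alloc_hwirq(nid);	/* 0 means failure */

	return irq ? irq : -EBUSY;
}

/*
 * New path: allocate one IRQ from the default (NULL) domain on a node,
 * passing architecture-specific hints through struct irq_alloc_info.
 */
static int uv_example_alloc_irqdomain(int nid, int cpu)
{
	struct irq_alloc_info info;
	int irq;

	init_irq_alloc_info(&info, cpumask_of(cpu));
	irq = irq_domain_alloc_irqs(NULL, 1, nid, &info);

	return irq > 0 ? irq : -EBUSY;		/* <= 0 means failure */
}

/* Freeing is symmetric in both schemes. */
static void uv_example_free(unsigned int irq)
{
	irq_domain_free_irqs(irq, 1);		/* replaces irq_free_hwirq(irq) */
}

The main behavioural difference to note is the failure convention:
irq_alloc_hwirq() returns 0 on failure, while irq_domain_alloc_irqs()
returns a negative errno on failure, which is why the check in
uv_setup_irq() changes from "if (!irq)" to "if (irq <= 0)".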