From patchwork Fri May 17 10:44:37 2019
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 10947715
Message-Id: <5CDE90950200007800230069@prv1-mh.provo.novell.com>
Date: Fri, 17 May 2019 04:44:37 -0600
From: "Jan Beulich"
To: "xen-devel"
Cc: Andrew Cooper, Wei Liu, Roger Pau Monne
Subject: [Xen-devel] [PATCH v3 01/15] x86/IRQ: deal with move-in-progress
 state in fixup_irqs()

The flag being set may prevent affinity changes, as these often imply
assignment of a new vector. When there's no possible destination left
for the IRQ, the clearing of the flag needs to happen right from
fixup_irqs().

Additionally _assign_irq_vector() needs to avoid setting the flag when
there's no online CPU left in what gets put into ->arch.old_cpu_mask.
The old vector can be released right away in this case.

Also extend the log message about broken affinity to include the new
affinity as well, making it possible to notice issues with affinity
changes not actually having taken place. Swap the if/else-if order
there at the same time to reduce the number of conditions checked.

At the same time replace two open-coded instances with the new helper
function.

Signed-off-by: Jan Beulich
Reviewed-by: Roger Pau Monné
---
v3: Move release_old_vec() further up (so a later patch won't need to).
    Re-base.
v2: Add/use valid_irq_vector().
v1b: Also update vector_irq[] in the code added to fixup_irqs().

--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -99,6 +99,27 @@ void unlock_vector_lock(void)
     spin_unlock(&vector_lock);
 }
 
+static inline bool valid_irq_vector(unsigned int vector)
+{
+    return vector >= FIRST_DYNAMIC_VECTOR && vector <= LAST_HIPRIORITY_VECTOR;
+}
+
+static void release_old_vec(struct irq_desc *desc)
+{
+    unsigned int vector = desc->arch.old_vector;
+
+    desc->arch.old_vector = IRQ_VECTOR_UNASSIGNED;
+    cpumask_clear(desc->arch.old_cpu_mask);
+
+    if ( !valid_irq_vector(vector) )
+        ASSERT_UNREACHABLE();
+    else if ( desc->arch.used_vectors )
+    {
+        ASSERT(test_bit(vector, desc->arch.used_vectors));
+        clear_bit(vector, desc->arch.used_vectors);
+    }
+}
+
 static void trace_irq_mask(uint32_t event, int irq, int vector,
                            const cpumask_t *mask)
 {
@@ -288,14 +309,7 @@ static void __clear_irq_vector(int irq)
         per_cpu(vector_irq, cpu)[old_vector] = ~irq;
     }
 
-    desc->arch.old_vector = IRQ_VECTOR_UNASSIGNED;
-    cpumask_clear(desc->arch.old_cpu_mask);
-
-    if ( desc->arch.used_vectors )
-    {
-        ASSERT(test_bit(old_vector, desc->arch.used_vectors));
-        clear_bit(old_vector, desc->arch.used_vectors);
-    }
+    release_old_vec(desc);
 
     desc->arch.move_in_progress = 0;
 }
@@ -520,12 +534,21 @@ next:
         /* Found one! */
         current_vector = vector;
         current_offset = offset;
-        if (old_vector > 0) {
-            desc->arch.move_in_progress = 1;
-            cpumask_copy(desc->arch.old_cpu_mask, desc->arch.cpu_mask);
+
+        if ( old_vector > 0 )
+        {
+            cpumask_and(desc->arch.old_cpu_mask, desc->arch.cpu_mask,
+                        &cpu_online_map);
             desc->arch.old_vector = desc->arch.vector;
+            if ( !cpumask_empty(desc->arch.old_cpu_mask) )
+                desc->arch.move_in_progress = 1;
+            else
+                /* This can happen while offlining a CPU. */
+                release_old_vec(desc);
         }
+
         trace_irq_mask(TRC_HW_IRQ_ASSIGN_VECTOR, irq, vector, &tmp_mask);
+
         for_each_cpu(new_cpu, &tmp_mask)
             per_cpu(vector_irq, new_cpu)[vector] = irq;
         desc->arch.vector = vector;
@@ -694,14 +717,8 @@ void irq_move_cleanup_interrupt(struct c
 
         if ( desc->arch.move_cleanup_count == 0 )
         {
-            desc->arch.old_vector = IRQ_VECTOR_UNASSIGNED;
-            cpumask_clear(desc->arch.old_cpu_mask);
-
-            if ( desc->arch.used_vectors )
-            {
-                ASSERT(test_bit(vector, desc->arch.used_vectors));
-                clear_bit(vector, desc->arch.used_vectors);
-            }
+            ASSERT(vector == desc->arch.old_vector);
+            release_old_vec(desc);
         }
 unlock:
         spin_unlock(&desc->lock);
@@ -2400,6 +2417,33 @@ void fixup_irqs(const cpumask_t *mask, b
             continue;
         }
 
+        /*
+         * In order for the affinity adjustment below to be successful, we
+         * need __assign_irq_vector() to succeed. This in particular means
+         * clearing desc->arch.move_in_progress if this would otherwise
+         * prevent the function from succeeding. Since there's no way for the
+         * flag to get cleared anymore when there's no possible destination
+         * left (the only possibility then would be the IRQs enabled window
+         * after this loop), there's then also no race with us doing it here.
+         *
+         * Therefore the logic here and there need to remain in sync.
+         */
+        if ( desc->arch.move_in_progress &&
+             !cpumask_intersects(mask, desc->arch.cpu_mask) )
+        {
+            unsigned int cpu;
+
+            cpumask_and(&affinity, desc->arch.old_cpu_mask, &cpu_online_map);
+
+            spin_lock(&vector_lock);
+            for_each_cpu(cpu, &affinity)
+                per_cpu(vector_irq, cpu)[desc->arch.old_vector] = ~irq;
+            spin_unlock(&vector_lock);
+
+            release_old_vec(desc);
+            desc->arch.move_in_progress = 0;
+        }
+
         cpumask_and(&affinity, &affinity, mask);
         if ( cpumask_empty(&affinity) )
         {
@@ -2418,15 +2462,18 @@ void fixup_irqs(const cpumask_t *mask, b
         if ( desc->handler->enable )
             desc->handler->enable(desc);
 
+        cpumask_copy(&affinity, desc->affinity);
+
         spin_unlock(&desc->lock);
 
         if ( !verbose )
            continue;
 
-        if ( break_affinity && set_affinity )
-            printk("Broke affinity for irq %i\n", irq);
-        else if ( !set_affinity )
-            printk("Cannot set affinity for irq %i\n", irq);
+        if ( !set_affinity )
+            printk("Cannot set affinity for IRQ%u\n", irq);
+        else if ( break_affinity )
+            printk("Broke affinity for IRQ%u, new: %*pb\n",
+                   irq, nr_cpu_ids, &affinity);
     }
 
     /* That doesn't seem sufficient. Give it 1ms. */
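For readers without the tree at hand, the check-and-release pattern that
release_old_vec() centralizes can be condensed into a small stand-alone
sketch; the vector range constants below are illustrative placeholders,
not Xen's real values.

/*
 * Stand-alone sketch (not Xen code) of the check-and-release pattern
 * that release_old_vec() centralizes. Placeholder constants.
 */
#include <stdbool.h>
#include <stdio.h>

#define FIRST_DYNAMIC_VECTOR   0x20
#define LAST_HIPRIORITY_VECTOR 0xf7
#define IRQ_VECTOR_UNASSIGNED  (-1)

static bool valid_irq_vector(int vector)
{
    return vector >= FIRST_DYNAMIC_VECTOR && vector <= LAST_HIPRIORITY_VECTOR;
}

int main(void)
{
    int old_vector = 0x30; /* left behind by an interrupted move */

    if ( valid_irq_vector(old_vector) )
        printf("releasing old vector %#x\n", old_vector);
    old_vector = IRQ_VECTOR_UNASSIGNED; /* models the old_vector reset */
    return 0;
}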
From patchwork Fri May 17 10:45:12 2019
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 10947719
Message-Id: <5CDE90B8020000780023006C@prv1-mh.provo.novell.com>
Date: Fri, 17 May 2019 04:45:12 -0600
From: "Jan Beulich"
To: "xen-devel"
Cc: Andrew Cooper, Wei Liu, Roger Pau Monne
Subject: [Xen-devel] [PATCH v3 02/15] x86/IRQ: deal with move cleanup count
 state in fixup_irqs()

The cleanup IPI may get sent immediately before a CPU gets removed from
the online map. In such a case the IPI would get handled on the CPU
being offlined no earlier than in the interrupts-disabled window after
fixup_irqs()' main loop. This is too late, however, because a possible
affinity change may incur the need for vector assignment, which will
fail when the IRQ's move cleanup count is still non-zero.

To fix this
- record the set of CPUs the cleanup IPI actually gets sent to,
  alongside setting their count,
- adjust the count in fixup_irqs(), accounting for all CPUs that the
  cleanup IPI was sent to, but that are no longer online,
- bail early from the cleanup IPI handler when the CPU is no longer
  online, to prevent double accounting.

Signed-off-by: Jan Beulich
Reviewed-by: Roger Pau Monné
Acked-by: Andrew Cooper

--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -668,6 +668,9 @@ void irq_move_cleanup_interrupt(struct c
     ack_APIC_irq();
 
     me = smp_processor_id();
+    if ( !cpu_online(me) )
+        return;
+
     for ( vector = FIRST_DYNAMIC_VECTOR;
           vector <= LAST_HIPRIORITY_VECTOR; vector++)
     {
@@ -727,11 +730,14 @@ unlock:
 
 static void send_cleanup_vector(struct irq_desc *desc)
 {
-    cpumask_t cleanup_mask;
+    cpumask_and(desc->arch.old_cpu_mask, desc->arch.old_cpu_mask,
+                &cpu_online_map);
+    desc->arch.move_cleanup_count = cpumask_weight(desc->arch.old_cpu_mask);
 
-    cpumask_and(&cleanup_mask, desc->arch.old_cpu_mask, &cpu_online_map);
-    desc->arch.move_cleanup_count = cpumask_weight(&cleanup_mask);
-    send_IPI_mask(&cleanup_mask, IRQ_MOVE_CLEANUP_VECTOR);
+    if ( desc->arch.move_cleanup_count )
+        send_IPI_mask(desc->arch.old_cpu_mask, IRQ_MOVE_CLEANUP_VECTOR);
+    else
+        release_old_vec(desc);
 
     desc->arch.move_in_progress = 0;
 }
@@ -2410,6 +2416,16 @@ void fixup_irqs(const cpumask_t *mask, b
              vector <= LAST_HIPRIORITY_VECTOR )
             cpumask_and(desc->arch.cpu_mask, desc->arch.cpu_mask, mask);
 
+        if ( desc->arch.move_cleanup_count )
+        {
+            /* The cleanup IPI may have got sent while we were still online. */
+            cpumask_andnot(&affinity, desc->arch.old_cpu_mask,
+                           &cpu_online_map);
+            desc->arch.move_cleanup_count -= cpumask_weight(&affinity);
+            if ( !desc->arch.move_cleanup_count )
+                release_old_vec(desc);
+        }
+
         cpumask_copy(&affinity, desc->affinity);
         if ( !desc->action || cpumask_subset(&affinity, mask) )
        {
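The cleanup-count accounting can be illustrated with a small stand-alone
sketch; a plain unsigned long stands in for cpumask_t (so it assumes at
most 64 CPUs), and the mask values are made up.

/* Sketch of the accounting, not Xen code. */
#include <stdio.h>

int main(void)
{
    unsigned long old_cpu_mask = 0x0f;   /* cleanup IPI sent to CPUs 0-3 */
    unsigned long cpu_online_map = 0x07; /* CPU 3 went offline since     */
    unsigned int move_cleanup_count;

    /* send_cleanup_vector(): one pending ack per IPI recipient */
    move_cleanup_count = __builtin_popcountl(old_cpu_mask);

    /* fixup_irqs(): forget recipients that are no longer online */
    move_cleanup_count -= __builtin_popcountl(old_cpu_mask & ~cpu_online_map);

    printf("acks still expected: %u\n", move_cleanup_count); /* prints 3 */
    return 0;
}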
From patchwork Fri May 17 10:46:10 2019
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 10947721
Message-Id: <5CDE90F2020000780023006F@prv1-mh.provo.novell.com>
Date: Fri, 17 May 2019 04:46:10 -0600
From: "Jan Beulich"
To: "xen-devel"
Cc: Andrew Cooper, Wei Liu, Roger Pau Monne
Subject: [Xen-devel] [PATCH v3 03/15] x86/IRQ: improve dump_irqs()

Don't log a stray trailing comma. Shorten a few fields.

Signed-off-by: Jan Beulich
Reviewed-by: Roger Pau Monné
Acked-by: Andrew Cooper

--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -2334,7 +2334,7 @@ static void dump_irqs(unsigned char key)
 
         spin_lock_irqsave(&desc->lock, flags);
 
-        printk("   IRQ:%4d affinity:%*pb vec:%02x type=%-15s status=%08x ",
+        printk("   IRQ:%4d aff:%*pb vec:%02x %-15s status=%03x ",
                irq, nr_cpu_ids, cpumask_bits(desc->affinity), desc->arch.vector,
                desc->handler->typename, desc->status);
 
@@ -2345,23 +2345,21 @@ static void dump_irqs(unsigned char key)
         {
             action = (irq_guest_action_t *)desc->action;
 
-            printk("in-flight=%d domain-list=", action->in_flight);
+            printk("in-flight=%d%c",
+                   action->in_flight, action->nr_guests ? ' ' : '\n');
 
-            for ( i = 0; i < action->nr_guests; i++ )
+            for ( i = 0; i < action->nr_guests; )
             {
-                d = action->guest[i];
+                d = action->guest[i++];
                 pirq = domain_irq_to_pirq(d, irq);
                 info = pirq_info(d, pirq);
-                printk("%u:%3d(%c%c%c)",
+                printk("d%d:%3d(%c%c%c)%c",
                        d->domain_id, pirq,
                        evtchn_port_is_pending(d, info->evtchn) ? 'P' : '-',
                        evtchn_port_is_masked(d, info->evtchn) ? 'M' : '-',
-                       (info->masked ? 'M' : '-'));
-                if ( i != action->nr_guests )
-                    printk(",");
+                       info->masked ? 'M' : '-',
+                       i < action->nr_guests ? ',' : '\n');
             }
-
-            printk("\n");
         }
         else if ( desc->action )
             printk("%ps()\n", desc->action->handler);
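The separator change in the guest list loop follows a common idiom,
shown in isolation below; the names are made up, not Xen's.

/* Sketch: ',' between entries, '\n' after the last one. */
#include <stdio.h>

int main(void)
{
    int nr_guests = 3;

    for ( int i = 0; i < nr_guests; )
    {
        printf("d%d", i++);
        putchar(i < nr_guests ? ',' : '\n');  /* prints "d0,d1,d2" */
    }
    return 0;
}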
From patchwork Fri May 17 10:46:38 2019
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 10947725
Message-Id: <5CDE910E0200007800230072@prv1-mh.provo.novell.com>
Date: Fri, 17 May 2019 04:46:38 -0600
From: "Jan Beulich"
To: "xen-devel"
Cc: Andrew Cooper, Wei Liu, Roger Pau Monne
Subject: [Xen-devel] [PATCH v3 04/15] x86/IRQ: desc->affinity should strictly
 represent the requested value

desc->arch.cpu_mask reflects the actual set of target CPUs. Don't ever
fiddle with desc->affinity itself, except to store caller requested
values.

Note that assign_irq_vector() now takes a NULL incoming CPU mask to
mean "all CPUs", rather than just "all currently online CPUs". This way
no further affinity adjustment is needed after onlining further CPUs.

This renders both set_native_irq_info() uses (which weren't using
proper locking anyway) redundant - drop the function altogether.

Signed-off-by: Jan Beulich
Reviewed-by: Roger Pau Monné

--- a/xen/arch/x86/io_apic.c
+++ b/xen/arch/x86/io_apic.c
@@ -1039,7 +1039,6 @@ static void __init setup_IO_APIC_irqs(vo
             SET_DEST(entry, logical, cpu_mask_to_apicid(TARGET_CPUS));
             spin_lock_irqsave(&ioapic_lock, flags);
             __ioapic_write_entry(apic, pin, 0, entry);
-            set_native_irq_info(irq, TARGET_CPUS);
             spin_unlock_irqrestore(&ioapic_lock, flags);
         }
     }
@@ -2248,7 +2247,6 @@ int io_apic_set_pci_routing (int ioapic,
 
     spin_lock_irqsave(&ioapic_lock, flags);
     __ioapic_write_entry(ioapic, pin, 0, entry);
-    set_native_irq_info(irq, TARGET_CPUS);
     spin_unlock(&ioapic_lock);
 
     spin_lock(&desc->lock);
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -582,11 +582,16 @@ int assign_irq_vector(int irq, const cpu
 
     spin_lock_irqsave(&vector_lock, flags);
     ret = __assign_irq_vector(irq, desc, mask ?: TARGET_CPUS);
-    if (!ret) {
+    if ( !ret )
+    {
         ret = desc->arch.vector;
-        cpumask_copy(desc->affinity, desc->arch.cpu_mask);
+        if ( mask )
+            cpumask_copy(desc->affinity, mask);
+        else
+            cpumask_setall(desc->affinity);
     }
     spin_unlock_irqrestore(&vector_lock, flags);
+
     return ret;
 }
 
@@ -2334,9 +2339,10 @@ static void dump_irqs(unsigned char key)
 
         spin_lock_irqsave(&desc->lock, flags);
 
-        printk("   IRQ:%4d aff:%*pb vec:%02x %-15s status=%03x ",
-               irq, nr_cpu_ids, cpumask_bits(desc->affinity), desc->arch.vector,
-               desc->handler->typename, desc->status);
+        printk("   IRQ:%4d aff:%*pb/%*pb vec:%02x %-15s status=%03x ",
+               irq, nr_cpu_ids, cpumask_bits(desc->affinity),
+               nr_cpu_ids, cpumask_bits(desc->arch.cpu_mask),
+               desc->arch.vector, desc->handler->typename, desc->status);
 
         if ( ssid )
             printk("Z=%-25s ", ssid);
@@ -2424,8 +2430,7 @@ void fixup_irqs(const cpumask_t *mask, b
                 release_old_vec(desc);
         }
 
-        cpumask_copy(&affinity, desc->affinity);
-        if ( !desc->action || cpumask_subset(&affinity, mask) )
+        if ( !desc->action || cpumask_subset(desc->affinity, mask) )
         {
             spin_unlock(&desc->lock);
             continue;
         }
@@ -2458,12 +2463,13 @@ void fixup_irqs(const cpumask_t *mask, b
             desc->arch.move_in_progress = 0;
         }
 
-        cpumask_and(&affinity, &affinity, mask);
-        if ( cpumask_empty(&affinity) )
+        if ( !cpumask_intersects(mask, desc->affinity) )
         {
             break_affinity = true;
-            cpumask_copy(&affinity, mask);
+            cpumask_setall(&affinity);
         }
+        else
+            cpumask_copy(&affinity, desc->affinity);
 
         if ( desc->handler->disable )
             desc->handler->disable(desc);
--- a/xen/include/xen/irq.h
+++ b/xen/include/xen/irq.h
@@ -162,11 +162,6 @@ extern irq_desc_t *domain_spin_lock_irq_
 extern irq_desc_t *pirq_spin_lock_irq_desc(
     const struct pirq *, unsigned long *pflags);
 
-static inline void set_native_irq_info(unsigned int irq, const cpumask_t *mask)
-{
-    cpumask_copy(irq_to_desc(irq)->affinity, mask);
-}
-
 unsigned int set_desc_affinity(struct irq_desc *, const cpumask_t *);
 
 #ifndef arch_hwdom_irqs
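A stand-alone sketch of the "NULL means all CPUs" convention that
assign_irq_vector() adopts here; a 64-bit word models cpumask_t, and
the names and values are illustrative only.

/* Sketch, not Xen code. */
#include <stdio.h>

static unsigned long desc_affinity;

static void record_affinity(const unsigned long *mask)
{
    if ( mask )
        desc_affinity = *mask; /* store exactly what the caller asked for */
    else
        desc_affinity = ~0UL;  /* setall: any CPU, even ones onlined later */
}

int main(void)
{
    unsigned long req = 0x3;

    record_affinity(&req);
    printf("affinity %#lx\n", desc_affinity);
    record_affinity(NULL);
    printf("affinity %#lx\n", desc_affinity);
    return 0;
}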
From patchwork Fri May 17 10:47:18 2019
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 10947727
Message-Id: <5CDE91360200007800230075@prv1-mh.provo.novell.com>
Date: Fri, 17 May 2019 04:47:18 -0600
From: "Jan Beulich"
To: "xen-devel"
Cc: Andrew Cooper, Wei Liu, Roger Pau Monne
Subject: [Xen-devel] [PATCH v3 05/15] x86/IRQ: consolidate use of
 ->arch.cpu_mask

Mixed meaning was implied so far by different pieces of code -
disagreement was in particular about whether to expect offline CPUs'
bits to possibly be set. Switch to a mostly consistent meaning
(exception being high priority interrupts, which would perhaps better
be switched to the same model as well in due course).

Use the field to record the vector allocation mask, i.e. potentially
including bits of offline (parked) CPUs. This implies that before
passing the mask to certain functions (most notably
cpu_mask_to_apicid()) it needs to be further reduced to the online
subset.

The exception of high priority interrupts is also why for the moment
_bind_irq_vector() is left as is, despite looking wrong: It's used
exclusively for IRQ0, which isn't supposed to move off CPU0 at any
time.

The prior lack of restricting to online CPUs in set_desc_affinity()
before calling cpu_mask_to_apicid() in particular allowed (in x2APIC
clustered mode) offlined CPUs to end up enabled in an IRQ's destination
field. (I wonder whether vector_allocation_cpumask_flat() shouldn't
follow a similar model, using cpu_present_map in favor of
cpu_online_map.)

For IO-APIC code it was definitely wrong to potentially store, as a
fallback, TARGET_CPUS (i.e. all online ones) into the field, as that
would have caused problems when determining on which CPUs to release
vectors when they've gone out of use. Disable interrupts instead when
no valid target CPU can be established (which code elsewhere should
guarantee to never happen), and log a message in such an unlikely
event.

Signed-off-by: Jan Beulich
Reviewed-by: Roger Pau Monné
Acked-by: Andrew Cooper
---
v2: New.

--- a/xen/arch/x86/io_apic.c
+++ b/xen/arch/x86/io_apic.c
@@ -680,7 +680,7 @@ void /*__init*/ setup_ioapic_dest(void)
                 continue;
             irq = pin_2_irq(irq_entry, ioapic, pin);
             desc = irq_to_desc(irq);
-            BUG_ON(cpumask_empty(desc->arch.cpu_mask));
+            BUG_ON(!cpumask_intersects(desc->arch.cpu_mask, &cpu_online_map));
             set_ioapic_affinity_irq(desc, desc->arch.cpu_mask);
         }
 
@@ -2194,7 +2194,6 @@ int io_apic_set_pci_routing (int ioapic,
 {
     struct irq_desc *desc = irq_to_desc(irq);
     struct IO_APIC_route_entry entry;
-    cpumask_t mask;
     unsigned long flags;
     int vector;
 
@@ -2229,11 +2228,17 @@ int io_apic_set_pci_routing (int ioapic,
         return vector;
     entry.vector = vector;
 
-    cpumask_copy(&mask, TARGET_CPUS);
-    /* Don't chance ending up with an empty mask. */
-    if (cpumask_intersects(&mask, desc->arch.cpu_mask))
-        cpumask_and(&mask, &mask, desc->arch.cpu_mask);
-    SET_DEST(entry, logical, cpu_mask_to_apicid(&mask));
+    if (cpumask_intersects(desc->arch.cpu_mask, TARGET_CPUS)) {
+        cpumask_t *mask = this_cpu(scratch_cpumask);
+
+        cpumask_and(mask, desc->arch.cpu_mask, TARGET_CPUS);
+        SET_DEST(entry, logical, cpu_mask_to_apicid(mask));
+    } else {
+        printk(XENLOG_ERR "IRQ%d: no target CPU (%*pb vs %*pb)\n",
+               irq, nr_cpu_ids, cpumask_bits(desc->arch.cpu_mask),
+               nr_cpu_ids, cpumask_bits(TARGET_CPUS));
+        desc->status |= IRQ_DISABLED;
+    }
 
     apic_printk(APIC_DEBUG, KERN_DEBUG "IOAPIC[%d]: Set PCI routing entry "
                 "(%d-%d -> %#x -> IRQ %d Mode:%i Active:%i)\n", ioapic,
@@ -2419,7 +2424,21 @@ int ioapic_guest_write(unsigned long phy
 
     /* Set the vector field to the real vector! */
     rte.vector = desc->arch.vector;
 
-    SET_DEST(rte, logical, cpu_mask_to_apicid(desc->arch.cpu_mask));
+    if ( cpumask_intersects(desc->arch.cpu_mask, TARGET_CPUS) )
+    {
+        cpumask_t *mask = this_cpu(scratch_cpumask);
+
+        cpumask_and(mask, desc->arch.cpu_mask, TARGET_CPUS);
+        SET_DEST(rte, logical, cpu_mask_to_apicid(mask));
+    }
+    else
+    {
+        gprintk(XENLOG_ERR, "IRQ%d: no target CPU (%*pb vs %*pb)\n",
+                irq, nr_cpu_ids, cpumask_bits(desc->arch.cpu_mask),
+                nr_cpu_ids, cpumask_bits(TARGET_CPUS));
+        desc->status |= IRQ_DISABLED;
+        rte.mask = 1;
+    }
 
     __ioapic_write_entry(apic, pin, 0, rte);
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -471,11 +471,13 @@ static int __assign_irq_vector(
      */
     static int current_vector = FIRST_DYNAMIC_VECTOR, current_offset = 0;
     int cpu, err, old_vector;
-    cpumask_t tmp_mask;
     vmask_t *irq_used_vectors = NULL;
 
     old_vector = irq_to_vector(irq);
-    if (old_vector > 0) {
+    if ( old_vector > 0 )
+    {
+        cpumask_t tmp_mask;
+
         cpumask_and(&tmp_mask, mask, &cpu_online_map);
         if (cpumask_intersects(&tmp_mask, desc->arch.cpu_mask)) {
             desc->arch.vector = old_vector;
@@ -498,7 +500,9 @@ static int __assign_irq_vector(
     else
         irq_used_vectors = irq_get_used_vector_mask(irq);
 
-    for_each_cpu(cpu, mask) {
+    for_each_cpu(cpu, mask)
+    {
+        const cpumask_t *vec_mask;
         int new_cpu;
         int vector, offset;
 
@@ -506,8 +510,7 @@ static int __assign_irq_vector(
         if (!cpu_online(cpu))
             continue;
 
-        cpumask_and(&tmp_mask, vector_allocation_cpumask(cpu),
-                    &cpu_online_map);
+        vec_mask = vector_allocation_cpumask(cpu);
 
         vector = current_vector;
         offset = current_offset;
@@ -528,7 +531,7 @@ next:
              && test_bit(vector, irq_used_vectors) )
             goto next;
 
-        for_each_cpu(new_cpu, &tmp_mask)
+        for_each_cpu(new_cpu, vec_mask)
            if (per_cpu(vector_irq, new_cpu)[vector] >= 0)
                 goto next;
         /* Found one! */
@@ -547,12 +550,12 @@ next:
                 release_old_vec(desc);
         }
 
-        trace_irq_mask(TRC_HW_IRQ_ASSIGN_VECTOR, irq, vector, &tmp_mask);
+        trace_irq_mask(TRC_HW_IRQ_ASSIGN_VECTOR, irq, vector, vec_mask);
 
-        for_each_cpu(new_cpu, &tmp_mask)
+        for_each_cpu(new_cpu, vec_mask)
             per_cpu(vector_irq, new_cpu)[vector] = irq;
         desc->arch.vector = vector;
-        cpumask_copy(desc->arch.cpu_mask, &tmp_mask);
+        cpumask_copy(desc->arch.cpu_mask, vec_mask);
 
         desc->arch.used = IRQ_USED;
         ASSERT((desc->arch.used_vectors == NULL)
@@ -783,6 +786,7 @@ unsigned int set_desc_affinity(struct ir
 
     cpumask_copy(desc->affinity, mask);
     cpumask_and(&dest_mask, mask, desc->arch.cpu_mask);
+    cpumask_and(&dest_mask, &dest_mask, &cpu_online_map);
 
     return cpu_mask_to_apicid(&dest_mask);
 }
--- a/xen/include/asm-x86/irq.h
+++ b/xen/include/asm-x86/irq.h
@@ -32,6 +32,12 @@ struct irq_desc;
 struct arch_irq_desc {
         s16 vector;      /* vector itself is only 8 bits, */
         s16 old_vector;  /* but we use -1 for unassigned  */
+        /*
+         * Except for high priority interrupts @cpu_mask may have bits set for
+         * offline CPUs.  Consumers need to be careful to mask this down to
+         * online ones as necessary.  There is supposed to always be a non-
+         * empty intersection with cpu_online_map.
+         */
         cpumask_var_t cpu_mask;
         cpumask_var_t old_cpu_mask;
         cpumask_var_t pending_mask;
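The rule the new comment documents can be condensed into a small
sketch: consumers must intersect the (possibly stale) allocation mask
with the online map before using it as a destination. Plain words model
cpumask_t; the values are made up.

/* Sketch, not Xen code. */
#include <stdio.h>

int main(void)
{
    unsigned long cpu_mask = 0x0c;       /* vector allocated on CPUs 2,3 */
    unsigned long cpu_online_map = 0x0b; /* CPU 2 is parked              */
    unsigned long dest = cpu_mask & cpu_online_map;

    if ( dest )
        printf("program destination from %#lx\n", dest);
    else
        printf("no valid target CPU - disable the IRQ\n");
    return 0;
}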
From patchwork Fri May 17 10:47:47 2019
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 10947735
Message-Id: <5CDE91530200007800230078@prv1-mh.provo.novell.com>
Date: Fri, 17 May 2019 04:47:47 -0600
From: "Jan Beulich"
To: "xen-devel"
Cc: Andrew Cooper, Wei Liu, Roger Pau Monne
Subject: [Xen-devel] [PATCH v3 06/15] x86/IRQ: fix locking around vector
 management

All of __{assign,bind,clear}_irq_vector() manipulate struct irq_desc
fields, and hence ought to be called with the descriptor lock held in
addition to vector_lock. This is currently the case for only
set_desc_affinity() (in the common case) and destroy_irq(), which also
clarifies what the nesting behavior between the locks has to be.
Reflect the new expectation by having these functions all take a
descriptor as parameter instead of an interrupt number.

Also take care of the two special cases of calls to
set_desc_affinity(): set_ioapic_affinity_irq() and VT-d's
dma_msi_set_affinity() get called directly as well, and in these cases
the descriptor locks hadn't got acquired till now. For
set_ioapic_affinity_irq() this means acquiring / releasing of the
IO-APIC lock can be plain spin_{,un}lock() then.

Drop one of the two leading underscores from all three functions at
the same time.

There's one case left where descriptors get manipulated with just
vector_lock held: setup_vector_irq() assumes its caller acquires
vector_lock, and hence can't itself acquire the descriptor locks
(wrong lock order). I don't currently see how to address this.

Signed-off-by: Jan Beulich
Reviewed-by: Kevin Tian [VT-d]
Reviewed-by: Roger Pau Monné
---
v3: Also drop one leading underscore from a comment. Re-base.
v2: Also adjust set_ioapic_affinity_irq() and VT-d's
    dma_msi_set_affinity().

--- a/xen/arch/x86/io_apic.c
+++ b/xen/arch/x86/io_apic.c
@@ -550,14 +550,14 @@ static void clear_IO_APIC (void)
 static void
 set_ioapic_affinity_irq(struct irq_desc *desc, const cpumask_t *mask)
 {
-    unsigned long flags;
     unsigned int dest;
     int pin, irq;
     struct irq_pin_list *entry;
 
     irq = desc->irq;
 
-    spin_lock_irqsave(&ioapic_lock, flags);
+    spin_lock(&ioapic_lock);
+
     dest = set_desc_affinity(desc, mask);
     if (dest != BAD_APICID) {
         if ( !x2apic_enabled )
@@ -580,8 +580,8 @@ set_ioapic_affinity_irq(struct irq_desc
             entry = irq_2_pin + entry->next;
         }
     }
-    spin_unlock_irqrestore(&ioapic_lock, flags);
 
+    spin_unlock(&ioapic_lock);
 }
 
 /*
@@ -674,16 +674,19 @@ void /*__init*/ setup_ioapic_dest(void)
     for (ioapic = 0; ioapic < nr_ioapics; ioapic++) {
         for (pin = 0; pin < nr_ioapic_entries[ioapic]; pin++) {
             struct irq_desc *desc;
+            unsigned long flags;
 
             irq_entry = find_irq_entry(ioapic, pin, mp_INT);
             if (irq_entry == -1)
                 continue;
             irq = pin_2_irq(irq_entry, ioapic, pin);
             desc = irq_to_desc(irq);
+
+            spin_lock_irqsave(&desc->lock, flags);
             BUG_ON(!cpumask_intersects(desc->arch.cpu_mask, &cpu_online_map));
             set_ioapic_affinity_irq(desc, desc->arch.cpu_mask);
+            spin_unlock_irqrestore(&desc->lock, flags);
         }
-
     }
 }
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -27,6 +27,7 @@
 #include
 
 static int parse_irq_vector_map_param(const char *s);
+static void _clear_irq_vector(struct irq_desc *desc);
 
 /* opt_noirqbalance: If true, software IRQ balancing/affinity is disabled. */
 bool __read_mostly opt_noirqbalance;
@@ -136,13 +137,12 @@ static void trace_irq_mask(uint32_t even
     trace_var(event, 1, sizeof(d), &d);
 }
 
-static int __init __bind_irq_vector(int irq, int vector, const cpumask_t *cpu_mask)
+static int __init _bind_irq_vector(struct irq_desc *desc, int vector,
+                                   const cpumask_t *cpu_mask)
 {
     cpumask_t online_mask;
     int cpu;
-    struct irq_desc *desc = irq_to_desc(irq);
 
-    BUG_ON((unsigned)irq >= nr_irqs);
     BUG_ON((unsigned)vector >= NR_VECTORS);
 
     cpumask_and(&online_mask, cpu_mask, &cpu_online_map);
@@ -153,9 +153,9 @@ static int __init __bind_irq_vector(int
         return 0;
     if ( desc->arch.vector != IRQ_VECTOR_UNASSIGNED )
         return -EBUSY;
-    trace_irq_mask(TRC_HW_IRQ_BIND_VECTOR, irq, vector, &online_mask);
+    trace_irq_mask(TRC_HW_IRQ_BIND_VECTOR, desc->irq, vector, &online_mask);
     for_each_cpu(cpu, &online_mask)
-        per_cpu(vector_irq, cpu)[vector] = irq;
+        per_cpu(vector_irq, cpu)[vector] = desc->irq;
     desc->arch.vector = vector;
     cpumask_copy(desc->arch.cpu_mask, &online_mask);
     if ( desc->arch.used_vectors )
@@ -169,12 +169,18 @@ static int __init __bind_irq_vector(int
 
 int __init bind_irq_vector(int irq, int vector, const cpumask_t *cpu_mask)
 {
+    struct irq_desc *desc = irq_to_desc(irq);
     unsigned long flags;
     int ret;
 
-    spin_lock_irqsave(&vector_lock, flags);
-    ret = __bind_irq_vector(irq, vector, cpu_mask);
-    spin_unlock_irqrestore(&vector_lock, flags);
+    BUG_ON((unsigned)irq >= nr_irqs);
+
+    spin_lock_irqsave(&desc->lock, flags);
+    spin_lock(&vector_lock);
+    ret = _bind_irq_vector(desc, vector, cpu_mask);
+    spin_unlock(&vector_lock);
+    spin_unlock_irqrestore(&desc->lock, flags);
+
     return ret;
 }
 
@@ -259,18 +265,20 @@ void destroy_irq(unsigned int irq)
 
     spin_lock_irqsave(&desc->lock, flags);
     desc->handler = &no_irq_type;
-    clear_irq_vector(irq);
+    spin_lock(&vector_lock);
+    _clear_irq_vector(desc);
+    spin_unlock(&vector_lock);
     desc->arch.used_vectors = NULL;
     spin_unlock_irqrestore(&desc->lock, flags);
 
     xfree(action);
 }
 
-static void __clear_irq_vector(int irq)
+static void _clear_irq_vector(struct irq_desc *desc)
 {
-    int cpu, vector, old_vector;
+    unsigned int cpu;
+    int vector, old_vector, irq = desc->irq;
     cpumask_t tmp_mask;
-    struct irq_desc *desc = irq_to_desc(irq);
 
     BUG_ON(!desc->arch.vector);
 
@@ -316,11 +324,14 @@ static void __clear_irq_vector(int irq)
 
 void clear_irq_vector(int irq)
 {
+    struct irq_desc *desc = irq_to_desc(irq);
     unsigned long flags;
 
-    spin_lock_irqsave(&vector_lock, flags);
-    __clear_irq_vector(irq);
-    spin_unlock_irqrestore(&vector_lock, flags);
+    spin_lock_irqsave(&desc->lock, flags);
+    spin_lock(&vector_lock);
+    _clear_irq_vector(desc);
+    spin_unlock(&vector_lock);
+    spin_unlock_irqrestore(&desc->lock, flags);
 }
 
 int irq_to_vector(int irq)
@@ -455,8 +466,7 @@ static vmask_t *irq_get_used_vector_mask
     return ret;
 }
 
-static int __assign_irq_vector(
-    int irq, struct irq_desc *desc, const cpumask_t *mask)
+static int _assign_irq_vector(struct irq_desc *desc, const cpumask_t *mask)
 {
     /*
      * NOTE! The local APIC isn't very good at handling
@@ -470,7 +480,8 @@ static int __assign_irq_vector(
      * 0x80, because int 0x80 is hm, kind of importantish. ;)
      */
     static int current_vector = FIRST_DYNAMIC_VECTOR, current_offset = 0;
-    int cpu, err, old_vector;
+    unsigned int cpu;
+    int err, old_vector, irq = desc->irq;
     vmask_t *irq_used_vectors = NULL;
 
     old_vector = irq_to_vector(irq);
@@ -583,8 +594,12 @@ int assign_irq_vector(int irq, const cpu
 
     BUG_ON(irq >= nr_irqs || irq <0);
 
-    spin_lock_irqsave(&vector_lock, flags);
-    ret = __assign_irq_vector(irq, desc, mask ?: TARGET_CPUS);
+    spin_lock_irqsave(&desc->lock, flags);
+
+    spin_lock(&vector_lock);
+    ret = _assign_irq_vector(desc, mask ?: TARGET_CPUS);
+    spin_unlock(&vector_lock);
+
     if ( !ret )
     {
         ret = desc->arch.vector;
@@ -593,7 +608,8 @@ int assign_irq_vector(int irq, const cpu
         else
             cpumask_setall(desc->affinity);
     }
-    spin_unlock_irqrestore(&vector_lock, flags);
+
+    spin_unlock_irqrestore(&desc->lock, flags);
 
     return ret;
 }
@@ -767,7 +783,6 @@ void irq_complete_move(struct irq_desc *
 
 unsigned int set_desc_affinity(struct irq_desc *desc, const cpumask_t *mask)
 {
-    unsigned int irq;
     int ret;
     unsigned long flags;
     cpumask_t dest_mask;
@@ -775,10 +790,8 @@ unsigned int set_desc_affinity(struct ir
     if (!cpumask_intersects(mask, &cpu_online_map))
         return BAD_APICID;
 
-    irq = desc->irq;
-
     spin_lock_irqsave(&vector_lock, flags);
-    ret = __assign_irq_vector(irq, desc, mask);
+    ret = _assign_irq_vector(desc, mask);
     spin_unlock_irqrestore(&vector_lock, flags);
 
     if (ret < 0)
@@ -2442,7 +2455,7 @@ void fixup_irqs(const cpumask_t *mask, b
 
         /*
          * In order for the affinity adjustment below to be successful, we
-         * need __assign_irq_vector() to succeed. This in particular means
+         * need _assign_irq_vector() to succeed. This in particular means
          * clearing desc->arch.move_in_progress if this would otherwise
         * prevent the function from succeeding. Since there's no way for the
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -2134,11 +2134,16 @@ static void adjust_irq_affinity(struct a
     unsigned int node = rhsa ? pxm_to_node(rhsa->proximity_domain)
                              : NUMA_NO_NODE;
     const cpumask_t *cpumask = &cpu_online_map;
+    struct irq_desc *desc;
 
     if ( node < MAX_NUMNODES && node_online(node) &&
          cpumask_intersects(&node_to_cpumask(node), cpumask) )
         cpumask = &node_to_cpumask(node);
-    dma_msi_set_affinity(irq_to_desc(drhd->iommu->msi.irq), cpumask);
+
+    desc = irq_to_desc(drhd->iommu->msi.irq);
+    spin_lock_irq(&desc->lock);
+    dma_msi_set_affinity(desc, cpumask);
+    spin_unlock_irq(&desc->lock);
 }
 
 static int adjust_vtd_irq_affinities(void)
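A stand-alone sketch of the lock nesting the patch establishes: the
per-IRQ descriptor lock is the outer lock, vector_lock the inner one.
pthread mutexes stand in for Xen's spinlocks; this only models the
ordering, not the real code.

/* Sketch, not Xen code; build with -lpthread. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t desc_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t vector_lock = PTHREAD_MUTEX_INITIALIZER;

static void clear_irq_vector(void)
{
    pthread_mutex_lock(&desc_lock);    /* outer: protects the descriptor */
    pthread_mutex_lock(&vector_lock);  /* inner: protects vector tables  */
    /* _clear_irq_vector(desc) would run here */
    pthread_mutex_unlock(&vector_lock);
    pthread_mutex_unlock(&desc_lock);
}

int main(void)
{
    clear_irq_vector();
    puts("lock order: desc->lock, then vector_lock");
    return 0;
}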
With "noirqbalance" in effect, pirq_guest_bind() so far would have left them alone, resulting in a non- working interrupt. Signed-off-by: Jan Beulich Reviewed-by: Roger Pau Monné Acked-by: Andrew Cooper --- v3: New. --- I've not observed this problem in practice - the change is just the result of code inspection after having noticed action-less IRQs in 'i' debug key output pointing at all parked/offline CPUs. --- a/xen/arch/x86/irq.c +++ b/xen/arch/x86/irq.c @@ -1683,9 +1683,27 @@ int pirq_guest_bind(struct vcpu *v, stru desc->status |= IRQ_GUEST; - /* Attempt to bind the interrupt target to the correct CPU. */ - if ( !opt_noirqbalance && (desc->handler->set_affinity != NULL) ) - desc->handler->set_affinity(desc, cpumask_of(v->processor)); + /* + * Attempt to bind the interrupt target to the correct (or at least + * some online) CPU. + */ + if ( desc->handler->set_affinity ) + { + const cpumask_t *affinity = NULL; + + if ( !opt_noirqbalance ) + affinity = cpumask_of(v->processor); + else if ( !cpumask_intersects(desc->affinity, &cpu_online_map) ) + { + cpumask_setall(desc->affinity); + affinity = &cpumask_all; + } + else if ( !cpumask_intersects(desc->arch.cpu_mask, + &cpu_online_map) ) + affinity = desc->affinity; + if ( affinity ) + desc->handler->set_affinity(desc, affinity); + } desc->status &= ~IRQ_DISABLED; desc->handler->startup(desc); From patchwork Fri May 17 10:49:20 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Jan Beulich X-Patchwork-Id: 10947741 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 5ADE21395 for ; Fri, 17 May 2019 10:51:06 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 48CBB26E47 for ; Fri, 17 May 2019 10:51:06 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 3D37526E4F; Fri, 17 May 2019 10:51:06 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id EB6B626E47 for ; Fri, 17 May 2019 10:51:05 +0000 (UTC) Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1hRaQa-0005u5-3U; Fri, 17 May 2019 10:49:28 +0000 Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1hRaQY-0005rZ-Bg for xen-devel@lists.xenproject.org; Fri, 17 May 2019 10:49:26 +0000 X-Inumbo-ID: 6ddd7446-7891-11e9-a68e-9bcd808cf7cf Received: from prv1-mh.provo.novell.com (unknown [137.65.248.33]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS id 6ddd7446-7891-11e9-a68e-9bcd808cf7cf; Fri, 17 May 2019 10:49:22 +0000 (UTC) Received: from INET-PRV1-MTA by prv1-mh.provo.novell.com with Novell_GroupWise; Fri, 17 May 2019 04:49:21 -0600 Message-Id: <5CDE91B002000078002300AB@prv1-mh.provo.novell.com> X-Mailer: Novell GroupWise Internet Agent 18.1.0 Date: Fri, 17 May 2019 04:49:20 -0600 From: "Jan Beulich" To: 
"xen-devel" References: <5CC6DD090200007800229E80@prv1-mh.provo.novell.com> <5CDE8F5B020000780023005F@prv1-mh.provo.novell.com> In-Reply-To: <5CDE8F5B020000780023005F@prv1-mh.provo.novell.com> Mime-Version: 1.0 Content-Disposition: inline Subject: [Xen-devel] [PATCH v3 08/15] x86/IRQs: correct/tighten vector check in _clear_irq_vector() X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Andrew Cooper , Wei Liu , Roger Pau Monne Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" X-Virus-Scanned: ClamAV using ClamSMTP If any particular value was to be checked against, it would need to be IRQ_VECTOR_UNASSIGNED. Reported-by: Roger Pau Monné Be more strict though and use valid_irq_vector() instead. Take the opportunity and also convert local variables to unsigned int. Signed-off-by: Jan Beulich Reviewed-by: Roger Pau Monné Acked-by: Andrew Cooper --- v2: New. --- a/xen/arch/x86/irq.c +++ b/xen/arch/x86/irq.c @@ -276,14 +276,13 @@ void destroy_irq(unsigned int irq) static void _clear_irq_vector(struct irq_desc *desc) { - unsigned int cpu; - int vector, old_vector, irq = desc->irq; + unsigned int cpu, old_vector, irq = desc->irq; + unsigned int vector = desc->arch.vector; cpumask_t tmp_mask; - BUG_ON(!desc->arch.vector); + BUG_ON(!valid_irq_vector(vector)); /* Always clear desc->arch.vector */ - vector = desc->arch.vector; cpumask_and(&tmp_mask, desc->arch.cpu_mask, &cpu_online_map); for_each_cpu(cpu, &tmp_mask) { From patchwork Fri May 17 10:49:58 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Jan Beulich X-Patchwork-Id: 10947743 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id CDD021395 for ; Fri, 17 May 2019 10:51:22 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id BB84626E47 for ; Fri, 17 May 2019 10:51:22 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id AFDC726E4F; Fri, 17 May 2019 10:51:22 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 6753926E4A for ; Fri, 17 May 2019 10:51:22 +0000 (UTC) Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1hRaR9-0006AB-FS; Fri, 17 May 2019 10:50:03 +0000 Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1hRaR8-00062y-9k for xen-devel@lists.xenproject.org; Fri, 17 May 2019 10:50:02 +0000 X-Inumbo-ID: 838b4494-7891-11e9-9eaa-d75cfc801f85 Received: from prv1-mh.provo.novell.com (unknown [137.65.248.33]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS id 838b4494-7891-11e9-9eaa-d75cfc801f85; Fri, 17 May 2019 10:49:59 +0000 (UTC) Received: from INET-PRV1-MTA by prv1-mh.provo.novell.com with 
From patchwork Fri May 17 10:49:58 2019
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 10947743
Message-Id: <5CDE91D602000078002300AE@prv1-mh.provo.novell.com>
Date: Fri, 17 May 2019 04:49:58 -0600
From: "Jan Beulich"
To: "xen-devel"
Cc: Andrew Cooper, Wei Liu, Roger Pau Monne
Subject: [Xen-devel] [PATCH v3 09/15] x86/IRQ: make fixup_irqs() skip
 unconnected internally used interrupts

Since the "Cannot set affinity ..." warning is a one time one, avoid
triggering it already at boot time when parking secondary threads and
the serial console uses a (still unconnected at that time) PCI IRQ.

Signed-off-by: Jan Beulich
Reviewed-by: Roger Pau Monné
Acked-by: Andrew Cooper

--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -2452,8 +2452,20 @@ void fixup_irqs(const cpumask_t *mask, b
         vector = irq_to_vector(irq);
         if ( vector >= FIRST_HIPRIORITY_VECTOR &&
              vector <= LAST_HIPRIORITY_VECTOR )
+        {
             cpumask_and(desc->arch.cpu_mask, desc->arch.cpu_mask, mask);
 
+            /*
+             * This can in particular happen when parking secondary threads
+             * during boot and when the serial console wants to use a PCI IRQ.
+             */
+            if ( desc->handler == &no_irq_type )
+            {
+                spin_unlock(&desc->lock);
+                continue;
+            }
+        }
+
         if ( desc->arch.move_cleanup_count )
         {
             /* The cleanup IPI may have got sent while we were still online. */
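A sketch of the early-out being added: a descriptor still carrying the
no_irq_type handler is not yet connected, so the fixup loop skips it
instead of warning. The struct below is a stub, not Xen's definition.

/* Sketch, not Xen code. */
#include <stdio.h>

struct hw_interrupt_type { const char *typename; };
static const struct hw_interrupt_type no_irq_type = { "none" };

int main(void)
{
    const struct hw_interrupt_type *handler = &no_irq_type;

    if ( handler == &no_irq_type )
    {
        puts("IRQ not yet connected - skip affinity fixup");
        return 0;
    }
    puts("would adjust affinity here");
    return 0;
}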
From patchwork Fri May 17 10:51:02 2019
X-Patchwork-Id: 10947747
Message-Id: <5CDE921602000078002300B4@prv1-mh.provo.novell.com>
Date: Fri, 17 May 2019 04:51:02 -0600
From: "Jan Beulich"
To: "xen-devel"
Subject: [Xen-devel] [PATCH v3 11/15] x86/IRQ: simplify and rename pirq_acktype()
Cc: Andrew Cooper, Wei Liu, Roger Pau Monne

Its only caller already holds the IRQ descriptor, so there's no need
for the function to re-obtain it. As a result the leading "p" of its
name is no longer appropriate and hence gets dropped.

Signed-off-by: Jan Beulich
Reviewed-by: Roger Pau Monné
Acked-by: Andrew Cooper
---
v2: New.
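The interface change in this patch (diff below) follows a common
refactoring pattern: when the sole caller has already resolved the
object, pass it down instead of re-deriving it from (domain, pirq). A
minimal sketch of the resulting call shape; lookup() and
irq_acktype_model() are hypothetical stand-ins, not Xen functions:

    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    struct irq_desc { int irq; const char *typename; };

    static struct irq_desc table[] = { { 0, "none" }, { 5, "XT-PIC" } };

    /* Stand-in for the domain_pirq_to_irq()/irq_to_desc() chain. */
    static struct irq_desc *lookup(int pirq)
    {
        return (pirq >= 0 && pirq < 2) ? &table[pirq] : NULL;
    }

    /* After the change: classify purely from the descriptor handed in. */
    static int irq_acktype_model(const struct irq_desc *desc)
    {
        return strcmp(desc->typename, "XT-PIC") == 0;
    }

    int main(void)
    {
        struct irq_desc *desc = lookup(1);   /* caller resolves once... */

        if ( desc != NULL )                  /* ...and passes desc down */
            printf("IRQ%d needs unmask-style ack: %d\n",
                   desc->irq, irq_acktype_model(desc));
        return 0;
    }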
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -1556,17 +1556,8 @@ int pirq_guest_unmask(struct domain *d)
     return 0;
 }
 
-static int pirq_acktype(struct domain *d, int pirq)
+static int irq_acktype(const struct irq_desc *desc)
 {
-    struct irq_desc *desc;
-    int irq;
-
-    irq = domain_pirq_to_irq(d, pirq);
-    if ( irq <= 0 )
-        return ACKTYPE_NONE;
-
-    desc = irq_to_desc(irq);
-
     if ( desc->handler == &no_irq_type )
         return ACKTYPE_NONE;
 
@@ -1597,7 +1588,8 @@ static int pirq_acktype(struct domain *d
     if ( !strcmp(desc->handler->typename, "XT-PIC") )
         return ACKTYPE_UNMASK;
 
-    printk("Unknown PIC type '%s' for IRQ %d\n", desc->handler->typename, irq);
+    printk("Unknown PIC type '%s' for IRQ%d\n",
+           desc->handler->typename, desc->irq);
     BUG();
 
     return 0;
@@ -1674,7 +1666,7 @@ int pirq_guest_bind(struct vcpu *v, stru
         action->nr_guests = 0;
         action->in_flight = 0;
         action->shareable = will_share;
-        action->ack_type = pirq_acktype(v->domain, pirq->pirq);
+        action->ack_type = irq_acktype(desc);
         init_timer(&action->eoi_timer, irq_guest_eoi_timer_fn, desc, 0);
         desc->status |= IRQ_GUEST;

From patchwork Fri May 17 10:51:50 2019
X-Patchwork-Id: 10947757
Message-Id: <5CDE924602000078002300B7@prv1-mh.provo.novell.com>
Date: Fri, 17 May 2019 04:51:50 -0600
From: "Jan Beulich"
To: "xen-devel"
Subject: [Xen-devel] [PATCH v3 12/15] x86/IRQ: add explicit tracing-enabled check to trace_irq_mask()
Cc: George Dunlap, Andrew Cooper, Wei Liu, Roger Pau Monne

The setup for calling trace_var() (which itself checks tb_init_done)
is non-negligible, and hence a separate outermost check is warranted.

Signed-off-by: Jan Beulich
Reviewed-by: Roger Pau Monné
---
v3: New.

--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -121,8 +121,8 @@ static void release_old_vec(struct irq_d
     }
 }
 
-static void trace_irq_mask(uint32_t event, int irq, int vector,
-                           const cpumask_t *mask)
+static void _trace_irq_mask(uint32_t event, int irq, int vector,
+                            const cpumask_t *mask)
 {
     struct {
         unsigned int irq:16, vec:16;
@@ -137,6 +137,13 @@ static void trace_irq_mask(uint32_t even
     trace_var(event, 1, sizeof(d), &d);
 }
 
+static inline void trace_irq_mask(uint32_t event, int irq, int vector,
+                                  const cpumask_t *mask)
+{
+    if ( unlikely(tb_init_done) )
+        _trace_irq_mask(event, irq, vector, mask);
+}
+
 static int __init _bind_irq_vector(struct irq_desc *desc, int vector,
                                    const cpumask_t *cpu_mask)
 {
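The shape of the change above — a cheap inline guard in front of an
out-of-line tracing helper, so the common "tracing off" path costs a
single predictable branch — generalizes beyond Xen. A stand-alone
sketch, with tracing_enabled standing in for tb_init_done and without
Xen's unlikely() annotation:

    #include <stdbool.h>
    #include <stdio.h>

    static bool tracing_enabled;   /* stand-in for Xen's tb_init_done */

    /* The out-of-line helper carries the costly packing/copying work. */
    static void _trace_model(unsigned int event, unsigned int irq)
    {
        printf("event %#x irq %u\n", event, irq);
    }

    /* The inline wrapper keeps callers from paying any setup cost
     * while tracing is disabled. */
    static inline void trace_model(unsigned int event, unsigned int irq)
    {
        if ( tracing_enabled )
            _trace_model(event, irq);
    }

    int main(void)
    {
        trace_model(0x1, 16);      /* no output: tracing is off */
        tracing_enabled = true;
        trace_model(0x1, 16);      /* now traced */
        return 0;
    }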
From patchwork Fri May 17 10:52:32 2019
X-Patchwork-Id: 10947759
Message-Id: <5CDE927002000078002300BA@prv1-mh.provo.novell.com>
Date: Fri, 17 May 2019 04:52:32 -0600
From: "Jan Beulich"
To: "xen-devel"
Subject: [Xen-devel] [PATCH v3 13/15] x86/IRQ: tighten vector checks
Cc: Andrew Cooper, Wei Liu, Roger Pau Monne

Use valid_irq_vector() rather than "> 0". Also replace an open-coded
use of IRQ_VECTOR_UNASSIGNED.

Signed-off-by: Jan Beulich
Reviewed-by: Roger Pau Monné
Acked-by: Andrew Cooper
---
v3: New.

--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -342,7 +342,7 @@ void clear_irq_vector(int irq)
 
 int irq_to_vector(int irq)
 {
-    int vector = -1;
+    int vector = IRQ_VECTOR_UNASSIGNED;
 
     BUG_ON(irq >= nr_irqs || irq < 0);
 
@@ -452,15 +452,18 @@ static vmask_t *irq_get_used_vector_mask
             int vector;
 
             vector = irq_to_vector(irq);
-            if ( vector > 0 )
+            if ( valid_irq_vector(vector) )
             {
-                printk(XENLOG_INFO "IRQ %d already assigned vector %d\n",
+                printk(XENLOG_INFO "IRQ%d already assigned vector %02x\n",
                        irq, vector);
 
                 ASSERT(!test_bit(vector, ret));
 
                 set_bit(vector, ret);
             }
+            else if ( vector != IRQ_VECTOR_UNASSIGNED )
+                printk(XENLOG_WARNING "IRQ%d mapped to bogus vector %02x\n",
+                       irq, vector);
         }
     }
     else if ( IO_APIC_IRQ(irq) &&
@@ -491,7 +494,7 @@ static int _assign_irq_vector(struct irq
     vmask_t *irq_used_vectors = NULL;
 
     old_vector = irq_to_vector(irq);
-    if ( old_vector > 0 )
+    if ( valid_irq_vector(old_vector) )
     {
         cpumask_t tmp_mask;
 
@@ -555,7 +558,7 @@ next:
         current_vector = vector;
         current_offset = offset;
 
-        if ( old_vector > 0 )
+        if ( valid_irq_vector(old_vector) )
        {
             cpumask_and(desc->arch.old_cpu_mask, desc->arch.cpu_mask,
                         &cpu_online_map);
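The practical difference between "> 0" and the range check is that
small positive values pass the former yet are not valid dynamically
assignable vectors. A stand-alone sketch of the helper introduced
earlier in this series; the numeric bounds below are illustrative
assumptions, not the values from Xen's headers:

    #include <stdbool.h>
    #include <stdio.h>

    /* Illustrative bounds only; Xen's real vector layout differs. */
    #define FIRST_DYNAMIC_VECTOR   0x20
    #define LAST_HIPRIORITY_VECTOR 0xf8
    #define IRQ_VECTOR_UNASSIGNED  (-1)

    static bool valid_irq_vector(int vector)
    {
        return vector >= FIRST_DYNAMIC_VECTOR &&
               vector <= LAST_HIPRIORITY_VECTOR;
    }

    int main(void)
    {
        /* 0x01..0x1f pass a plain "> 0" test but are not usable. */
        const int samples[] = { IRQ_VECTOR_UNASSIGNED, 0x01, 0x20, 0xf8 };
        unsigned int i;

        for ( i = 0; i < sizeof(samples) / sizeof(samples[0]); ++i )
            printf("vector %3d: \">0\" says %d, valid_irq_vector() says %d\n",
                   samples[i], samples[i] > 0, valid_irq_vector(samples[i]));
        return 0;
    }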
From patchwork Fri May 17 10:52:54 2019
X-Patchwork-Id: 10947761
Message-Id: <5CDE928602000078002300BD@prv1-mh.provo.novell.com>
Date: Fri, 17 May 2019 04:52:54 -0600
From: "Jan Beulich"
To: "xen-devel"
Subject: [Xen-devel] [PATCH v3 14/15] x86/IRQ: eliminate some on-stack cpumask_t instances
Cc: Andrew Cooper, Wei Liu, Roger Pau Monne

Use scratch_cpumask where possible, to avoid creating these possibly
large stack objects. We can't use it in _assign_irq_vector() and
set_desc_affinity(), as these get called in IRQ context.

Signed-off-by: Jan Beulich
Reviewed-by: Roger Pau Monné
---
v3: New.
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -285,14 +285,15 @@ static void _clear_irq_vector(struct irq
 {
     unsigned int cpu, old_vector, irq = desc->irq;
     unsigned int vector = desc->arch.vector;
-    cpumask_t tmp_mask;
+    cpumask_t *tmp_mask = this_cpu(scratch_cpumask);
 
     BUG_ON(!valid_irq_vector(vector));
 
     /* Always clear desc->arch.vector */
-    cpumask_and(&tmp_mask, desc->arch.cpu_mask, &cpu_online_map);
+    cpumask_and(tmp_mask, desc->arch.cpu_mask, &cpu_online_map);
 
-    for_each_cpu(cpu, &tmp_mask) {
+    for_each_cpu(cpu, tmp_mask)
+    {
         ASSERT( per_cpu(vector_irq, cpu)[vector] == irq );
         per_cpu(vector_irq, cpu)[vector] = ~irq;
     }
@@ -308,16 +309,17 @@ static void _clear_irq_vector(struct irq
 
     desc->arch.used = IRQ_UNUSED;
 
-    trace_irq_mask(TRC_HW_IRQ_CLEAR_VECTOR, irq, vector, &tmp_mask);
+    trace_irq_mask(TRC_HW_IRQ_CLEAR_VECTOR, irq, vector, tmp_mask);
 
     if ( likely(!desc->arch.move_in_progress) )
         return;
 
     /* If we were in motion, also clear desc->arch.old_vector */
     old_vector = desc->arch.old_vector;
-    cpumask_and(&tmp_mask, desc->arch.old_cpu_mask, &cpu_online_map);
+    cpumask_and(tmp_mask, desc->arch.old_cpu_mask, &cpu_online_map);
 
-    for_each_cpu(cpu, &tmp_mask) {
+    for_each_cpu(cpu, tmp_mask)
+    {
         ASSERT( per_cpu(vector_irq, cpu)[old_vector] == irq );
         TRACE_3D(TRC_HW_IRQ_MOVE_FINISH, irq, old_vector, cpu);
         per_cpu(vector_irq, cpu)[old_vector] = ~irq;
@@ -1159,7 +1161,6 @@ static void irq_guest_eoi_timer_fn(void
     struct irq_desc *desc = data;
     unsigned int i, irq = desc - irq_desc;
     irq_guest_action_t *action;
-    cpumask_t cpu_eoi_map;
 
     spin_lock_irq(&desc->lock);
 
@@ -1189,14 +1190,18 @@ static void irq_guest_eoi_timer_fn(void
 
     switch ( action->ack_type )
     {
+        cpumask_t *cpu_eoi_map;
+
     case ACKTYPE_UNMASK:
         if ( desc->handler->end )
             desc->handler->end(desc, 0);
         break;
+
     case ACKTYPE_EOI:
-        cpumask_copy(&cpu_eoi_map, action->cpu_eoi_map);
+        cpu_eoi_map = this_cpu(scratch_cpumask);
+        cpumask_copy(cpu_eoi_map, action->cpu_eoi_map);
         spin_unlock_irq(&desc->lock);
-        on_selected_cpus(&cpu_eoi_map, set_eoi_ready, desc, 0);
+        on_selected_cpus(cpu_eoi_map, set_eoi_ready, desc, 0);
         return;
     }
 
@@ -2437,7 +2442,7 @@ void fixup_irqs(const cpumask_t *mask, b
     {
         bool break_affinity = false, set_affinity = true;
         unsigned int vector;
-        cpumask_t affinity;
+        cpumask_t *affinity = this_cpu(scratch_cpumask);
 
         if ( irq == 2 )
             continue;
@@ -2468,9 +2473,9 @@ void fixup_irqs(const cpumask_t *mask, b
         if ( desc->arch.move_cleanup_count )
         {
             /* The cleanup IPI may have got sent while we were still online. */
-            cpumask_andnot(&affinity, desc->arch.old_cpu_mask,
+            cpumask_andnot(affinity, desc->arch.old_cpu_mask,
                            &cpu_online_map);
-            desc->arch.move_cleanup_count -= cpumask_weight(&affinity);
+            desc->arch.move_cleanup_count -= cpumask_weight(affinity);
             if ( !desc->arch.move_cleanup_count )
                 release_old_vec(desc);
         }
@@ -2497,10 +2502,10 @@ void fixup_irqs(const cpumask_t *mask, b
         {
             unsigned int cpu;
 
-            cpumask_and(&affinity, desc->arch.old_cpu_mask, &cpu_online_map);
+            cpumask_and(affinity, desc->arch.old_cpu_mask, &cpu_online_map);
 
             spin_lock(&vector_lock);
-            for_each_cpu(cpu, &affinity)
+            for_each_cpu(cpu, affinity)
                 per_cpu(vector_irq, cpu)[desc->arch.old_vector] = ~irq;
             spin_unlock(&vector_lock);
 
@@ -2511,23 +2516,23 @@ void fixup_irqs(const cpumask_t *mask, b
         if ( !cpumask_intersects(mask, desc->affinity) )
         {
             break_affinity = true;
-            cpumask_setall(&affinity);
+            cpumask_setall(affinity);
         }
         else
-            cpumask_copy(&affinity, desc->affinity);
+            cpumask_copy(affinity, desc->affinity);
 
         if ( desc->handler->disable )
             desc->handler->disable(desc);
 
         if ( desc->handler->set_affinity )
-            desc->handler->set_affinity(desc, &affinity);
+            desc->handler->set_affinity(desc, affinity);
         else if ( !(warned++) )
             set_affinity = false;
 
         if ( desc->handler->enable )
             desc->handler->enable(desc);
 
-        cpumask_copy(&affinity, desc->affinity);
+        cpumask_copy(affinity, desc->affinity);
 
         spin_unlock(&desc->lock);
 
@@ -2538,7 +2543,7 @@ void fixup_irqs(const cpumask_t *mask, b
             printk("Cannot set affinity for IRQ%u\n", irq);
         else if ( break_affinity )
             printk("Broke affinity for IRQ%u, new: %*pb\n",
-                   irq, nr_cpu_ids, &affinity);
+                   irq, nr_cpu_ids, affinity);
     }
 
     /* That doesn't seem sufficient. Give it 1ms. */
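For a sense of the scale motivating this patch: a cpumask_t is a bitmap
of NR_CPUS bits, so with large CPU counts each on-stack instance costs
a noticeable slice of a small hypervisor stack, which is what makes the
per-CPU scratch mask attractive. A stand-alone sketch; NR_CPUS=4096 is
an illustrative assumption, not a value taken from this series:

    #include <stdio.h>

    #define NR_CPUS 4096
    #define BITS_PER_LONG (8 * sizeof(unsigned long))

    typedef struct {
        unsigned long bits[(NR_CPUS + BITS_PER_LONG - 1) / BITS_PER_LONG];
    } cpumask_t;

    int main(void)
    {
        /* 4096 CPUs -> 512 bytes per on-stack mask on LP64. */
        printf("sizeof(cpumask_t) with NR_CPUS=%d: %zu bytes\n",
               NR_CPUS, sizeof(cpumask_t));
        return 0;
    }

The commit message notes the scratch mask is off limits for functions
reached in IRQ context; a plausible reading is that the interrupted
context may itself be using that very per-CPU mask.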
From patchwork Fri May 17 10:53:23 2019
X-Patchwork-Id: 10947763
Message-Id: <5CDE92A302000078002300F9@prv1-mh.provo.novell.com>
Date: Fri, 17 May 2019 04:53:23 -0600
From: "Jan Beulich"
To: "xen-devel"
Subject: [Xen-devel] [PATCH v3 15/15] x86/IRQ: move {,_}clear_irq_vector()
Cc: Andrew Cooper, Wei Liu, Roger Pau Monne

This is largely to drop a forward declaration. There's one functional
change: clear_irq_vector() gets marked __init, as its only caller is
check_timer(). Beyond this only a few stray blanks get removed.

Signed-off-by: Jan Beulich
Acked-by: Andrew Cooper
---
v3: New.

--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -27,7 +27,6 @@
 #include
 
 static int parse_irq_vector_map_param(const char *s);
-static void _clear_irq_vector(struct irq_desc *desc);
 
 /* opt_noirqbalance: If true, software IRQ balancing/affinity is disabled. */
 bool __read_mostly opt_noirqbalance;
 
@@ -191,6 +190,67 @@ int __init bind_irq_vector(int irq, int
     return ret;
 }
 
+static void _clear_irq_vector(struct irq_desc *desc)
+{
+    unsigned int cpu, old_vector, irq = desc->irq;
+    unsigned int vector = desc->arch.vector;
+    cpumask_t *tmp_mask = this_cpu(scratch_cpumask);
+
+    BUG_ON(!valid_irq_vector(vector));
+
+    /* Always clear desc->arch.vector */
+    cpumask_and(tmp_mask, desc->arch.cpu_mask, &cpu_online_map);
+
+    for_each_cpu(cpu, tmp_mask)
+    {
+        ASSERT(per_cpu(vector_irq, cpu)[vector] == irq);
+        per_cpu(vector_irq, cpu)[vector] = ~irq;
+    }
+
+    desc->arch.vector = IRQ_VECTOR_UNASSIGNED;
+    cpumask_clear(desc->arch.cpu_mask);
+
+    if ( desc->arch.used_vectors )
+    {
+        ASSERT(test_bit(vector, desc->arch.used_vectors));
+        clear_bit(vector, desc->arch.used_vectors);
+    }
+
+    desc->arch.used = IRQ_UNUSED;
+
+    trace_irq_mask(TRC_HW_IRQ_CLEAR_VECTOR, irq, vector, tmp_mask);
+
+    if ( likely(!desc->arch.move_in_progress) )
+        return;
+
+    /* If we were in motion, also clear desc->arch.old_vector */
+    old_vector = desc->arch.old_vector;
+    cpumask_and(tmp_mask, desc->arch.old_cpu_mask, &cpu_online_map);
+
+    for_each_cpu(cpu, tmp_mask)
+    {
+        ASSERT(per_cpu(vector_irq, cpu)[old_vector] == irq);
+        TRACE_3D(TRC_HW_IRQ_MOVE_FINISH, irq, old_vector, cpu);
+        per_cpu(vector_irq, cpu)[old_vector] = ~irq;
+    }
+
+    release_old_vec(desc);
+
+    desc->arch.move_in_progress = 0;
+}
+
+void __init clear_irq_vector(int irq)
+{
+    struct irq_desc *desc = irq_to_desc(irq);
+    unsigned long flags;
+
+    spin_lock_irqsave(&desc->lock, flags);
+    spin_lock(&vector_lock);
+    _clear_irq_vector(desc);
+    spin_unlock(&vector_lock);
+    spin_unlock_irqrestore(&desc->lock, flags);
+}
+
 /*
  * Dynamic irq allocate and deallocation for MSI
  */
@@ -281,67 +341,6 @@ void destroy_irq(unsigned int irq)
     xfree(action);
 }
 
-static void _clear_irq_vector(struct irq_desc *desc)
-{
-    unsigned int cpu, old_vector, irq = desc->irq;
-    unsigned int vector = desc->arch.vector;
-    cpumask_t *tmp_mask = this_cpu(scratch_cpumask);
-
-    BUG_ON(!valid_irq_vector(vector));
-
-    /* Always clear desc->arch.vector */
-    cpumask_and(tmp_mask, desc->arch.cpu_mask, &cpu_online_map);
-
-    for_each_cpu(cpu, tmp_mask)
-    {
-        ASSERT( per_cpu(vector_irq, cpu)[vector] == irq );
-        per_cpu(vector_irq, cpu)[vector] = ~irq;
-    }
-
-    desc->arch.vector = IRQ_VECTOR_UNASSIGNED;
-    cpumask_clear(desc->arch.cpu_mask);
-
-    if ( desc->arch.used_vectors )
-    {
-        ASSERT(test_bit(vector, desc->arch.used_vectors));
-        clear_bit(vector, desc->arch.used_vectors);
-    }
-
-    desc->arch.used = IRQ_UNUSED;
-
-    trace_irq_mask(TRC_HW_IRQ_CLEAR_VECTOR, irq, vector, tmp_mask);
-
-    if ( likely(!desc->arch.move_in_progress) )
-        return;
-
-    /* If we were in motion, also clear desc->arch.old_vector */
-    old_vector = desc->arch.old_vector;
-    cpumask_and(tmp_mask, desc->arch.old_cpu_mask, &cpu_online_map);
-
-    for_each_cpu(cpu, tmp_mask)
-    {
-        ASSERT( per_cpu(vector_irq, cpu)[old_vector] == irq );
-        TRACE_3D(TRC_HW_IRQ_MOVE_FINISH, irq, old_vector, cpu);
-        per_cpu(vector_irq, cpu)[old_vector] = ~irq;
-    }
-
-    release_old_vec(desc);
-
-    desc->arch.move_in_progress = 0;
-}
-
-void clear_irq_vector(int irq)
-{
-    struct irq_desc *desc = irq_to_desc(irq);
-    unsigned long flags;
-
-    spin_lock_irqsave(&desc->lock, flags);
-    spin_lock(&vector_lock);
-    _clear_irq_vector(desc);
-    spin_unlock(&vector_lock);
-    spin_unlock_irqrestore(&desc->lock, flags);
-}
-
 int irq_to_vector(int irq)
 {
     int vector = IRQ_VECTOR_UNASSIGNED;
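On the one functional change here: __init places a function in a
section that gets discarded once boot completes, so marking
clear_irq_vector() this way both documents and enforces its
boot-time-only use. A stand-alone sketch of the underlying mechanism
via a GCC section attribute; the macro and section names below are
illustrative, and Xen's real __init machinery involves more than this:

    #include <stdio.h>

    /* Illustrative stand-in for an init-only text section marker. */
    #define __init_model __attribute__((__section__(".init.text.model")))

    static void __init_model boot_only(void)
    {
        printf("runs during boot; the section can be reclaimed later\n");
    }

    int main(void)
    {
        boot_only();   /* only safe before the init sections are freed */
        return 0;
    }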