From patchwork Wed May 8 13:03:59 2019
From: "Jan Beulich"
To: "xen-devel"
Cc: Andrew Cooper, Wei Liu, Roger Pau Monne
Date: Wed, 08 May 2019 07:03:59 -0600
Message-Id: <5CD2D3BF020000780022CD03@prv1-mh.provo.novell.com>
Subject: [Xen-devel] [PATCH v2 02/12] x86/IRQ: deal with move cleanup count state in fixup_irqs()

The cleanup IPI may get sent immediately before a CPU gets removed from
the online map. In such a case the IPI would get handled on the CPU
being offlined no earlier than in the interrupts disabled window after
fixup_irqs()' main loop. This is too late, however, because a possible
affinity change may incur the need for vector assignment, which will
fail when the IRQ's move cleanup count is still non-zero.
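For illustration only, here is a minimal standalone sketch of the accounting
problem, outside of Xen. The names (model_desc, send_cleanup(),
handle_cleanup(), assign_vector(), weight()) are hypothetical and merely
mimic the relevant behaviour of send_cleanup_vector(), the cleanup IPI
handler and the vector assignment path: the cleanup count gets stuck above
zero once a targeted CPU goes offline before handling its IPI, and a later
vector assignment then keeps failing.

/*
 * Standalone model (hypothetical helpers, not Xen code): the cleanup count
 * is set to the number of CPUs the IPI is sent to; if one of those CPUs
 * goes offline before handling the IPI, the count never reaches zero and
 * a subsequent vector (re)assignment keeps failing.
 */
#include <stdio.h>

struct model_desc {
    unsigned int old_cpu_mask;       /* CPUs still using the old vector */
    unsigned int move_cleanup_count; /* outstanding cleanup IPIs */
};

static unsigned int online_map = 0xf; /* four CPUs, all online initially */

static unsigned int weight(unsigned int mask)
{
    return __builtin_popcount(mask);
}

/* Models send_cleanup_vector(): count the online targets and "send" IPIs. */
static void send_cleanup(struct model_desc *desc)
{
    desc->move_cleanup_count = weight(desc->old_cpu_mask & online_map);
}

/* Models the per-CPU cleanup IPI handler decrementing the count. */
static void handle_cleanup(struct model_desc *desc, unsigned int cpu)
{
    if ( desc->old_cpu_mask & (1u << cpu) )
    {
        desc->old_cpu_mask &= ~(1u << cpu);
        desc->move_cleanup_count--;
    }
}

/* Models a vector assignment attempt during an affinity change. */
static int assign_vector(const struct model_desc *desc)
{
    return desc->move_cleanup_count ? -1 /* "try again later" */ : 0;
}

int main(void)
{
    struct model_desc desc = { .old_cpu_mask = 0x6 /* CPUs 1 and 2 */ };

    send_cleanup(&desc);      /* count = 2; IPIs "sent" to CPUs 1 and 2 */
    handle_cleanup(&desc, 1); /* CPU 1 handles its IPI; count = 1 */
    online_map &= ~(1u << 2); /* CPU 2 goes offline before handling its IPI */

    /* Without an adjustment the count is stuck at 1 and assignment fails. */
    printf("cleanup count = %u, assign_vector() = %d\n",
           desc.move_cleanup_count, assign_vector(&desc));
    return 0;
}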
To fix this
- record the set of CPUs the cleanup IPI actually gets sent to alongside
  setting their count,
- adjust the count in fixup_irqs(), accounting for all CPUs that the
  cleanup IPI was sent to, but that are no longer online,
- bail early from the cleanup IPI handler when the CPU is no longer
  online, to prevent double accounting.

Signed-off-by: Jan Beulich
Reviewed-by: Roger Pau Monné

--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -665,6 +665,9 @@ void irq_move_cleanup_interrupt(struct c
     ack_APIC_irq();
 
     me = smp_processor_id();
+    if ( !cpu_online(me) )
+        return;
+
     for ( vector = FIRST_DYNAMIC_VECTOR;
           vector <= LAST_HIPRIORITY_VECTOR; vector++)
     {
@@ -724,11 +727,14 @@ unlock:
 
 static void send_cleanup_vector(struct irq_desc *desc)
 {
-    cpumask_t cleanup_mask;
+    cpumask_and(desc->arch.old_cpu_mask, desc->arch.old_cpu_mask,
+                &cpu_online_map);
+    desc->arch.move_cleanup_count = cpumask_weight(desc->arch.old_cpu_mask);
 
-    cpumask_and(&cleanup_mask, desc->arch.old_cpu_mask, &cpu_online_map);
-    desc->arch.move_cleanup_count = cpumask_weight(&cleanup_mask);
-    send_IPI_mask(&cleanup_mask, IRQ_MOVE_CLEANUP_VECTOR);
+    if ( desc->arch.move_cleanup_count )
+        send_IPI_mask(desc->arch.old_cpu_mask, IRQ_MOVE_CLEANUP_VECTOR);
+    else
+        release_old_vec(desc);
 
     desc->arch.move_in_progress = 0;
 }
@@ -2401,6 +2407,16 @@ void fixup_irqs(const cpumask_t *mask, b
              vector <= LAST_HIPRIORITY_VECTOR )
             cpumask_and(desc->arch.cpu_mask, desc->arch.cpu_mask, mask);
 
+        if ( desc->arch.move_cleanup_count )
+        {
+            /* The cleanup IPI may have got sent while we were still online. */
+            cpumask_andnot(&affinity, desc->arch.old_cpu_mask,
+                           &cpu_online_map);
+            desc->arch.move_cleanup_count -= cpumask_weight(&affinity);
+            if ( !desc->arch.move_cleanup_count )
+                release_old_vec(desc);
+        }
+
         cpumask_copy(&affinity, desc->affinity);
         if ( !desc->action || cpumask_subset(&affinity, mask) )
         {
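Continuing the standalone sketch from above (again with hypothetical helpers,
not the actual Xen functions), the adjustment this patch adds to fixup_irqs()
amounts to: subtract from the count every CPU the cleanup IPI was sent to but
which is no longer online, and release the old vector once the count reaches
zero.

/*
 * Continuation of the standalone sketch above (hypothetical, not Xen code):
 * CPUs that were sent the cleanup IPI but went offline can no longer
 * decrement the count themselves, so subtract them here.
 */
static void fixup_cleanup_count(struct model_desc *desc)
{
    unsigned int gone = desc->old_cpu_mask & ~online_map;

    desc->move_cleanup_count -= weight(gone);
    if ( !desc->move_cleanup_count )
        printf("old vector can be released\n"); /* release_old_vec() */
}

/*
 * In the scenario above:
 *   fixup_cleanup_count(&desc);  // count drops from 1 to 0
 *   assign_vector(&desc);        // now succeeds and returns 0
 */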