From patchwork Fri Feb 28 09:33:33 2020
X-Patchwork-Submitter: Roger Pau Monné
X-Patchwork-Id: 11411909
From: Roger Pau Monne
Date: Fri, 28 Feb 2020 10:33:33 +0100
Message-ID: <20200228093334.36586-2-roger.pau@citrix.com>
X-Mailer: git-send-email 2.25.0
In-Reply-To: <20200228093334.36586-1-roger.pau@citrix.com>
References: <20200228093334.36586-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v4 1/2] x86/smp: use a dedicated CPU mask in send_IPI_mask
Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monne

Some callers of send_IPI_mask pass the scratch cpumask as the mask
parameter of send_IPI_mask, so the scratch cpumask cannot be used by
the function itself. The following trace, obtained with a debug patch,
shows one of those callers:

(XEN) scratch CPU mask already in use by arch/x86/mm.c#_get_page_type+0x1f9/0x1abf
(XEN) Xen BUG at smp.c:45
[...]
(XEN) Xen call trace:
(XEN)    [] R scratch_cpumask+0xd3/0xf9
(XEN)    [] F send_IPI_mask+0x72/0x1ca
(XEN)    [] F flush_area_mask+0x10c/0x16c
(XEN)    [] F arch/x86/mm.c#_get_page_type+0x3ff/0x1abf
(XEN)    [] F get_page_type+0xe/0x2c
(XEN)    [] F pv_set_gdt+0xa1/0x2aa
(XEN)    [] F arch_set_info_guest+0x1196/0x16ba
(XEN)    [] F default_initialise_vcpu+0xc7/0xd4
(XEN)    [] F arch_initialise_vcpu+0x61/0xcd
(XEN)    [] F do_vcpu_op+0x219/0x690
(XEN)    [] F pv_hypercall+0x2f6/0x593
(XEN)    [] F lstar_enter+0x112/0x120

_get_page_type uses the scratch cpumask to call flush_tlb_mask, which
in turn calls send_IPI_mask. Fix this by using a dedicated per-CPU
cpumask in send_IPI_mask.
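The aliasing hazard the trace demonstrates can be sketched with a small stand-alone model (hypothetical names and types, not the real Xen cpumask API): if a callee writes into the same per-CPU scratch mask that its caller passed as input, the caller's mask is silently corrupted.

```c
/* Simplified stand-in for Xen's cpumask_t; real masks are bitmaps
 * sized by NR_CPUS, but one word is enough to show the hazard. */
typedef struct { unsigned long bits; } cpumask_t;

/* Hypothetical model of send_IPI_mask(): it needs a private copy of
 * @mask with the local CPU cleared.  If @scratch aliases @mask (the
 * bug), clearing the local CPU's bit also rewrites the caller's mask
 * in place. */
static unsigned long send_ipi_mask_model(const cpumask_t *mask,
                                         cpumask_t *scratch,
                                         unsigned int local_cpu)
{
    scratch->bits = mask->bits & ~(1UL << local_cpu);
    return scratch->bits;
}
```

Calling `send_ipi_mask_model(&m, &m, 0)` (the shared-scratch case) leaves `m` with CPU 0's bit cleared behind the caller's back, while passing a dedicated second mask leaves `m` untouched — which is exactly the property the patch restores.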
Fixes: 5500d265a2a8 ('x86/smp: use APIC ALLBUT destination shorthand when possible')
Signed-off-by: Roger Pau Monné
---
 xen/arch/x86/smp.c     | 4 +++-
 xen/arch/x86/smpboot.c | 9 ++++++++-
 2 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/smp.c b/xen/arch/x86/smp.c
index 0461812cf6..072638f0f6 100644
--- a/xen/arch/x86/smp.c
+++ b/xen/arch/x86/smp.c
@@ -59,6 +59,8 @@ static void send_IPI_shortcut(unsigned int shortcut, int vector,
     apic_write(APIC_ICR, cfg);
 }
 
+DECLARE_PER_CPU(cpumask_var_t, send_ipi_cpumask);
+
 /*
  * send_IPI_mask(cpumask, vector): sends @vector IPI to CPUs in @cpumask,
  * excluding the local CPU. @cpumask may be empty.
@@ -67,7 +69,7 @@ static void send_IPI_shortcut(unsigned int shortcut, int vector,
 void send_IPI_mask(const cpumask_t *mask, int vector)
 {
     bool cpus_locked = false;
-    cpumask_t *scratch = this_cpu(scratch_cpumask);
+    cpumask_t *scratch = this_cpu(send_ipi_cpumask);
 
     if ( in_irq() || in_mce_handler() || in_nmi_handler() )
     {
diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index ad49f2dcd7..6c548b0b53 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -57,6 +57,9 @@ DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_core_mask);
 DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, scratch_cpumask);
 static cpumask_t scratch_cpu0mask;
 
+DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, send_ipi_cpumask);
+static cpumask_t send_ipi_cpu0mask;
+
 cpumask_t cpu_online_map __read_mostly;
 EXPORT_SYMBOL(cpu_online_map);
 
@@ -930,6 +933,8 @@ static void cpu_smpboot_free(unsigned int cpu, bool remove)
         FREE_CPUMASK_VAR(per_cpu(cpu_core_mask, cpu));
         if ( per_cpu(scratch_cpumask, cpu) != &scratch_cpu0mask )
             FREE_CPUMASK_VAR(per_cpu(scratch_cpumask, cpu));
+        if ( per_cpu(send_ipi_cpumask, cpu) != &send_ipi_cpu0mask )
+            FREE_CPUMASK_VAR(per_cpu(send_ipi_cpumask, cpu));
     }
 
     cleanup_cpu_root_pgt(cpu);
@@ -1034,7 +1039,8 @@ static int cpu_smpboot_alloc(unsigned int cpu)
 
     if ( !(cond_zalloc_cpumask_var(&per_cpu(cpu_sibling_mask, cpu)) &&
            cond_zalloc_cpumask_var(&per_cpu(cpu_core_mask, cpu)) &&
-           cond_alloc_cpumask_var(&per_cpu(scratch_cpumask, cpu))) )
+           cond_alloc_cpumask_var(&per_cpu(scratch_cpumask, cpu)) &&
+           cond_alloc_cpumask_var(&per_cpu(send_ipi_cpumask, cpu))) )
         goto out;
 
     rc = 0;
@@ -1175,6 +1181,7 @@ void __init smp_prepare_boot_cpu(void)
     cpumask_set_cpu(cpu, &cpu_present_map);
 #if NR_CPUS > 2 * BITS_PER_LONG
     per_cpu(scratch_cpumask, cpu) = &scratch_cpu0mask;
+    per_cpu(send_ipi_cpumask, cpu) = &send_ipi_cpu0mask;
 #endif
 
     get_cpu_info()->use_pv_cr3 = false;
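A note on the allocation pattern the patch extends: the boot CPU is set up before the heap allocator is usable, so its mask is pointed at a static `send_ipi_cpu0mask`, and the free path must check for that static mask before freeing. A minimal stand-alone model of this pattern (hypothetical helper names, plain `calloc`/`free` instead of Xen's `cond_alloc_cpumask_var`/`FREE_CPUMASK_VAR`):

```c
#include <stdlib.h>

#define NR_CPUS 4
typedef unsigned long cpumask_t;            /* simplified one-word mask */

static cpumask_t send_ipi_cpu0mask;         /* static mask for CPU 0 */
static cpumask_t *send_ipi_cpumask[NR_CPUS];/* per-CPU mask pointers */

/* The boot CPU cannot allocate yet, so it borrows the static mask. */
static void smp_prepare_boot_cpu_model(void)
{
    send_ipi_cpumask[0] = &send_ipi_cpu0mask;
}

/* Secondary CPUs allocate their mask dynamically at bringup. */
static int cpu_smpboot_alloc_model(unsigned int cpu)
{
    send_ipi_cpumask[cpu] = calloc(1, sizeof(cpumask_t));
    return send_ipi_cpumask[cpu] ? 0 : -1;
}

/* Mirrors the patch's free path: the static CPU0 mask must never be
 * handed to free(). */
static void cpu_smpboot_free_model(unsigned int cpu)
{
    if (send_ipi_cpumask[cpu] != &send_ipi_cpu0mask)
        free(send_ipi_cpumask[cpu]);
    send_ipi_cpumask[cpu] = NULL;
}
```

This is why the hunks in `cpu_smpboot_free` and `smp_prepare_boot_cpu` come as a pair: every per-CPU mask that can point at a static boot-CPU fallback needs a matching guard on the free side.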