From patchwork Mon Feb 17 18:43:24 2020
X-Patchwork-Submitter: Roger Pau Monné
X-Patchwork-Id: 11387185
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monne
Subject: [Xen-devel] [PATCH v2 6/6] x86: add accessors for scratch cpu mask
Date: Mon, 17 Feb 2020 19:43:24 +0100
Message-ID: <20200217184324.73762-7-roger.pau@citrix.com>
In-Reply-To: <20200217184324.73762-1-roger.pau@citrix.com>
References: <20200217184324.73762-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.25.0

Current usage of the per-CPU scratch cpumask is dangerous: there is no
way to tell whether the mask is already in use short of manually
inspecting all callers and possible call paths. This is unsafe and
unreliable, so introduce a minimal get/put infrastructure to prevent
nested usage of the scratch mask, as well as usage in interrupt
context.

Move the definition of scratch_cpumask to smp.c in order to keep the
definition and its accessors as close together as possible.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v1:
 - Use __builtin_return_address(0) instead of __func__.
 - Move the definition of scratch_cpumask and the scratch_cpumask
   accessor to smp.c.
 - Do not allow usage in #MC or #NMI context.
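For illustration, the intended calling pattern looks like the sketch
below. This snippet is not part of the patch: example_flush() and its
argument are made-up names, while the cpumask helpers are the existing
Xen ones used throughout the diff.

    /* Hypothetical caller, for illustration only -- not part of the diff. */
    static void example_flush(const cpumask_t *target)
    {
        cpumask_t *mask = get_scratch_cpumask();    /* claim the per-CPU mask */

        cpumask_and(mask, target, &cpu_online_map);
        if ( !cpumask_empty(mask) )
            flush_tlb_mask(mask);

        put_scratch_cpumask();                      /* release before returning */
    }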
---
 xen/arch/x86/io_apic.c    |  6 ++++--
 xen/arch/x86/irq.c        | 13 ++++++++++---
 xen/arch/x86/mm.c         | 30 +++++++++++++++++++++---------
 xen/arch/x86/msi.c        |  4 +++-
 xen/arch/x86/smp.c        | 25 +++++++++++++++++++++++++
 xen/arch/x86/smpboot.c    |  1 -
 xen/include/asm-x86/smp.h | 10 ++++++++++
 7 files changed, 73 insertions(+), 16 deletions(-)

diff --git a/xen/arch/x86/io_apic.c b/xen/arch/x86/io_apic.c
index e98e08e9c8..4ee261b632 100644
--- a/xen/arch/x86/io_apic.c
+++ b/xen/arch/x86/io_apic.c
@@ -2236,10 +2236,11 @@ int io_apic_set_pci_routing (int ioapic, int pin, int irq, int edge_level, int a
     entry.vector = vector;
 
     if (cpumask_intersects(desc->arch.cpu_mask, TARGET_CPUS)) {
-        cpumask_t *mask = this_cpu(scratch_cpumask);
+        cpumask_t *mask = get_scratch_cpumask();
 
         cpumask_and(mask, desc->arch.cpu_mask, TARGET_CPUS);
         SET_DEST(entry, logical, cpu_mask_to_apicid(mask));
+        put_scratch_cpumask();
     } else {
         printk(XENLOG_ERR "IRQ%d: no target CPU (%*pb vs %*pb)\n",
                irq, CPUMASK_PR(desc->arch.cpu_mask), CPUMASK_PR(TARGET_CPUS));
@@ -2433,10 +2434,11 @@ int ioapic_guest_write(unsigned long physbase, unsigned int reg, u32 val)
 
         if ( cpumask_intersects(desc->arch.cpu_mask, TARGET_CPUS) )
         {
-            cpumask_t *mask = this_cpu(scratch_cpumask);
+            cpumask_t *mask = get_scratch_cpumask();
 
             cpumask_and(mask, desc->arch.cpu_mask, TARGET_CPUS);
             SET_DEST(rte, logical, cpu_mask_to_apicid(mask));
+            put_scratch_cpumask();
         }
         else
         {
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index cc2eb8e925..7ecf5376e3 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -196,7 +196,7 @@ static void _clear_irq_vector(struct irq_desc *desc)
 {
     unsigned int cpu, old_vector, irq = desc->irq;
     unsigned int vector = desc->arch.vector;
-    cpumask_t *tmp_mask = this_cpu(scratch_cpumask);
+    cpumask_t *tmp_mask = get_scratch_cpumask();
 
     BUG_ON(!valid_irq_vector(vector));
 
@@ -223,7 +223,10 @@ static void _clear_irq_vector(struct irq_desc *desc)
     trace_irq_mask(TRC_HW_IRQ_CLEAR_VECTOR, irq, vector, tmp_mask);
 
     if ( likely(!desc->arch.move_in_progress) )
+    {
+        put_scratch_cpumask();
         return;
+    }
 
     /* If we were in motion, also clear desc->arch.old_vector */
     old_vector = desc->arch.old_vector;
@@ -236,6 +239,7 @@ static void _clear_irq_vector(struct irq_desc *desc)
         per_cpu(vector_irq, cpu)[old_vector] = ~irq;
     }
 
+    put_scratch_cpumask();
     release_old_vec(desc);
 
     desc->arch.move_in_progress = 0;
@@ -1152,10 +1156,11 @@ static void irq_guest_eoi_timer_fn(void *data)
         break;
 
     case ACKTYPE_EOI:
-        cpu_eoi_map = this_cpu(scratch_cpumask);
+        cpu_eoi_map = get_scratch_cpumask();
         cpumask_copy(cpu_eoi_map, action->cpu_eoi_map);
         spin_unlock_irq(&desc->lock);
         on_selected_cpus(cpu_eoi_map, set_eoi_ready, desc, 0);
+        put_scratch_cpumask();
         return;
     }
 
@@ -2531,12 +2536,12 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
     unsigned int irq;
     static int warned;
     struct irq_desc *desc;
+    cpumask_t *affinity = get_scratch_cpumask();
 
     for ( irq = 0; irq < nr_irqs; irq++ )
     {
         bool break_affinity = false, set_affinity = true;
         unsigned int vector;
-        cpumask_t *affinity = this_cpu(scratch_cpumask);
 
         if ( irq == 2 )
             continue;
@@ -2640,6 +2645,8 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
                    irq, CPUMASK_PR(affinity));
     }
 
+    put_scratch_cpumask();
+
     /* That doesn't seem sufficient.  Give it 1ms. */
     local_irq_enable();
     mdelay(1);
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index edc238e51a..75b6114c1c 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -1261,7 +1261,7 @@ void put_page_from_l1e(l1_pgentry_t l1e, struct domain *l1e_owner)
          (l1e_owner == pg_owner) )
     {
         struct vcpu *v;
-        cpumask_t *mask = this_cpu(scratch_cpumask);
+        cpumask_t *mask = get_scratch_cpumask();
 
         cpumask_clear(mask);
 
@@ -1278,6 +1278,7 @@ void put_page_from_l1e(l1_pgentry_t l1e, struct domain *l1e_owner)
 
         if ( !cpumask_empty(mask) )
             flush_tlb_mask(mask);
+        put_scratch_cpumask();
     }
 #endif /* CONFIG_PV_LDT_PAGING */
     put_page(page);
@@ -2902,7 +2903,7 @@ static int _get_page_type(struct page_info *page, unsigned long type,
                  * vital that no other CPUs are left with mappings of a frame
                  * which is about to become writeable to the guest.
                  */
-                cpumask_t *mask = this_cpu(scratch_cpumask);
+                cpumask_t *mask = get_scratch_cpumask();
 
                 BUG_ON(in_irq());
                 cpumask_copy(mask, d->dirty_cpumask);
@@ -2918,6 +2919,7 @@ static int _get_page_type(struct page_info *page, unsigned long type,
                     perfc_incr(need_flush_tlb_flush);
                     flush_tlb_mask(mask);
                 }
+                put_scratch_cpumask();
 
                 /* We lose existing type and validity. */
                 nx &= ~(PGT_type_mask | PGT_validated);
@@ -3634,7 +3636,7 @@ long do_mmuext_op(
         case MMUEXT_TLB_FLUSH_MULTI:
         case MMUEXT_INVLPG_MULTI:
         {
-            cpumask_t *mask = this_cpu(scratch_cpumask);
+            cpumask_t *mask = get_scratch_cpumask();
 
             if ( unlikely(currd != pg_owner) )
                 rc = -EPERM;
@@ -3644,12 +3646,17 @@ long do_mmuext_op(
                                    mask)) )
                 rc = -EINVAL;
             if ( unlikely(rc) )
+            {
+                put_scratch_cpumask();
                 break;
+            }
 
             if ( op.cmd == MMUEXT_TLB_FLUSH_MULTI )
                 flush_tlb_mask(mask);
             else if ( __addr_ok(op.arg1.linear_addr) )
                 flush_tlb_one_mask(mask, op.arg1.linear_addr);
+            put_scratch_cpumask();
+
             break;
         }
 
@@ -3682,7 +3689,7 @@ long do_mmuext_op(
             else if ( likely(cache_flush_permitted(currd)) )
             {
                 unsigned int cpu;
-                cpumask_t *mask = this_cpu(scratch_cpumask);
+                cpumask_t *mask = get_scratch_cpumask();
 
                 cpumask_clear(mask);
                 for_each_online_cpu(cpu)
@@ -3690,6 +3697,7 @@ long do_mmuext_op(
                                          per_cpu(cpu_sibling_mask, cpu)) )
                         __cpumask_set_cpu(cpu, mask);
                 flush_mask(mask, FLUSH_CACHE);
+                put_scratch_cpumask();
             }
             else
                 rc = -EINVAL;
@@ -4155,12 +4163,13 @@ long do_mmu_update(
                  * Force other vCPU-s of the affected guest to pick up L4 entry
                  * changes (if any).
                  */
-                unsigned int cpu = smp_processor_id();
-                cpumask_t *mask = per_cpu(scratch_cpumask, cpu);
+                cpumask_t *mask = get_scratch_cpumask();
 
-                cpumask_andnot(mask, pt_owner->dirty_cpumask, cpumask_of(cpu));
+                cpumask_andnot(mask, pt_owner->dirty_cpumask,
+                               cpumask_of(smp_processor_id()));
                 if ( !cpumask_empty(mask) )
                     flush_mask(mask, FLUSH_TLB_GLOBAL | FLUSH_ROOT_PGTBL);
+                put_scratch_cpumask();
             }
 
             perfc_add(num_page_updates, i);
@@ -4352,7 +4361,7 @@ static int __do_update_va_mapping(
             mask = d->dirty_cpumask;
             break;
         default:
-            mask = this_cpu(scratch_cpumask);
+            mask = get_scratch_cpumask();
             rc = vcpumask_to_pcpumask(d,
                                       const_guest_handle_from_ptr(bmap_ptr,
                                                                   void),
                                       mask);
@@ -4372,7 +4381,7 @@ static int __do_update_va_mapping(
             mask = d->dirty_cpumask;
             break;
         default:
-            mask = this_cpu(scratch_cpumask);
+            mask = get_scratch_cpumask();
             rc = vcpumask_to_pcpumask(d,
                                       const_guest_handle_from_ptr(bmap_ptr,
                                                                   void),
                                       mask);
@@ -4383,6 +4392,9 @@ static int __do_update_va_mapping(
         break;
     }
 
+    if ( mask && mask != d->dirty_cpumask )
+        put_scratch_cpumask();
+
     return rc;
 }
 
diff --git a/xen/arch/x86/msi.c b/xen/arch/x86/msi.c
index 161ee60dbe..6624ea20d0 100644
--- a/xen/arch/x86/msi.c
+++ b/xen/arch/x86/msi.c
@@ -159,13 +159,15 @@ void msi_compose_msg(unsigned vector, const cpumask_t *cpu_mask, struct msi_msg
 
     if ( cpu_mask )
     {
-        cpumask_t *mask = this_cpu(scratch_cpumask);
+        cpumask_t *mask;
 
         if ( !cpumask_intersects(cpu_mask, &cpu_online_map) )
             return;
 
+        mask = get_scratch_cpumask();
         cpumask_and(mask, cpu_mask, &cpu_online_map);
         msg->dest32 = cpu_mask_to_apicid(mask);
+        put_scratch_cpumask();
     }
 
     msg->address_hi = MSI_ADDR_BASE_HI;
diff --git a/xen/arch/x86/smp.c b/xen/arch/x86/smp.c
index 0a9a9e7f02..8ad2b6912f 100644
--- a/xen/arch/x86/smp.c
+++ b/xen/arch/x86/smp.c
@@ -25,6 +25,31 @@
 #include
 #include
 
+DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, scratch_cpumask);
+
+#ifndef NDEBUG
+cpumask_t *scratch_cpumask(bool use)
+{
+    static DEFINE_PER_CPU(void *, scratch_cpumask_use);
+
+    /*
+     * Due to reentrancy scratch cpumask cannot be used in IRQ, #MC or #NMI
+     * context.
+     */
+    BUG_ON(in_irq() || in_mc() || in_nmi());
+
+    if ( use && unlikely(this_cpu(scratch_cpumask_use)) )
+    {
+        printk("%p: scratch CPU mask already in use by %p\n",
+               __builtin_return_address(0), this_cpu(scratch_cpumask_use));
+        BUG();
+    }
+    this_cpu(scratch_cpumask_use) = use ? __builtin_return_address(0) : NULL;
+
+    return use ? this_cpu(scratch_cpumask) : NULL;
+}
+#endif
+
 /* Helper functions to prepare APIC register values. */
 static unsigned int prepare_ICR(unsigned int shortcut, int vector)
 {
diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 82e89201b3..a2ac3adb38 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -54,7 +54,6 @@ DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_sibling_mask);
 /* representing HT and core siblings of each logical CPU */
 DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_core_mask);
 
-DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, scratch_cpumask);
 static cpumask_t scratch_cpu0mask;
 
 DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, send_ipi_cpumask);
diff --git a/xen/include/asm-x86/smp.h b/xen/include/asm-x86/smp.h
index 92d69a5ea0..40ab6c251d 100644
--- a/xen/include/asm-x86/smp.h
+++ b/xen/include/asm-x86/smp.h
@@ -23,6 +23,16 @@ DECLARE_PER_CPU(cpumask_var_t, cpu_sibling_mask);
 DECLARE_PER_CPU(cpumask_var_t, cpu_core_mask);
 DECLARE_PER_CPU(cpumask_var_t, scratch_cpumask);
 
+#ifndef NDEBUG
+/* Not to be called directly, use {get/put}_scratch_cpumask(). */
+cpumask_t *scratch_cpumask(bool use);
+#define get_scratch_cpumask() scratch_cpumask(true)
+#define put_scratch_cpumask() ((void)scratch_cpumask(false))
+#else
+#define get_scratch_cpumask() this_cpu(scratch_cpumask)
+#define put_scratch_cpumask()
+#endif
+
 /*
  * Do we, for platform reasons, need to actually keep CPUs online when we
  * would otherwise prefer them to be off?
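For completeness, here is a sketch of the failure mode the debug
accessors are meant to catch; outer() and inner() are hypothetical
names used only for this example and do not appear in the patch:

    /* Hypothetical nested usage, for illustration only. */
    static void inner(void)
    {
        cpumask_t *mask = get_scratch_cpumask();    /* second get: a debug build
                                                       prints both owners' return
                                                       addresses and BUG()s */
        /* ... use mask ... */
        put_scratch_cpumask();
    }

    static void outer(void)
    {
        cpumask_t *mask = get_scratch_cpumask();

        inner();    /* in a release build this would silently clobber *mask */
        put_scratch_cpumask();
    }

In debug builds the accessors go through scratch_cpumask(), which
records the __builtin_return_address(0) of the current owner and BUG()s
on nested gets or on any use in IRQ, #MC or #NMI context; in release
(NDEBUG) builds they compile down to the plain this_cpu(scratch_cpumask)
access with no tracking overhead.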