From patchwork Mon Feb 17 11:17:39 2020
X-Patchwork-Submitter: Andrew Cooper
X-Patchwork-Id: 11386133
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Date: Mon, 17 Feb 2020 11:17:39 +0000
Message-ID: <20200217111740.7298-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20200217111740.7298-1-andrew.cooper3@citrix.com>
References: <20200217111740.7298-1-andrew.cooper3@citrix.com>
Subject: [Xen-devel] [PATCH 2/3] xen: Move async_exception_* infrastructure into x86
Cc: Stefano Stabellini, Julien Grall, Wei Liu, Andrew Cooper,
    Jan Beulich, Volodymyr Babchuk, Roger Pau Monné

The async_exception_{state,mask} infrastructure is implemented in common
code, but is limited to x86 because of the VCPU_TRAP_LAST ifdef-ary.

The internals are very x86 specific (and even then, in need of correction),
and won't be of interest to other architectures.  Move it all into x86
specific code.

No functional change.
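[Editor's note: for readers unfamiliar with this bookkeeping, here is a
minimal, self-contained sketch (not Xen code) of the state being relocated.
The field layout and the VCPU_TRAP_* values mirror the diff below; the
struct name and the main() driver are illustrative only.]

/*
 * Standalone sketch of the async_exception bookkeeping this patch moves
 * into struct arch_vcpu.  Layout mirrors the diff; driver is illustrative.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define VCPU_TRAP_NONE 0
#define VCPU_TRAP_NMI  1
#define VCPU_TRAP_MCE  2
#define VCPU_TRAP_LAST VCPU_TRAP_MCE

struct arch_vcpu_sketch {
    struct {
        bool pending;
        uint8_t old_mask;
    } async_exception_state[VCPU_TRAP_LAST];
/* Traps are numbered from 1, so slot (t)-1 holds trap t's state. */
#define async_exception_state(t) async_exception_state[(t) - 1]
    uint8_t async_exception_mask; /* bit (1 << t) set => trap t masked */
};

int main(void)
{
    struct arch_vcpu_sketch v;

    memset(&v, 0, sizeof(v));

    /* Latch a pending NMI, as e.g. VCPUOP_send_nmi does. */
    v.async_exception_state(VCPU_TRAP_NMI).pending = true;

    /* On delivery, save the old mask and mask further NMIs. */
    v.async_exception_state(VCPU_TRAP_NMI).old_mask = v.async_exception_mask;
    v.async_exception_mask |= 1u << VCPU_TRAP_NMI;

    printf("NMI pending=%d masked=%d\n",
           v.async_exception_state(VCPU_TRAP_NMI).pending,
           !!(v.async_exception_mask & (1u << VCPU_TRAP_NMI)));
    return 0;
}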
Signed-off-by: Andrew Cooper
Reviewed-by: Jan Beulich
---
CC: Jan Beulich
CC: Wei Liu
CC: Roger Pau Monné
CC: Stefano Stabellini
CC: Julien Grall
CC: Volodymyr Babchuk
---
 xen/arch/x86/cpu/mcheck/vmce.c    |  2 +-
 xen/arch/x86/cpu/vpmu.c           |  2 +-
 xen/arch/x86/domain.c             | 12 ++++++++++++
 xen/arch/x86/domctl.c             |  2 +-
 xen/arch/x86/hvm/irq.c            |  8 ++++----
 xen/arch/x86/hvm/vioapic.c        |  2 +-
 xen/arch/x86/hvm/vlapic.c         |  2 +-
 xen/arch/x86/nmi.c                |  4 ++--
 xen/arch/x86/oprofile/nmi_int.c   |  2 +-
 xen/arch/x86/pv/callback.c        |  2 +-
 xen/arch/x86/pv/iret.c            | 13 +++++++------
 xen/arch/x86/pv/traps.c           |  2 +-
 xen/arch/x86/x86_64/asm-offsets.c | 10 +++++-----
 xen/common/domain.c               | 15 ---------------
 xen/include/asm-x86/domain.h      |  8 ++++++++
 xen/include/xen/sched.h           | 11 -----------
 16 files changed, 46 insertions(+), 51 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index 4f5de07e01..816ef61ad4 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -412,7 +412,7 @@ int inject_vmce(struct domain *d, int vcpu)
 
     if ( (is_hvm_domain(d) ||
           pv_trap_callback_registered(v, TRAP_machine_check)) &&
-         !test_and_set_bool(v->mce_pending) )
+         !test_and_set_bool(v->arch.mce_pending) )
     {
         mce_printk(MCE_VERBOSE, "MCE: inject vMCE to %pv\n", v);
         vcpu_kick(v);
diff --git a/xen/arch/x86/cpu/vpmu.c b/xen/arch/x86/cpu/vpmu.c
index 3c778450ac..e50d478d23 100644
--- a/xen/arch/x86/cpu/vpmu.c
+++ b/xen/arch/x86/cpu/vpmu.c
@@ -329,7 +329,7 @@ void vpmu_do_interrupt(struct cpu_user_regs *regs)
             vlapic_set_irq(vlapic, vlapic_lvtpc & APIC_VECTOR_MASK, 0);
             break;
         case APIC_MODE_NMI:
-            sampling->nmi_pending = 1;
+            sampling->arch.nmi_pending = true;
             break;
         }
 #endif
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 66150abf4c..fe63c23676 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1246,6 +1246,10 @@ int arch_initialise_vcpu(struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
 
 int arch_vcpu_reset(struct vcpu *v)
 {
+    v->arch.async_exception_mask = 0;
+    memset(v->arch.async_exception_state, 0,
+           sizeof(v->arch.async_exception_state));
+
     if ( is_pv_vcpu(v) )
     {
         pv_destroy_gdt(v);
@@ -1264,6 +1268,14 @@ arch_do_vcpu_op(
 
     switch ( cmd )
     {
+    case VCPUOP_send_nmi:
+        if ( !guest_handle_is_null(arg) )
+            return -EINVAL;
+
+        if ( !test_and_set_bool(v->arch.nmi_pending) )
+            vcpu_kick(v);
+        break;
+
     case VCPUOP_register_vcpu_time_memory_area:
     {
         struct vcpu_register_time_memory_area area;
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index ce76d6d776..ed86762fa6 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -614,7 +614,7 @@ long arch_do_domctl(
         {
         case XEN_DOMCTL_SENDTRIGGER_NMI:
             ret = 0;
-            if ( !test_and_set_bool(v->nmi_pending) )
+            if ( !test_and_set_bool(v->arch.nmi_pending) )
                 vcpu_kick(v);
             break;
 
diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index c684422b24..dd202aab5a 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -526,10 +526,10 @@ struct hvm_intack hvm_vcpu_has_pending_irq(struct vcpu *v)
      */
     vlapic_sync_pir_to_irr(v);
 
-    if ( unlikely(v->nmi_pending) )
+    if ( unlikely(v->arch.nmi_pending) )
         return hvm_intack_nmi;
 
-    if ( unlikely(v->mce_pending) )
+    if ( unlikely(v->arch.mce_pending) )
         return hvm_intack_mce;
 
     if ( (plat->irq->callback_via_type == HVMIRQ_callback_vector)
@@ -554,11 +554,11 @@ struct hvm_intack hvm_vcpu_ack_pending_irq(
     switch ( intack.source )
     {
     case hvm_intsrc_nmi:
-        if ( !test_and_clear_bool(v->nmi_pending) )
+        if ( !test_and_clear_bool(v->arch.nmi_pending) )
            intack = hvm_intack_none;
         break;
     case hvm_intsrc_mce:
-        if ( !test_and_clear_bool(v->mce_pending) )
+        if ( !test_and_clear_bool(v->arch.mce_pending) )
            intack = hvm_intack_none;
         break;
     case hvm_intsrc_pic:
diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
index 9aeef32a14..b87facb0e0 100644
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -469,7 +469,7 @@ static void vioapic_deliver(struct hvm_vioapic *vioapic, unsigned int pin)
         for_each_vcpu ( d, v )
             if ( vlapic_match_dest(vcpu_vlapic(v), NULL, 0,
                                    dest, dest_mode) &&
-                 !test_and_set_bool(v->nmi_pending) )
+                 !test_and_set_bool(v->arch.nmi_pending) )
                 vcpu_kick(v);
         break;
     }
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index acb9ddf46f..26726a4312 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -355,7 +355,7 @@ static void vlapic_accept_irq(struct vcpu *v, uint32_t icr_low)
         break;
 
     case APIC_DM_NMI:
-        if ( !test_and_set_bool(v->nmi_pending) )
+        if ( !test_and_set_bool(v->arch.nmi_pending) )
         {
             bool_t wake = 0;
             domain_lock(v->domain);
diff --git a/xen/arch/x86/nmi.c b/xen/arch/x86/nmi.c
index 638677a5fe..0390d9b0b4 100644
--- a/xen/arch/x86/nmi.c
+++ b/xen/arch/x86/nmi.c
@@ -599,8 +599,8 @@ static void do_nmi_stats(unsigned char key)
          !(v = hardware_domain->vcpu[0]) )
         return;
 
-    pend = v->nmi_pending;
-    mask = v->async_exception_mask & (1 << VCPU_TRAP_NMI);
+    pend = v->arch.nmi_pending;
+    mask = v->arch.async_exception_mask & (1 << VCPU_TRAP_NMI);
     if ( pend || mask )
         printk("%pv: NMI%s%s\n", v,
                pend ? " pending" : "", mask ? " masked" : "");
diff --git a/xen/arch/x86/oprofile/nmi_int.c b/xen/arch/x86/oprofile/nmi_int.c
index 8f97f7522c..2969db47fc 100644
--- a/xen/arch/x86/oprofile/nmi_int.c
+++ b/xen/arch/x86/oprofile/nmi_int.c
@@ -93,7 +93,7 @@ static int nmi_callback(const struct cpu_user_regs *regs, int cpu)
         send_guest_vcpu_virq(current, VIRQ_XENOPROF);
 
     if ( ovf == 2 )
-        current->nmi_pending = 1;
+        current->arch.nmi_pending = true;
 
     return 1;
 }
diff --git a/xen/arch/x86/pv/callback.c b/xen/arch/x86/pv/callback.c
index 1178efddb6..106c16ed01 100644
--- a/xen/arch/x86/pv/callback.c
+++ b/xen/arch/x86/pv/callback.c
@@ -52,7 +52,7 @@ static int register_guest_nmi_callback(unsigned long address)
      * now.
      */
     if ( curr->vcpu_id == 0 && arch_get_nmi_reason(d) != 0 )
-        curr->nmi_pending = 1;
+        curr->arch.nmi_pending = true;
 
     return 0;
 }
diff --git a/xen/arch/x86/pv/iret.c b/xen/arch/x86/pv/iret.c
index 16b449ff64..9e34b616f9 100644
--- a/xen/arch/x86/pv/iret.c
+++ b/xen/arch/x86/pv/iret.c
@@ -27,15 +27,15 @@ static void async_exception_cleanup(struct vcpu *curr)
 {
     unsigned int trap;
 
-    if ( !curr->async_exception_mask )
+    if ( !curr->arch.async_exception_mask )
         return;
 
-    if ( !(curr->async_exception_mask & (curr->async_exception_mask - 1)) )
-        trap = __scanbit(curr->async_exception_mask, VCPU_TRAP_NONE);
+    if ( !(curr->arch.async_exception_mask & (curr->arch.async_exception_mask - 1)) )
+        trap = __scanbit(curr->arch.async_exception_mask, VCPU_TRAP_NONE);
     else
         for ( trap = VCPU_TRAP_NONE + 1; trap <= VCPU_TRAP_LAST; ++trap )
-            if ( (curr->async_exception_mask ^
-                  curr->async_exception_state(trap).old_mask) == (1u << trap) )
+            if ( (curr->arch.async_exception_mask ^
+                  curr->arch.async_exception_state(trap).old_mask) == (1u << trap) )
                 break;
     if ( unlikely(trap > VCPU_TRAP_LAST) )
     {
@@ -44,7 +44,8 @@ static void async_exception_cleanup(struct vcpu *curr)
     }
 
     /* Restore previous asynchronous exception mask. */
-    curr->async_exception_mask = curr->async_exception_state(trap).old_mask;
+    curr->arch.async_exception_mask =
+        curr->arch.async_exception_state(trap).old_mask;
 }
 
 unsigned long do_iret(void)
diff --git a/xen/arch/x86/pv/traps.c b/xen/arch/x86/pv/traps.c
index 950cf25b4a..d97ebf7890 100644
--- a/xen/arch/x86/pv/traps.c
+++ b/xen/arch/x86/pv/traps.c
@@ -176,7 +176,7 @@ int pv_raise_nmi(struct vcpu *v)
     if ( cmpxchgptr(v_ptr, NULL, v) )
         return -EBUSY;
 
-    if ( !test_and_set_bool(v->nmi_pending) )
+    if ( !test_and_set_bool(v->arch.nmi_pending) )
     {
         /* Not safe to wake up a vcpu here */
         raise_softirq(NMI_SOFTIRQ);
diff --git a/xen/arch/x86/x86_64/asm-offsets.c b/xen/arch/x86/x86_64/asm-offsets.c
index 07d2155bf5..b8e8510439 100644
--- a/xen/arch/x86/x86_64/asm-offsets.c
+++ b/xen/arch/x86/x86_64/asm-offsets.c
@@ -72,11 +72,11 @@ void __dummy__(void)
     OFFSET(VCPU_guest_context_flags, struct vcpu, arch.pv.vgc_flags);
     OFFSET(VCPU_cr3, struct vcpu, arch.cr3);
     OFFSET(VCPU_arch_msrs, struct vcpu, arch.msrs);
-    OFFSET(VCPU_nmi_pending, struct vcpu, nmi_pending);
-    OFFSET(VCPU_mce_pending, struct vcpu, mce_pending);
-    OFFSET(VCPU_nmi_old_mask, struct vcpu, nmi_state.old_mask);
-    OFFSET(VCPU_mce_old_mask, struct vcpu, mce_state.old_mask);
-    OFFSET(VCPU_async_exception_mask, struct vcpu, async_exception_mask);
+    OFFSET(VCPU_nmi_pending, struct vcpu, arch.nmi_pending);
+    OFFSET(VCPU_mce_pending, struct vcpu, arch.mce_pending);
+    OFFSET(VCPU_nmi_old_mask, struct vcpu, arch.nmi_state.old_mask);
+    OFFSET(VCPU_mce_old_mask, struct vcpu, arch.mce_state.old_mask);
+    OFFSET(VCPU_async_exception_mask, struct vcpu, arch.async_exception_mask);
     DEFINE(VCPU_TRAP_NMI, VCPU_TRAP_NMI);
     DEFINE(VCPU_TRAP_MCE, VCPU_TRAP_MCE);
     DEFINE(_VGCF_syscall_disables_events, _VGCF_syscall_disables_events);
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 0ae04d5bb9..6ad458fa6b 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -1199,10 +1199,6 @@ int vcpu_reset(struct vcpu *v)
     v->fpu_initialised = 0;
     v->fpu_dirtied = 0;
     v->is_initialised = 0;
-#ifdef VCPU_TRAP_LAST
-    v->async_exception_mask = 0;
-    memset(v->async_exception_state, 0, sizeof(v->async_exception_state));
-#endif
     if ( v->affinity_broken & VCPU_AFFINITY_OVERRIDE )
         vcpu_temporary_affinity(v, NR_CPUS, VCPU_AFFINITY_OVERRIDE);
     if ( v->affinity_broken & VCPU_AFFINITY_WAIT )
@@ -1511,17 +1507,6 @@ long do_vcpu_op(int cmd, unsigned int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
         break;
     }
 
-#ifdef VCPU_TRAP_NMI
-    case VCPUOP_send_nmi:
-        if ( !guest_handle_is_null(arg) )
-            return -EINVAL;
-
-        if ( !test_and_set_bool(v->nmi_pending) )
-            vcpu_kick(v);
-
-        break;
-#endif
-
     default:
         rc = arch_do_vcpu_op(cmd, v, arg);
         break;
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index 1843c76d1a..105adf96eb 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -19,6 +19,7 @@
 #define is_hvm_pv_evtchn_vcpu(v) (is_hvm_pv_evtchn_domain(v->domain))
 #define is_domain_direct_mapped(d) ((void)(d), 0)
 
+#define VCPU_TRAP_NONE 0
 #define VCPU_TRAP_NMI  1
 #define VCPU_TRAP_MCE  2
 #define VCPU_TRAP_LAST VCPU_TRAP_MCE
@@ -556,6 +557,13 @@ struct arch_vcpu
 
     struct vpmu_struct vpmu;
 
+    struct {
+        bool pending;
+        uint8_t old_mask;
+    } async_exception_state[VCPU_TRAP_LAST];
+#define async_exception_state(t) async_exception_state[(t)-1]
+    uint8_t async_exception_mask;
+
     /* Virtual Machine Extensions */
     union {
         struct pv_vcpu pv;
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 21b5f4cebd..3a4f43098c 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -191,17 +191,6 @@ struct vcpu
     bool             is_urgent;
     /* VCPU must context_switch without scheduling unit. */
     bool             force_context_switch;
-
-#ifdef VCPU_TRAP_LAST
-#define VCPU_TRAP_NONE 0
-    struct {
-        bool pending;
-        uint8_t old_mask;
-    } async_exception_state[VCPU_TRAP_LAST];
-#define async_exception_state(t) async_exception_state[(t)-1]
-    uint8_t async_exception_mask;
-#endif
-
     /* Require shutdown to be deferred for some asynchronous operation? */
     bool             defer_shutdown;
     /* VCPU is paused following shutdown request (d->is_shutting_down)? */
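
[Editor's note: the trap-identification step preserved verbatim in
pv/iret.c above is the subtlest part of this code.  The sketch below (not
Xen code) shows the fast path: if exactly one bit of the mask is set,
mask & (mask - 1) clears to zero and the trap number is simply that bit's
index.  Modelling Xen's __scanbit with __builtin_ctz is this editor's
assumption, valid only for a non-zero mask.]

/*
 * Standalone sketch of async_exception_cleanup()'s single-bit fast path.
 */
#include <stdio.h>

#define VCPU_TRAP_NONE 0
#define VCPU_TRAP_NMI  1
#define VCPU_TRAP_MCE  2
#define VCPU_TRAP_LAST VCPU_TRAP_MCE

static unsigned int identify_trap(unsigned int mask)
{
    if ( mask && !(mask & (mask - 1)) )            /* exactly one bit set */
        return (unsigned int)__builtin_ctz(mask);  /* bit index == trap number */

    /*
     * Otherwise (nested delivery), the real code walks the traps and
     * compares the mask against each saved old_mask to find the newest.
     */
    return VCPU_TRAP_NONE;
}

int main(void)
{
    printf("mask %#x -> trap %u (expect %u)\n",
           1u << VCPU_TRAP_NMI, identify_trap(1u << VCPU_TRAP_NMI),
           VCPU_TRAP_NMI);
    printf("mask %#x -> trap %u (expect %u)\n",
           1u << VCPU_TRAP_MCE, identify_trap(1u << VCPU_TRAP_MCE),
           VCPU_TRAP_MCE);
    return 0;
}

Compiled with gcc or clang, this prints trap 1 for mask 0x2 and trap 2 for
mask 0x4, matching VCPU_TRAP_NMI and VCPU_TRAP_MCE.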