From patchwork Mon Feb 17 11:17:40 2020
X-Patchwork-Submitter: Andrew Cooper
X-Patchwork-Id: 11386137
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Date: Mon, 17 Feb 2020 11:17:40 +0000
Message-ID: <20200217111740.7298-4-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20200217111740.7298-1-andrew.cooper3@citrix.com>
References: <20200217111740.7298-1-andrew.cooper3@citrix.com>
Subject: [Xen-devel] [PATCH 3/3] xen/x86: Rename and simplify async_event_* infrastructure
Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

The name async_exception isn't appropriate.  NMI isn't an exception at all,
and while MCE is classified as an exception (i.e. can occur at any point),
the mechanics of injecting it behave like other external interrupts.  Rename
to async_event_*, which is also a little shorter.

Drop VCPU_TRAP_NONE and renumber VCPU_TRAP_* to be 0-based rather than
1-based, and remove async_exception_state(), which hides the fixup
internally.  This shifts the bits used in async_event_mask along by one, but
doesn't alter the overall logic.

Drop the {nmi,mce}_{state,pending} defines, which obfuscate the data layout.
Instead, use an anonymous union to overlay names on the async_event[] array,
to retain the easy-to-follow v->arch.{nmi,mce}_pending logic.

No functional change.
Signed-off-by: Andrew Cooper
---
CC: Jan Beulich
CC: Wei Liu
CC: Roger Pau Monné
---
 xen/arch/x86/domain.c              |  5 ++---
 xen/arch/x86/nmi.c                 |  2 +-
 xen/arch/x86/pv/iret.c             | 15 +++++++--------
 xen/arch/x86/x86_64/asm-offsets.c  |  6 +++---
 xen/arch/x86/x86_64/compat/entry.S | 12 ++++++------
 xen/arch/x86/x86_64/entry.S        | 12 ++++++------
 xen/include/asm-x86/domain.h       | 33 ++++++++++++++++-----------------
 7 files changed, 41 insertions(+), 44 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index fe63c23676..7ee6853522 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1246,9 +1246,8 @@ int arch_initialise_vcpu(struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
 
 int arch_vcpu_reset(struct vcpu *v)
 {
-    v->arch.async_exception_mask = 0;
-    memset(v->arch.async_exception_state, 0,
-           sizeof(v->arch.async_exception_state));
+    v->arch.async_event_mask = 0;
+    memset(v->arch.async_event, 0, sizeof(v->arch.async_event));
 
     if ( is_pv_vcpu(v) )
     {
diff --git a/xen/arch/x86/nmi.c b/xen/arch/x86/nmi.c
index 0390d9b0b4..44507cd86b 100644
--- a/xen/arch/x86/nmi.c
+++ b/xen/arch/x86/nmi.c
@@ -600,7 +600,7 @@ static void do_nmi_stats(unsigned char key)
         return;
 
     pend = v->arch.nmi_pending;
-    mask = v->arch.async_exception_mask & (1 << VCPU_TRAP_NMI);
+    mask = v->arch.async_event_mask & (1 << VCPU_TRAP_NMI);
     if ( pend || mask )
         printk("%pv: NMI%s%s\n", v,
                pend ? " pending" : "", mask ? " masked" : "");
diff --git a/xen/arch/x86/pv/iret.c b/xen/arch/x86/pv/iret.c
index 9e34b616f9..27bb39f162 100644
--- a/xen/arch/x86/pv/iret.c
+++ b/xen/arch/x86/pv/iret.c
@@ -27,15 +27,15 @@ static void async_exception_cleanup(struct vcpu *curr)
 {
     unsigned int trap;
 
-    if ( !curr->arch.async_exception_mask )
+    if ( !curr->arch.async_event_mask )
         return;
 
-    if ( !(curr->arch.async_exception_mask & (curr->arch.async_exception_mask - 1)) )
-        trap = __scanbit(curr->arch.async_exception_mask, VCPU_TRAP_NONE);
+    if ( !(curr->arch.async_event_mask & (curr->arch.async_event_mask - 1)) )
+        trap = __scanbit(curr->arch.async_event_mask, 0);
     else
-        for ( trap = VCPU_TRAP_NONE + 1; trap <= VCPU_TRAP_LAST; ++trap )
-            if ( (curr->arch.async_exception_mask ^
-                  curr->arch.async_exception_state(trap).old_mask) == (1u << trap) )
+        for ( trap = 0; trap <= VCPU_TRAP_LAST; ++trap )
+            if ( (curr->arch.async_event_mask ^
+                  curr->arch.async_event[trap].old_mask) == (1u << trap) )
                 break;
     if ( unlikely(trap > VCPU_TRAP_LAST) )
     {
@@ -44,8 +44,7 @@ static void async_exception_cleanup(struct vcpu *curr)
     }
 
     /* Restore previous asynchronous exception mask. */
-    curr->arch.async_exception_mask =
-        curr->arch.async_exception_state(trap).old_mask;
+    curr->arch.async_event_mask = curr->arch.async_event[trap].old_mask;
 }
 
 unsigned long do_iret(void)
diff --git a/xen/arch/x86/x86_64/asm-offsets.c b/xen/arch/x86/x86_64/asm-offsets.c
index b8e8510439..59b62649e2 100644
--- a/xen/arch/x86/x86_64/asm-offsets.c
+++ b/xen/arch/x86/x86_64/asm-offsets.c
@@ -74,9 +74,9 @@ void __dummy__(void)
     OFFSET(VCPU_arch_msrs, struct vcpu, arch.msrs);
     OFFSET(VCPU_nmi_pending, struct vcpu, arch.nmi_pending);
     OFFSET(VCPU_mce_pending, struct vcpu, arch.mce_pending);
-    OFFSET(VCPU_nmi_old_mask, struct vcpu, arch.nmi_state.old_mask);
-    OFFSET(VCPU_mce_old_mask, struct vcpu, arch.mce_state.old_mask);
-    OFFSET(VCPU_async_exception_mask, struct vcpu, arch.async_exception_mask);
+    OFFSET(VCPU_nmi_old_mask, struct vcpu, arch.nmi_old_mask);
+    OFFSET(VCPU_mce_old_mask, struct vcpu, arch.mce_old_mask);
+    OFFSET(VCPU_async_event_mask, struct vcpu, arch.async_event_mask);
     DEFINE(VCPU_TRAP_NMI, VCPU_TRAP_NMI);
     DEFINE(VCPU_TRAP_MCE, VCPU_TRAP_MCE);
     DEFINE(_VGCF_syscall_disables_events, _VGCF_syscall_disables_events);
diff --git a/xen/arch/x86/x86_64/compat/entry.S b/xen/arch/x86/x86_64/compat/entry.S
index 3cd375bd48..17b1153f79 100644
--- a/xen/arch/x86/x86_64/compat/entry.S
+++ b/xen/arch/x86/x86_64/compat/entry.S
@@ -84,33 +84,33 @@ compat_process_softirqs:
         ALIGN
 /* %rbx: struct vcpu */
 compat_process_mce:
-        testb $1 << VCPU_TRAP_MCE,VCPU_async_exception_mask(%rbx)
+        testb $1 << VCPU_TRAP_MCE, VCPU_async_event_mask(%rbx)
         jnz  .Lcompat_test_guest_nmi
         sti
         movb $0, VCPU_mce_pending(%rbx)
         call set_guest_machinecheck_trapbounce
         test %al, %al
         jz   compat_test_all_events
-        movzbl VCPU_async_exception_mask(%rbx),%edx # save mask for the
+        movzbl VCPU_async_event_mask(%rbx), %edx # save mask for the
         movb %dl,VCPU_mce_old_mask(%rbx)            # iret hypercall
         orl  $1 << VCPU_TRAP_MCE,%edx
-        movb %dl,VCPU_async_exception_mask(%rbx)
+        movb %dl, VCPU_async_event_mask(%rbx)
         jmp  compat_process_trap
 
         ALIGN
 /* %rbx: struct vcpu */
 compat_process_nmi:
-        testb $1 << VCPU_TRAP_NMI,VCPU_async_exception_mask(%rbx)
+        testb $1 << VCPU_TRAP_NMI, VCPU_async_event_mask(%rbx)
         jnz  compat_test_guest_events
         sti
         movb $0, VCPU_nmi_pending(%rbx)
         call set_guest_nmi_trapbounce
         test %al, %al
         jz   compat_test_all_events
-        movzbl VCPU_async_exception_mask(%rbx),%edx # save mask for the
+        movzbl VCPU_async_event_mask(%rbx), %edx # save mask for the
         movb %dl,VCPU_nmi_old_mask(%rbx)            # iret hypercall
         orl  $1 << VCPU_TRAP_NMI,%edx
-        movb %dl,VCPU_async_exception_mask(%rbx)
+        movb %dl, VCPU_async_event_mask(%rbx)
         /* FALLTHROUGH */
 compat_process_trap:
         leaq  VCPU_trap_bounce(%rbx),%rdx
diff --git a/xen/arch/x86/x86_64/entry.S b/xen/arch/x86/x86_64/entry.S
index 997c481ecb..7832ff6fda 100644
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -96,33 +96,33 @@ process_softirqs:
         ALIGN
 /* %rbx: struct vcpu */
 process_mce:
-        testb $1 << VCPU_TRAP_MCE, VCPU_async_exception_mask(%rbx)
+        testb $1 << VCPU_TRAP_MCE, VCPU_async_event_mask(%rbx)
         jnz  .Ltest_guest_nmi
         sti
         movb $0, VCPU_mce_pending(%rbx)
         call set_guest_machinecheck_trapbounce
         test %al, %al
         jz   test_all_events
-        movzbl VCPU_async_exception_mask(%rbx), %edx # save mask for the
+        movzbl VCPU_async_event_mask(%rbx), %edx # save mask for the
         movb %dl, VCPU_mce_old_mask(%rbx)            # iret hypercall
         orl  $1 << VCPU_TRAP_MCE, %edx
-        movb %dl, VCPU_async_exception_mask(%rbx)
+        movb %dl, VCPU_async_event_mask(%rbx)
         jmp  process_trap
 
         ALIGN
 /* %rbx: struct vcpu */
 process_nmi:
-        testb $1 << VCPU_TRAP_NMI, VCPU_async_exception_mask(%rbx)
+        testb $1 << VCPU_TRAP_NMI, VCPU_async_event_mask(%rbx)
         jnz  test_guest_events
         sti
         movb $0, VCPU_nmi_pending(%rbx)
         call set_guest_nmi_trapbounce
         test %al, %al
         jz   test_all_events
-        movzbl VCPU_async_exception_mask(%rbx), %edx # save mask for the
+        movzbl VCPU_async_event_mask(%rbx), %edx # save mask for the
         movb %dl, VCPU_nmi_old_mask(%rbx)            # iret hypercall
         orl  $1 << VCPU_TRAP_NMI, %edx
-        movb %dl, VCPU_async_exception_mask(%rbx)
+        movb %dl, VCPU_async_event_mask(%rbx)
         /* FALLTHROUGH */
 process_trap:
         leaq  VCPU_trap_bounce(%rbx), %rdx
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index 105adf96eb..dd056360b4 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -19,17 +19,6 @@
 #define is_hvm_pv_evtchn_vcpu(v) (is_hvm_pv_evtchn_domain(v->domain))
 #define is_domain_direct_mapped(d) ((void)(d), 0)
 
-#define VCPU_TRAP_NONE 0
-#define VCPU_TRAP_NMI  1
-#define VCPU_TRAP_MCE  2
-#define VCPU_TRAP_LAST VCPU_TRAP_MCE
-
-#define nmi_state async_exception_state(VCPU_TRAP_NMI)
-#define mce_state async_exception_state(VCPU_TRAP_MCE)
-
-#define nmi_pending nmi_state.pending
-#define mce_pending mce_state.pending
-
 struct trap_bounce {
     uint32_t error_code;
     uint8_t flags; /* TBF_ */
@@ -557,12 +546,22 @@ struct arch_vcpu
 
     struct vpmu_struct vpmu;
 
-    struct {
-        bool pending;
-        uint8_t old_mask;
-    } async_exception_state[VCPU_TRAP_LAST];
-#define async_exception_state(t) async_exception_state[(t)-1]
-    uint8_t async_exception_mask;
+    union {
+#define VCPU_TRAP_NMI  0
+#define VCPU_TRAP_MCE  1
+#define VCPU_TRAP_LAST VCPU_TRAP_MCE
+        struct {
+            bool pending;
+            uint8_t old_mask;
+        } async_event[VCPU_TRAP_LAST + 1];
+        struct {
+            bool nmi_pending;
+            uint8_t nmi_old_mask;
+            bool mce_pending;
+            uint8_t mce_old_mask;
+        };
+    };
+    uint8_t async_event_mask;
 
     /* Virtual Machine Extensions */
     union {