From patchwork Tue Feb 28 13:39:28 2017
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 9595645
Message-Id: <58B58BA0020000780013E43E@prv-mh.provo.novell.com>
Date: Tue, 28 Feb 2017 06:39:28 -0700
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
References: <58B588D0020000780013E3C3@prv-mh.provo.novell.com>
In-Reply-To: <58B588D0020000780013E3C3@prv-mh.provo.novell.com>
Cc: George Dunlap, Andrew Cooper, Kevin Tian, Jun Nakajima
Subject: [Xen-devel] [PATCH 8/8] x86/VMX: switch away from temporary 32-bit register names

x86/VMX: switch away from temporary 32-bit register names

Signed-off-by: Jan Beulich
Acked-by: Kevin Tian
---
--- a/xen/arch/x86/hvm/vmx/realmode.c
+++ b/xen/arch/x86/hvm/vmx/realmode.c
@@ -72,7 +72,7 @@ static void realmode_deliver_exception(
     /* We can't test hvmemul_ctxt->ctxt.sp_size: it may not be initialised. */
     if ( hvmemul_ctxt->seg_reg[x86_seg_ss].attr.fields.db )
-        pstk = regs->_esp -= 6;
+        pstk = regs->esp -= 6;
     else
         pstk = regs->sp -= 6;
@@ -82,7 +82,7 @@ static void realmode_deliver_exception(
     csr->sel = cs_eip >> 16;
     csr->base = (uint32_t)csr->sel << 4;
     regs->ip = (uint16_t)cs_eip;
-    regs->_eflags &= ~(X86_EFLAGS_TF | X86_EFLAGS_IF | X86_EFLAGS_RF);
+    regs->eflags &= ~(X86_EFLAGS_TF | X86_EFLAGS_IF | X86_EFLAGS_RF);

     /* Exception delivery clears STI and MOV-SS blocking. */
     if ( hvmemul_ctxt->intr_shadow &
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -607,7 +607,7 @@ int vmx_guest_x86_mode(struct vcpu *v)
     if ( unlikely(!(v->arch.hvm_vcpu.guest_cr[0] & X86_CR0_PE)) )
         return 0;
-    if ( unlikely(guest_cpu_user_regs()->_eflags & X86_EFLAGS_VM) )
+    if ( unlikely(guest_cpu_user_regs()->eflags & X86_EFLAGS_VM) )
         return 1;
     __vmread(GUEST_CS_AR_BYTES, &cs_ar_bytes);
     if ( hvm_long_mode_enabled(v) &&
@@ -1753,7 +1753,7 @@ static void vmx_inject_event(const struc
     switch ( _event.vector | -(_event.type == X86_EVENTTYPE_SW_INTERRUPT) )
     {
     case TRAP_debug:
-        if ( guest_cpu_user_regs()->_eflags & X86_EFLAGS_TF )
+        if ( guest_cpu_user_regs()->eflags & X86_EFLAGS_TF )
         {
             __restore_debug_registers(curr);
             write_debugreg(6, read_debugreg(6) | DR_STEP);
@@ -1853,7 +1853,7 @@ static void vmx_set_info_guest(struct vc
      */
     __vmread(GUEST_INTERRUPTIBILITY_INFO, &intr_shadow);
     if ( v->domain->debugger_attached &&
-         (v->arch.user_regs._eflags & X86_EFLAGS_TF) &&
+         (v->arch.user_regs.eflags & X86_EFLAGS_TF) &&
          (intr_shadow & VMX_INTR_SHADOW_STI) )
     {
         intr_shadow &= ~VMX_INTR_SHADOW_STI;
@@ -2092,8 +2092,8 @@ static int vmx_vcpu_emulate_vmfunc(const
     struct vcpu *curr = current;
     if ( !cpu_has_vmx_vmfunc && altp2m_active(curr->domain) &&
-         regs->_eax == 0 &&
-         p2m_switch_vcpu_altp2m_by_id(curr, regs->_ecx) )
+         regs->eax == 0 &&
+         p2m_switch_vcpu_altp2m_by_id(curr, regs->ecx) )
         rc = X86EMUL_OKAY;

     return rc;
@@ -2416,7 +2416,7 @@ void update_guest_eip(void)
     unsigned long x;
     regs->rip += get_instruction_length(); /* Safe: callers audited */
-    regs->_eflags &= ~X86_EFLAGS_RF;
+    regs->eflags &= ~X86_EFLAGS_RF;

     __vmread(GUEST_INTERRUPTIBILITY_INFO, &x);
     if ( x & (VMX_INTR_SHADOW_STI | VMX_INTR_SHADOW_MOV_SS) )
@@ -2425,7 +2425,7 @@ void update_guest_eip(void)
         __vmwrite(GUEST_INTERRUPTIBILITY_INFO, x);
     }
-    if ( regs->_eflags & X86_EFLAGS_TF )
+    if ( regs->eflags & X86_EFLAGS_TF )
         hvm_inject_hw_exception(TRAP_debug, X86_EVENT_NO_EC);
 }
@@ -2446,7 +2446,7 @@ static void vmx_fpu_dirty_intercept(void
 static int vmx_do_cpuid(struct cpu_user_regs *regs)
 {
     struct vcpu *curr = current;
-    uint32_t leaf = regs->_eax, subleaf = regs->_ecx;
+    uint32_t leaf = regs->eax, subleaf = regs->ecx;
     struct cpuid_leaf res;
     if ( hvm_check_cpuid_faulting(current) )
@@ -3204,8 +3204,8 @@ void vmx_enter_realmode(struct cpu_user_
     /* Adjust RFLAGS to enter virtual 8086 mode with IOPL == 3. Since
      * we have CR4.VME == 1 and our own TSS with an empty interrupt
      * redirection bitmap, all software INTs will be handled by vm86 */
-    v->arch.hvm_vmx.vm86_saved_eflags = regs->_eflags;
-    regs->_eflags |= (X86_EFLAGS_VM | X86_EFLAGS_IOPL);
+    v->arch.hvm_vmx.vm86_saved_eflags = regs->eflags;
+    regs->eflags |= (X86_EFLAGS_VM | X86_EFLAGS_IOPL);
 }

 static int vmx_handle_eoi_write(void)
@@ -3347,10 +3347,10 @@ void vmx_vmexit_handler(struct cpu_user_
     if ( hvm_long_mode_enabled(v) )
         HVMTRACE_ND(VMEXIT64, 0, 1/*cycles*/, 3, exit_reason,
-                    regs->_eip, regs->rip >> 32, 0, 0, 0);
+                    regs->eip, regs->rip >> 32, 0, 0, 0);
     else
         HVMTRACE_ND(VMEXIT, 0, 1/*cycles*/, 2, exit_reason,
-                    regs->_eip, 0, 0, 0, 0);
+                    regs->eip, 0, 0, 0, 0);

     perfc_incra(vmexits, exit_reason);
@@ -3435,8 +3435,8 @@ void vmx_vmexit_handler(struct cpu_user_
     if ( v->arch.hvm_vmx.vmx_realmode )
     {
         /* Put RFLAGS back the way the guest wants it */
-        regs->_eflags &= ~(X86_EFLAGS_VM | X86_EFLAGS_IOPL);
-        regs->_eflags |= (v->arch.hvm_vmx.vm86_saved_eflags & X86_EFLAGS_IOPL);
+        regs->eflags &= ~(X86_EFLAGS_VM | X86_EFLAGS_IOPL);
+        regs->eflags |= (v->arch.hvm_vmx.vm86_saved_eflags & X86_EFLAGS_IOPL);

         /* Unless this exit was for an interrupt, we've hit something
          * vm86 can't handle. Try again, using the emulator. */
@@ -3681,7 +3681,7 @@ void vmx_vmexit_handler(struct cpu_user_
     }
     case EXIT_REASON_HLT:
         update_guest_eip(); /* Safe: HLT */
-        hvm_hlt(regs->_eflags);
+        hvm_hlt(regs->eflags);
         break;
     case EXIT_REASON_INVLPG:
         update_guest_eip(); /* Safe: INVLPG */
@@ -3698,7 +3698,7 @@ void vmx_vmexit_handler(struct cpu_user_
         break;

     case EXIT_REASON_VMCALL:
-        HVMTRACE_1D(VMMCALL, regs->_eax);
+        HVMTRACE_1D(VMMCALL, regs->eax);
         if ( hvm_hypercall(regs) == HVM_HCALL_completed )
             update_guest_eip(); /* Safe: VMCALL */
@@ -3722,7 +3722,7 @@ void vmx_vmexit_handler(struct cpu_user_
     {
         uint64_t msr_content = 0;
-        switch ( hvm_msr_read_intercept(regs->_ecx, &msr_content) )
+        switch ( hvm_msr_read_intercept(regs->ecx, &msr_content) )
         {
         case X86EMUL_OKAY:
             msr_split(regs, msr_content);
@@ -3731,7 +3731,7 @@ void vmx_vmexit_handler(struct cpu_user_
     }

     case EXIT_REASON_MSR_WRITE:
-        switch ( hvm_msr_write_intercept(regs->_ecx, msr_fold(regs), 1) )
+        switch ( hvm_msr_write_intercept(regs->ecx, msr_fold(regs), 1) )
         {
         case X86EMUL_OKAY:
             update_guest_eip(); /* Safe: WRMSR */
@@ -3894,7 +3894,7 @@ void vmx_vmexit_handler(struct cpu_user_
         break;

     case EXIT_REASON_XSETBV:
-        if ( hvm_handle_xsetbv(regs->_ecx, msr_fold(regs)) == 0 )
+        if ( hvm_handle_xsetbv(regs->ecx, msr_fold(regs)) == 0 )
             update_guest_eip(); /* Safe: XSETBV */
         break;
@@ -3952,7 +3952,7 @@ out:
      */
     mode = vmx_guest_x86_mode(v);
     if ( mode == 8 ? !is_canonical_address(regs->rip)
-                   : regs->rip != regs->_eip )
+                   : regs->rip != regs->eip )
     {
         gprintk(XENLOG_WARNING, "Bad rIP %lx for mode %u\n", regs->rip, mode);
@@ -3966,7 +3966,7 @@ out:
             regs->rip = (long)(regs->rip << (64 - VADDR_BITS)) >> (64 - VADDR_BITS);
         else
-            regs->rip = regs->_eip;
+            regs->rip = regs->eip;
     }
     else
         domain_crash(v->domain);
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -462,23 +462,23 @@ gp_fault:
 static void vmsucceed(struct cpu_user_regs *regs)
 {
-    regs->_eflags &= ~X86_EFLAGS_ARITH_MASK;
+    regs->eflags &= ~X86_EFLAGS_ARITH_MASK;
 }

 static void vmfail_valid(struct cpu_user_regs *regs, enum vmx_insn_errno errno)
 {
     struct vcpu *v = current;
-    unsigned int eflags = regs->_eflags;
+    unsigned int eflags = regs->eflags;

-    regs->_eflags = (eflags & ~X86_EFLAGS_ARITH_MASK) | X86_EFLAGS_ZF;
+    regs->eflags = (eflags & ~X86_EFLAGS_ARITH_MASK) | X86_EFLAGS_ZF;
     set_vvmcs(v, VM_INSTRUCTION_ERROR, errno);
 }

 static void vmfail_invalid(struct cpu_user_regs *regs)
 {
-    unsigned int eflags = regs->_eflags;
+    unsigned int eflags = regs->eflags;

-    regs->_eflags = (eflags & ~X86_EFLAGS_ARITH_MASK) | X86_EFLAGS_CF;
+    regs->eflags = (eflags & ~X86_EFLAGS_ARITH_MASK) | X86_EFLAGS_CF;
 }

 static void vmfail(struct cpu_user_regs *regs, enum vmx_insn_errno errno)
@@ -2187,7 +2187,7 @@ int nvmx_n2_vmexit_handler(struct cpu_us
         ctrl = __n2_exec_control(v);
         if ( ctrl & CPU_BASED_ACTIVATE_MSR_BITMAP )
         {
-            status = vmx_check_msr_bitmap(nvmx->msrbitmap, regs->_ecx,
+            status = vmx_check_msr_bitmap(nvmx->msrbitmap, regs->ecx,
                          !!(exit_reason == EXIT_REASON_MSR_WRITE));
             if ( status )
                 nvcpu->nv_vmexit_pending = 1;
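
For background on the rename itself: the underscore-prefixed fields replaced
throughout this patch (_eax, _ecx, _eip, _esp, _eflags) were the temporary
spellings of the 32-bit views of the guest registers, and this final patch of
the series switches the VMX code over to the plain names. The snippet below is
a minimal, hypothetical sketch of how a 32-bit alias of a 64-bit register field
can be expressed within one structure; it is not Xen's actual cpu_user_regs
definition, and it assumes a little-endian host.

#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative only -- NOT the real Xen cpu_user_regs layout.  It merely
 * shows how a 32-bit "view" of a 64-bit register image can be exposed under
 * a second name via an anonymous union (little-endian assumed, so the
 * 32-bit member overlays the low half of the 64-bit one).
 */
struct demo_user_regs {
    union {
        uint64_t rflags;   /* full 64-bit register image */
        uint32_t eflags;   /* low 32 bits; transitional code spelled this _eflags */
    };
};

int main(void)
{
    struct demo_user_regs regs = { .rflags = 0x1000000000202ULL };

    /* Reading through the 32-bit alias yields only the low half. */
    printf("rflags=%#llx eflags=%#x\n",
           (unsigned long long)regs.rflags, (unsigned)regs.eflags);
    return 0;
}

Compiled with a C11 compiler on a little-endian machine this prints
rflags=0x1000000000202 eflags=0x202, i.e. the 32-bit name is simply a view of
the low half of the 64-bit register image -- the relationship the _e.../e...
field pairs encode.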