From patchwork Tue Feb 28 13:35:39 2017
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 9595627
Message-Id: <58B58ABB020000780013E3E2@prv-mh.provo.novell.com>
Date: Tue, 28 Feb 2017 06:35:39 -0700
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>, "Jan Beulich"
References: <58B588D0020000780013E3C3@prv-mh.provo.novell.com>
In-Reply-To: <58B588D0020000780013E3C3@prv-mh.provo.novell.com>
Cc: George Dunlap, Andrew Cooper
Subject: [Xen-devel] [PATCH 2/8] x86: switch away from temporary 32-bit register names

Signed-off-by: Jan Beulich

x86: switch away from temporary 32-bit register names

Signed-off-by: Jan Beulich

--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1015,11 +1015,11 @@ int arch_set_info_guest(
     init_int80_direct_trap(v);
 
     /* IOPL privileges are virtualised. */
-    v->arch.pv_vcpu.iopl = v->arch.user_regs._eflags & X86_EFLAGS_IOPL;
-    v->arch.user_regs._eflags &= ~X86_EFLAGS_IOPL;
+    v->arch.pv_vcpu.iopl = v->arch.user_regs.eflags & X86_EFLAGS_IOPL;
+    v->arch.user_regs.eflags &= ~X86_EFLAGS_IOPL;
 
     /* Ensure real hardware interrupts are enabled. */
-    v->arch.user_regs._eflags |= X86_EFLAGS_IF;
+    v->arch.user_regs.eflags |= X86_EFLAGS_IF;
 
     if ( !v->is_initialised )
     {
@@ -1776,14 +1776,14 @@ static void load_segments(struct vcpu *n
         if ( !ring_1(regs) )
         {
            ret = put_user(regs->ss, esp-1);
-           ret |= put_user(regs->_esp, esp-2);
+           ret |= put_user(regs->esp, esp-2);
            esp -= 2;
         }
 
         if ( ret |
             put_user(rflags, esp-1) |
             put_user(cs_and_mask, esp-2) |
-            put_user(regs->_eip, esp-3) |
+            put_user(regs->eip, esp-3) |
             put_user(uregs->gs, esp-4) |
             put_user(uregs->fs, esp-5) |
             put_user(uregs->es, esp-6) |
@@ -1798,12 +1798,12 @@ static void load_segments(struct vcpu *n
         vcpu_info(n, evtchn_upcall_mask) = 1;
 
         regs->entry_vector |= TRAP_syscall;
-        regs->_eflags &= ~(X86_EFLAGS_VM|X86_EFLAGS_RF|X86_EFLAGS_NT|
+        regs->eflags &= ~(X86_EFLAGS_VM|X86_EFLAGS_RF|X86_EFLAGS_NT|
                            X86_EFLAGS_IOPL|X86_EFLAGS_TF);
         regs->ss = FLAT_COMPAT_KERNEL_SS;
-        regs->_esp = (unsigned long)(esp-7);
+        regs->esp = (unsigned long)(esp-7);
         regs->cs = FLAT_COMPAT_KERNEL_CS;
-        regs->_eip = pv->failsafe_callback_eip;
+        regs->eip = pv->failsafe_callback_eip;
         return;
     }
--- a/xen/arch/x86/domain_build.c
+++ b/xen/arch/x86/domain_build.c
@@ -1667,7 +1667,7 @@ int __init construct_dom0(
     regs->rip = parms.virt_entry;
     regs->rsp = vstack_end;
     regs->rsi = vstartinfo_start;
-    regs->_eflags = X86_EFLAGS_IF;
+    regs->eflags = X86_EFLAGS_IF;
 
 #ifdef CONFIG_SHADOW_PAGING
     if ( opt_dom0_shadow )
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -1587,8 +1587,8 @@ void arch_get_info_guest(struct vcpu *v,
     }
 
     /* IOPL privileges are virtualised: merge back into returned eflags. */
-    BUG_ON((c(user_regs._eflags) & X86_EFLAGS_IOPL) != 0);
-    c(user_regs._eflags |= v->arch.pv_vcpu.iopl);
+    BUG_ON((c(user_regs.eflags) & X86_EFLAGS_IOPL) != 0);
+    c(user_regs.eflags |= v->arch.pv_vcpu.iopl);
 
     if ( !compat )
     {
--- a/xen/arch/x86/gdbstub.c
+++ b/xen/arch/x86/gdbstub.c
@@ -68,14 +68,14 @@ gdb_arch_resume(struct cpu_user_regs *re
     if ( addr != -1UL )
         regs->rip = addr;
 
-    regs->_eflags &= ~X86_EFLAGS_TF;
+    regs->eflags &= ~X86_EFLAGS_TF;
 
     /* Set eflags.RF to ensure we do not re-enter. */
-    regs->_eflags |= X86_EFLAGS_RF;
+    regs->eflags |= X86_EFLAGS_RF;
 
     /* Set the trap flag if we are single stepping. */
    if ( type == GDB_STEP )
-        regs->_eflags |= X86_EFLAGS_TF;
+        regs->eflags |= X86_EFLAGS_TF;
 }
 
 /*
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -626,7 +626,7 @@ void fatal_trap(const struct cpu_user_re
     panic("FATAL TRAP: vector = %d (%s)\n"
           "[error_code=%04x] %s",
           trapnr, trapstr(trapnr), regs->error_code,
-          (regs->_eflags & X86_EFLAGS_IF) ? "" : ", IN INTERRUPT CONTEXT");
+          (regs->eflags & X86_EFLAGS_IF) ? "" : ", IN INTERRUPT CONTEXT");
 }
 
 void pv_inject_event(const struct x86_event *event)
@@ -703,8 +703,8 @@ static inline void do_guest_trap(unsigne
 static void instruction_done(struct cpu_user_regs *regs, unsigned long rip)
 {
     regs->rip = rip;
-    regs->_eflags &= ~X86_EFLAGS_RF;
-    if ( regs->_eflags & X86_EFLAGS_TF )
+    regs->eflags &= ~X86_EFLAGS_RF;
+    if ( regs->eflags & X86_EFLAGS_TF )
     {
         current->arch.debugreg[6] |= DR_STEP | DR_STATUS_RESERVED_ONE;
         do_guest_trap(TRAP_debug, regs);
@@ -1070,7 +1070,7 @@ static int emulate_forced_invalid_op(str
     eip += sizeof(instr);
 
-    guest_cpuid(current, regs->_eax, regs->_ecx, &res);
+    guest_cpuid(current, regs->eax, regs->ecx, &res);
 
     regs->rax = res.a;
     regs->rbx = res.b;
@@ -1395,7 +1395,7 @@ leaf:
      *   - Page fault in kernel mode
      */
     if ( (cr4 & X86_CR4_SMAP) && !(error_code & PFEC_user_mode) &&
-         (((regs->cs & 3) == 3) || !(regs->_eflags & X86_EFLAGS_AC)) )
+         (((regs->cs & 3) == 3) || !(regs->eflags & X86_EFLAGS_AC)) )
         return smap_fault;
     }
@@ -1425,7 +1425,7 @@ static int fixup_page_fault(unsigned lon
     struct domain *d = v->domain;
 
     /* No fixups in interrupt context or when interrupts are disabled. */
-    if ( in_irq() || !(regs->_eflags & X86_EFLAGS_IF) )
+    if ( in_irq() || !(regs->eflags & X86_EFLAGS_IF) )
         return 0;
 
     if ( !(regs->error_code & PFEC_page_present) &&
@@ -2290,7 +2290,7 @@ static int priv_op_rep_ins(uint16_t port
             break;
 
         /* x86_emulate() clips the repetition count to ensure we don't wrap. */
-        if ( unlikely(ctxt->regs->_eflags & X86_EFLAGS_DF) )
+        if ( unlikely(ctxt->regs->eflags & X86_EFLAGS_DF) )
             offset -= bytes_per_rep;
         else
             offset += bytes_per_rep;
@@ -2358,7 +2358,7 @@ static int priv_op_rep_outs(enum x86_seg
             break;
 
         /* x86_emulate() clips the repetition count to ensure we don't wrap. */
-        if ( unlikely(ctxt->regs->_eflags & X86_EFLAGS_DF) )
+        if ( unlikely(ctxt->regs->eflags & X86_EFLAGS_DF) )
             offset -= bytes_per_rep;
         else
             offset += bytes_per_rep;
@@ -3004,14 +3004,14 @@ static int emulate_privileged_op(struct
         return 0;
 
     /* Mirror virtualized state into EFLAGS. */
-    ASSERT(regs->_eflags & X86_EFLAGS_IF);
+    ASSERT(regs->eflags & X86_EFLAGS_IF);
     if ( vcpu_info(curr, evtchn_upcall_mask) )
-        regs->_eflags &= ~X86_EFLAGS_IF;
+        regs->eflags &= ~X86_EFLAGS_IF;
     else
-        regs->_eflags |= X86_EFLAGS_IF;
-    ASSERT(!(regs->_eflags & X86_EFLAGS_IOPL));
-    regs->_eflags |= curr->arch.pv_vcpu.iopl;
-    eflags = regs->_eflags;
+        regs->eflags |= X86_EFLAGS_IF;
+    ASSERT(!(regs->eflags & X86_EFLAGS_IOPL));
+    regs->eflags |= curr->arch.pv_vcpu.iopl;
+    eflags = regs->eflags;
 
     ctxt.ctxt.addr_size = ar & _SEGMENT_L ? 64 : ar & _SEGMENT_DB ? 32 : 16;
     /* Leave zero in ctxt.ctxt.sp_size, as it's not needed. */
@@ -3025,10 +3025,10 @@ static int emulate_privileged_op(struct
      * Nothing we allow to be emulated can change anything other than the
      * arithmetic bits, and the resume flag.
      */
-    ASSERT(!((regs->_eflags ^ eflags) &
+    ASSERT(!((regs->eflags ^ eflags) &
              ~(X86_EFLAGS_RF | X86_EFLAGS_ARITH_MASK)));
-    regs->_eflags |= X86_EFLAGS_IF;
-    regs->_eflags &= ~X86_EFLAGS_IOPL;
+    regs->eflags |= X86_EFLAGS_IF;
+    regs->eflags &= ~X86_EFLAGS_IOPL;
 
     /* More strict than x86_emulate_wrapper(). */
     ASSERT(ctxt.ctxt.event_pending == (rc == X86EMUL_EXCEPTION));
@@ -3348,7 +3348,8 @@ static void emulate_gate_op(struct cpu_u
                  !(ar & _SEGMENT_WR) ||
                  !check_stack_limit(ar, limit, esp + nparm * 4, nparm * 4) )
                 return do_guest_trap(TRAP_gp_fault, regs);
-            ustkp = (unsigned int *)(unsigned long)((unsigned int)base + regs->_esp + nparm * 4);
+            ustkp = (unsigned int *)(unsigned long)
+                    ((unsigned int)base + regs->esp + nparm * 4);
             if ( !compat_access_ok(ustkp - nparm, nparm * 4) )
             {
                 do_guest_trap(TRAP_gp_fault, regs);
@@ -3728,20 +3729,20 @@ void do_debug(struct cpu_user_regs *regs
     if ( !guest_mode(regs) )
     {
-        if ( regs->_eflags & X86_EFLAGS_TF )
+        if ( regs->eflags & X86_EFLAGS_TF )
         {
            /* In SYSENTER entry path we can't zap TF until EFLAGS is saved. */
            if ( (regs->rip >= (unsigned long)sysenter_entry) &&
                 (regs->rip <= (unsigned long)sysenter_eflags_saved) )
            {
                if ( regs->rip == (unsigned long)sysenter_eflags_saved )
-                   regs->_eflags &= ~X86_EFLAGS_TF;
+                   regs->eflags &= ~X86_EFLAGS_TF;
                goto out;
            }
            if ( !debugger_trap_fatal(TRAP_debug, regs) )
            {
                WARN();
-               regs->_eflags &= ~X86_EFLAGS_TF;
+               regs->eflags &= ~X86_EFLAGS_TF;
            }
         }
         else
--- a/xen/arch/x86/x86_64/compat/mm.c
+++ b/xen/arch/x86/x86_64/compat/mm.c
@@ -327,7 +327,7 @@ int compat_mmuext_op(XEN_GUEST_HANDLE_PA
         struct cpu_user_regs *regs = guest_cpu_user_regs();
         struct mc_state *mcs = &current->mc_state;
         unsigned int arg1 = !(mcs->flags & MCSF_in_multicall)
-                            ? regs->_ecx
+                            ? regs->ecx
                             : mcs->call.args[1];
         unsigned int left = arg1 & ~MMU_UPDATE_PREEMPTED;
@@ -341,7 +341,7 @@ int compat_mmuext_op(XEN_GUEST_HANDLE_PA
         BUG_ON(!hypercall_xlat_continuation(&left, 4, 0x01, nat_ops, cmp_uops));
         if ( !(mcs->flags & MCSF_in_multicall) )
-            regs->_ecx += count - i;
+            regs->ecx += count - i;
         else
             mcs->compat_call.args[1] += count - i;
     }
--- a/xen/arch/x86/x86_64/compat/traps.c
+++ b/xen/arch/x86/x86_64/compat/traps.c
@@ -8,7 +8,7 @@ void compat_show_guest_stack(struct vcpu
 {
     unsigned int i, *stack, addr, mask = STACK_SIZE;
 
-    stack = (unsigned int *)(unsigned long)regs->_esp;
+    stack = (unsigned int *)(unsigned long)regs->esp;
     printk("Guest stack trace from esp=%08lx:\n ", (unsigned long)stack);
 
     if ( !__compat_access_ok(v->domain, stack, sizeof(*stack)) )
@@ -76,14 +76,14 @@ unsigned int compat_iret(void)
     regs->rsp = (u32)regs->rsp;
 
     /* Restore EAX (clobbered by hypercall). */
-    if ( unlikely(__get_user(regs->_eax, (u32 *)regs->rsp)) )
+    if ( unlikely(__get_user(regs->eax, (u32 *)regs->rsp)) )
     {
         domain_crash(v->domain);
         return 0;
     }
 
     /* Restore CS and EIP. */
-    if ( unlikely(__get_user(regs->_eip, (u32 *)regs->rsp + 1)) ||
+    if ( unlikely(__get_user(regs->eip, (u32 *)regs->rsp + 1)) ||
          unlikely(__get_user(regs->cs, (u32 *)regs->rsp + 2)) )
     {
         domain_crash(v->domain);
@@ -103,7 +103,7 @@ unsigned int compat_iret(void)
     if ( VM_ASSIST(v->domain, architectural_iopl) )
         v->arch.pv_vcpu.iopl = eflags & X86_EFLAGS_IOPL;
 
-    regs->_eflags = (eflags & ~X86_EFLAGS_IOPL) | X86_EFLAGS_IF;
+    regs->eflags = (eflags & ~X86_EFLAGS_IOPL) | X86_EFLAGS_IF;
 
     if ( unlikely(eflags & X86_EFLAGS_VM) )
     {
@@ -121,8 +121,8 @@ unsigned int compat_iret(void)
         int rc = 0;
 
         gdprintk(XENLOG_ERR, "VM86 mode unavailable (ksp:%08X->%08X)\n",
-                 regs->_esp, ksp);
-        if ( ksp < regs->_esp )
+                 regs->esp, ksp);
+        if ( ksp < regs->esp )
         {
             for (i = 1; i < 10; ++i)
             {
@@ -130,7 +130,7 @@ unsigned int compat_iret(void)
                 rc |= __put_user(x, (u32 *)(unsigned long)ksp + i);
             }
         }
-        else if ( ksp > regs->_esp )
+        else if ( ksp > regs->esp )
         {
             for ( i = 9; i > 0; --i )
             {
@@ -143,20 +143,20 @@ unsigned int compat_iret(void)
             domain_crash(v->domain);
             return 0;
         }
-        regs->_esp = ksp;
+        regs->esp = ksp;
         regs->ss = v->arch.pv_vcpu.kernel_ss;
 
         ti = &v->arch.pv_vcpu.trap_ctxt[TRAP_gp_fault];
         if ( TI_GET_IF(ti) )
             eflags &= ~X86_EFLAGS_IF;
-        regs->_eflags &= ~(X86_EFLAGS_VM|X86_EFLAGS_RF|
-                           X86_EFLAGS_NT|X86_EFLAGS_TF);
+        regs->eflags &= ~(X86_EFLAGS_VM|X86_EFLAGS_RF|
+                          X86_EFLAGS_NT|X86_EFLAGS_TF);
         if ( unlikely(__put_user(0, (u32 *)regs->rsp)) )
         {
             domain_crash(v->domain);
             return 0;
         }
-        regs->_eip = ti->address;
+        regs->eip = ti->address;
         regs->cs = ti->cs;
     }
     else if ( unlikely(ring_0(regs)) )
@@ -165,10 +165,10 @@ unsigned int compat_iret(void)
         return 0;
     }
     else if ( ring_1(regs) )
-        regs->_esp += 16;
+        regs->esp += 16;
     /* Return to ring 2/3: restore ESP and SS. */
     else if ( __get_user(regs->ss, (u32 *)regs->rsp + 5) ||
-              __get_user(regs->_esp, (u32 *)regs->rsp + 4) )
+              __get_user(regs->esp, (u32 *)regs->rsp + 4) )
     {
         domain_crash(v->domain);
         return 0;
@@ -183,7 +183,7 @@ unsigned int compat_iret(void)
      * The hypercall exit path will overwrite EAX with this return
      * value.
      */
-    return regs->_eax;
+    return regs->eax;
 }
 
 static long compat_register_guest_callback(
--- a/xen/arch/x86/x86_64/gdbstub.c
+++ b/xen/arch/x86/x86_64/gdbstub.c
@@ -44,7 +44,7 @@ gdb_arch_read_reg_array(struct cpu_user_
     GDB_REG64(regs->r15);
 
     GDB_REG64(regs->rip);
-    GDB_REG32(regs->_eflags);
+    GDB_REG32(regs->eflags);
 
     GDB_REG32(regs->cs);
     GDB_REG32(regs->ss);
--- a/xen/include/asm-x86/msr.h
+++ b/xen/include/asm-x86/msr.h
@@ -73,7 +73,7 @@ static inline int wrmsr_safe(unsigned in
 static inline uint64_t msr_fold(const struct cpu_user_regs *regs)
 {
-    return (regs->rdx << 32) | regs->_eax;
+    return (regs->rdx << 32) | regs->eax;
 }
 
 static inline void msr_split(struct cpu_user_regs *regs, uint64_t val)
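
[Editor's note, not part of the patch] The rename above makes the plain 32-bit
spellings (regs->eax, regs->eflags, ...) the permanent names for the low halves
of the 64-bit registers, replacing the underscore-prefixed transitional names
(regs->_eax, ...). The stand-alone C sketch below only illustrates that aliasing
idea on a little-endian x86-64 target; the struct name, fields and sample values
are invented for the example and are not the actual Xen struct cpu_user_regs
definition.

/* Illustration only -- NOT the Xen header. */
#include <stdint.h>
#include <stdio.h>

struct demo_regs {
    union { uint64_t rax;    uint32_t eax;    };  /* eax aliases the low half of rax */
    union { uint64_t rflags; uint32_t eflags; };  /* eflags aliases the low half of rflags */
};

int main(void)
{
    struct demo_regs regs = { .rax = 0x1122334455667788ULL };

    /* The 32-bit view reads the low half of the 64-bit register. */
    printf("rax = %#llx, eax = %#x\n",
           (unsigned long long)regs.rax, regs.eax);

    /* Writing through the 32-bit name behaves like the regs->eflags updates in the patch. */
    regs.eflags |= 0x200;   /* set an IF-like bit, for illustration */
    printf("eflags = %#x\n", regs.eflags);
    return 0;
}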
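
[Editor's note, not part of the patch] As a side note on the msr.h hunk:
msr_fold() composes the EDX:EAX pair used by RDMSR/WRMSR into a single 64-bit
value via (high << 32) | low. The small sketch below mirrors only that
composition; the helper name and sample numbers are invented for illustration.

#include <assert.h>
#include <stdint.h>

/* Fold a high (EDX) and low (EAX) 32-bit half into one 64-bit MSR value. */
static uint64_t fold_msr(uint32_t edx, uint32_t eax)
{
    return ((uint64_t)edx << 32) | eax;
}

int main(void)
{
    /* 0x0000001d:0xdeadbeef folds to 0x0000001ddeadbeef. */
    assert(fold_msr(0x1d, 0xdeadbeefu) == 0x0000001ddeadbeefULL);
    return 0;
}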