From patchwork Tue Dec 20 10:39:56 2016
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 9481411
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Date: Tue, 20 Dec 2016 03:39:56 -0700
Message-Id: <5859188C020000780012ADC7@prv-mh.provo.novell.com>
References: <58590E27020000780012AD5E@prv-mh.provo.novell.com>
In-Reply-To: <58590E27020000780012AD5E@prv-mh.provo.novell.com>
Cc: George Dunlap, Andrew Cooper
Subject: [Xen-devel] [PATCH 05/10] x86/HVM: use unambiguous register names

This is in preparation for eliminating the mis-naming of 64-bit fields
with 32-bit register names (eflags instead of rflags etc). Use the
guaranteed 32-bit underscore-prefixed names for now where appropriate.

Signed-off-by: Jan Beulich

--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -882,16 +882,16 @@ static int hvm_save_cpu_ctxt(struct doma
         ctxt.flags = XEN_X86_FPU_INITIALISED;
     }
 
-    ctxt.rax = v->arch.user_regs.eax;
-    ctxt.rbx = v->arch.user_regs.ebx;
-    ctxt.rcx = v->arch.user_regs.ecx;
-    ctxt.rdx = v->arch.user_regs.edx;
-    ctxt.rbp = v->arch.user_regs.ebp;
-    ctxt.rsi = v->arch.user_regs.esi;
-    ctxt.rdi = v->arch.user_regs.edi;
-    ctxt.rsp = v->arch.user_regs.esp;
-    ctxt.rip = v->arch.user_regs.eip;
-    ctxt.rflags = v->arch.user_regs.eflags;
+    ctxt.rax = v->arch.user_regs.rax;
+    ctxt.rbx = v->arch.user_regs.rbx;
+    ctxt.rcx = v->arch.user_regs.rcx;
+    ctxt.rdx = v->arch.user_regs.rdx;
+    ctxt.rbp = v->arch.user_regs.rbp;
+    ctxt.rsi = v->arch.user_regs.rsi;
+    ctxt.rdi = v->arch.user_regs.rdi;
+    ctxt.rsp = v->arch.user_regs.rsp;
+    ctxt.rip = v->arch.user_regs.rip;
+    ctxt.rflags = v->arch.user_regs.rflags;
     ctxt.r8 = v->arch.user_regs.r8;
     ctxt.r9 = v->arch.user_regs.r9;
     ctxt.r10 = v->arch.user_regs.r10;
@@ -1197,16 +1197,16 @@ static int hvm_load_cpu_ctxt(struct doma
     if ( xsave_area )
         xsave_area->xsave_hdr.xcomp_bv = 0;
 
-    v->arch.user_regs.eax = ctxt.rax;
-    v->arch.user_regs.ebx = ctxt.rbx;
-    v->arch.user_regs.ecx = ctxt.rcx;
-    v->arch.user_regs.edx = ctxt.rdx;
-    v->arch.user_regs.ebp = ctxt.rbp;
-    v->arch.user_regs.esi = ctxt.rsi;
-    v->arch.user_regs.edi = ctxt.rdi;
-    v->arch.user_regs.esp = ctxt.rsp;
-    v->arch.user_regs.eip = ctxt.rip;
-    v->arch.user_regs.eflags = ctxt.rflags | X86_EFLAGS_MBS;
+    v->arch.user_regs.rax = ctxt.rax;
+    v->arch.user_regs.rbx = ctxt.rbx;
+    v->arch.user_regs.rcx = ctxt.rcx;
+    v->arch.user_regs.rdx = ctxt.rdx;
+    v->arch.user_regs.rbp = ctxt.rbp;
+    v->arch.user_regs.rsi = ctxt.rsi;
+    v->arch.user_regs.rdi = ctxt.rdi;
+    v->arch.user_regs.rsp = ctxt.rsp;
+    v->arch.user_regs.rip = ctxt.rip;
+    v->arch.user_regs.rflags = ctxt.rflags | X86_EFLAGS_MBS;
     v->arch.user_regs.r8 = ctxt.r8;
     v->arch.user_regs.r9 = ctxt.r9;
     v->arch.user_regs.r10 = ctxt.r10;
@@ -1658,7 +1658,7 @@ void hvm_vcpu_down(struct vcpu *v)
     }
 }
 
-void hvm_hlt(unsigned long rflags)
+void hvm_hlt(unsigned int eflags)
 {
     struct vcpu *curr = current;
 
@@ -1670,7 +1670,7 @@ void hvm_hlt(unsigned long rflags)
      * want to shut down. In a real processor, NMIs are the only way to break
      * out of this.
      */
-    if ( unlikely(!(rflags & X86_EFLAGS_IF)) )
+    if ( unlikely(!(eflags & X86_EFLAGS_IF)) )
         return hvm_vcpu_down(curr);
 
     do_sched_op(SCHEDOP_block, guest_handle_from_ptr(NULL, void));
@@ -2901,7 +2901,7 @@ void hvm_task_switch(
     struct segment_register gdt, tr, prev_tr, segr;
     struct desc_struct *optss_desc = NULL, *nptss_desc = NULL, tss_desc;
     bool_t otd_writable, ntd_writable;
-    unsigned long eflags;
+    unsigned int eflags;
     pagefault_info_t pfinfo;
     int exn_raised, rc;
     struct {
@@ -2975,20 +2975,20 @@ void hvm_task_switch(
     if ( rc != HVMCOPY_okay )
         goto out;
 
-    eflags = regs->eflags;
+    eflags = regs->_eflags;
     if ( taskswitch_reason == TSW_iret )
         eflags &= ~X86_EFLAGS_NT;
 
-    tss.eip = regs->eip;
+    tss.eip = regs->_eip;
     tss.eflags = eflags;
-    tss.eax = regs->eax;
-    tss.ecx = regs->ecx;
-    tss.edx = regs->edx;
-    tss.ebx = regs->ebx;
-    tss.esp = regs->esp;
-    tss.ebp = regs->ebp;
-    tss.esi = regs->esi;
-    tss.edi = regs->edi;
+    tss.eax = regs->_eax;
+    tss.ecx = regs->_ecx;
+    tss.edx = regs->_edx;
+    tss.ebx = regs->_ebx;
+    tss.esp = regs->_esp;
+    tss.ebp = regs->_ebp;
+    tss.esi = regs->_esi;
+    tss.edi = regs->_edi;
 
     hvm_get_segment_register(v, x86_seg_es, &segr);
     tss.es = segr.sel;
@@ -3032,16 +3032,16 @@ void hvm_task_switch(
     if ( hvm_set_cr3(tss.cr3, 1) )
         goto out;
 
-    regs->eip = tss.eip;
-    regs->eflags = tss.eflags | 2;
-    regs->eax = tss.eax;
-    regs->ecx = tss.ecx;
-    regs->edx = tss.edx;
-    regs->ebx = tss.ebx;
-    regs->esp = tss.esp;
-    regs->ebp = tss.ebp;
-    regs->esi = tss.esi;
-    regs->edi = tss.edi;
+    regs->rip = tss.eip;
+    regs->rflags = tss.eflags | 2;
+    regs->rax = tss.eax;
+    regs->rcx = tss.ecx;
+    regs->rdx = tss.edx;
+    regs->rbx = tss.ebx;
+    regs->rsp = tss.esp;
+    regs->rbp = tss.ebp;
+    regs->rsi = tss.esi;
+    regs->rdi = tss.edi;
 
     exn_raised = 0;
     if ( hvm_load_segment_selector(x86_seg_es, tss.es, tss.eflags) ||
@@ -3054,7 +3054,7 @@ void hvm_task_switch(
 
     if ( taskswitch_reason == TSW_call_or_int )
     {
-        regs->eflags |= X86_EFLAGS_NT;
+        regs->_eflags |= X86_EFLAGS_NT;
         tss.back_link = prev_tr.sel;
 
         rc = hvm_copy_to_guest_linear(tr.base + offsetof(typeof(tss), back_link),
@@ -4012,7 +4012,7 @@ void hvm_ud_intercept(struct cpu_user_re
         unsigned long addr;
         char sig[5]; /* ud2; .ascii "xen" */
 
-        if ( hvm_virtual_to_linear_addr(x86_seg_cs, cs, regs->eip,
+        if ( hvm_virtual_to_linear_addr(x86_seg_cs, cs, regs->rip,
                                         sizeof(sig), hvm_access_insn_fetch,
                                         (hvm_long_mode_enabled(cur) &&
                                          cs->attr.fields.l) ? 64 :
@@ -4021,12 +4021,12 @@ void hvm_ud_intercept(struct cpu_user_re
                                         walk, NULL) == HVMCOPY_okay) &&
             (memcmp(sig, "\xf\xbxen", sizeof(sig)) == 0) )
         {
-            regs->eip += sizeof(sig);
-            regs->eflags &= ~X86_EFLAGS_RF;
+            regs->rip += sizeof(sig);
+            regs->_eflags &= ~X86_EFLAGS_RF;
 
             /* Zero the upper 32 bits of %rip if not in 64bit mode. */
             if ( !(hvm_long_mode_enabled(cur) && cs->attr.fields.l) )
-                regs->eip = regs->_eip;
+                regs->rip = regs->_eip;
 
             add_taint(TAINT_HVM_FEP);
         }
@@ -4062,7 +4062,7 @@ enum hvm_intblk hvm_interrupt_blocked(st
     }
 
     if ( (intack.source != hvm_intsrc_nmi) &&
-         !(guest_cpu_user_regs()->eflags & X86_EFLAGS_IF) )
+         !(guest_cpu_user_regs()->_eflags & X86_EFLAGS_IF) )
         return hvm_intblk_rflags_ie;
 
     intr_shadow = hvm_funcs.get_interrupt_shadow(v);
@@ -4255,7 +4255,7 @@ int hvm_do_hypercall(struct cpu_user_reg
         if ( unlikely(hvm_get_cpl(curr)) )
         {
     default:
-            regs->eax = -EPERM;
+            regs->rax = -EPERM;
             return HVM_HCALL_completed;
         }
     case 0:
@@ -4271,7 +4271,7 @@ int hvm_do_hypercall(struct cpu_user_reg
     if ( (eax >= ARRAY_SIZE(hvm_hypercall_table)) ||
          !hvm_hypercall_table[eax].native )
     {
-        regs->eax = -ENOSYS;
+        regs->rax = -ENOSYS;
         return HVM_HCALL_completed;
     }
@@ -4317,9 +4317,9 @@ int hvm_do_hypercall(struct cpu_user_reg
             case 6: regs->r9 = 0xdeadbeefdeadf00dUL;
             case 5: regs->r8 = 0xdeadbeefdeadf00dUL;
             case 4: regs->r10 = 0xdeadbeefdeadf00dUL;
-            case 3: regs->edx = 0xdeadbeefdeadf00dUL;
-            case 2: regs->esi = 0xdeadbeefdeadf00dUL;
-            case 1: regs->edi = 0xdeadbeefdeadf00dUL;
+            case 3: regs->rdx = 0xdeadbeefdeadf00dUL;
+            case 2: regs->rsi = 0xdeadbeefdeadf00dUL;
+            case 1: regs->rdi = 0xdeadbeefdeadf00dUL;
             }
         }
 #endif
@@ -4349,8 +4349,8 @@ int hvm_do_hypercall(struct cpu_user_reg
         }
 #endif
 
-        regs->_eax = hvm_hypercall_table[eax].compat(ebx, ecx, edx, esi, edi,
-                                                     ebp);
+        regs->rax = hvm_hypercall_table[eax].compat(ebx, ecx, edx, esi, edi,
+                                                    ebp);
 
 #ifndef NDEBUG
         if ( !curr->arch.hvm_vcpu.hcall_preempted )
@@ -4358,19 +4358,18 @@ int hvm_do_hypercall(struct cpu_user_reg
             /* Deliberately corrupt parameter regs used by this hypercall. */
             switch ( hypercall_args_table[eax].compat )
            {
-            case 6: regs->ebp = 0xdeadf00d;
-            case 5: regs->edi = 0xdeadf00d;
-            case 4: regs->esi = 0xdeadf00d;
-            case 3: regs->edx = 0xdeadf00d;
-            case 2: regs->ecx = 0xdeadf00d;
-            case 1: regs->ebx = 0xdeadf00d;
+            case 6: regs->rbp = 0xdeadf00d;
+            case 5: regs->rdi = 0xdeadf00d;
+            case 4: regs->rsi = 0xdeadf00d;
+            case 3: regs->rdx = 0xdeadf00d;
+            case 2: regs->rcx = 0xdeadf00d;
+            case 1: regs->rbx = 0xdeadf00d;
             }
         }
 #endif
     }
 
-    HVM_DBG_LOG(DBG_LEVEL_HCALL, "hcall%lu -> %lx",
-                eax, (unsigned long)regs->eax);
+    HVM_DBG_LOG(DBG_LEVEL_HCALL, "hcall%lu -> %lx", eax, regs->rax);
 
     if ( curr->arch.hvm_vcpu.hcall_preempted )
         return HVM_HCALL_preempted;
@@ -4490,9 +4489,9 @@ void hvm_vcpu_reset_state(struct vcpu *v
     v->arch.vgc_flags = VGCF_online;
     memset(&v->arch.user_regs, 0, sizeof(v->arch.user_regs));
-    v->arch.user_regs.eflags = X86_EFLAGS_MBS;
-    v->arch.user_regs.edx = 0x00000f00;
-    v->arch.user_regs.eip = ip;
+    v->arch.user_regs.rflags = X86_EFLAGS_MBS;
+    v->arch.user_regs.rdx = 0x00000f00;
+    v->arch.user_regs.rip = ip;
     memset(&v->arch.debugreg, 0, sizeof(v->arch.debugreg));
 
     v->arch.hvm_vcpu.guest_cr[0] = X86_CR0_ET;
--- a/xen/include/asm-x86/hvm/support.h
+++ b/xen/include/asm-x86/hvm/support.h
@@ -108,7 +108,7 @@ enum hvm_copy_result hvm_fetch_from_gues
 #define HVM_HCALL_invalidate 2 /* invalidate ioemu-dm memory cache */
 int hvm_do_hypercall(struct cpu_user_regs *pregs);
 
-void hvm_hlt(unsigned long rflags);
+void hvm_hlt(unsigned int eflags);
 void hvm_triple_fault(void);
 
 void hvm_rdtsc_intercept(struct cpu_user_regs *regs);