From patchwork Tue Dec 20 10:36:02 2016
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 9481401
Message-Id: <585917A2020000780012ADAC@prv-mh.provo.novell.com>
Date: Tue, 20 Dec 2016 03:36:02 -0700
From: "Jan Beulich"
To: "xen-devel"
References: <58590E27020000780012AD5E@prv-mh.provo.novell.com>
In-Reply-To: <58590E27020000780012AD5E@prv-mh.provo.novell.com>
Cc: Kevin Tian, Suravee Suthikulpanit, George Dunlap, Andrew Cooper, Jun Nakajima, Boris Ostrovsky
Subject: [Xen-devel] [PATCH 01/10] x86/MSR: introduce MSR access split/fold helpers
List-Id: Xen developer discussion

x86/MSR: introduce MSR access split/fold helpers

This is in preparation for eliminating the mis-naming of 64-bit fields
with 32-bit register names (eflags instead of rflags etc). Use the
guaranteed 32-bit underscore-prefixed names for now where appropriate.

Signed-off-by: Jan Beulich
Reviewed-by: Kevin Tian

--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3695,12 +3695,9 @@ static uint64_t _hvm_rdtsc_intercept(voi
 void hvm_rdtsc_intercept(struct cpu_user_regs *regs)
 {
-    uint64_t tsc = _hvm_rdtsc_intercept();
+    msr_split(regs, _hvm_rdtsc_intercept());
 
-    regs->eax = (uint32_t)tsc;
-    regs->edx = (uint32_t)(tsc >> 32);
-
-    HVMTRACE_2D(RDTSC, regs->eax, regs->edx);
+    HVMTRACE_2D(RDTSC, regs->_eax, regs->_edx);
 }
 
 int hvm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -1936,14 +1936,10 @@ static void svm_do_msr_access(struct cpu
         rc = hvm_msr_read_intercept(regs->_ecx, &msr_content);
         if ( rc == X86EMUL_OKAY )
-        {
-            regs->rax = (uint32_t)msr_content;
-            regs->rdx = (uint32_t)(msr_content >> 32);
-        }
+            msr_split(regs, msr_content);
     }
     else
-        rc = hvm_msr_write_intercept(regs->_ecx,
-                                     (regs->rdx << 32) | regs->_eax, 1);
+        rc = hvm_msr_write_intercept(regs->_ecx, msr_fold(regs), 1);
 
     if ( rc == X86EMUL_OKAY )
         __update_guest_eip(regs, inst_len);
@@ -2618,8 +2614,7 @@ void svm_vmexit_handler(struct cpu_user_
         if ( vmcb_get_cpl(vmcb) )
             hvm_inject_hw_exception(TRAP_gp_fault, 0);
         else if ( (inst_len = __get_instruction_length(v, INSTR_XSETBV)) &&
-                  hvm_handle_xsetbv(regs->ecx,
-                                    (regs->rdx << 32) | regs->_eax) == 0 )
+                  hvm_handle_xsetbv(regs->_ecx, msr_fold(regs)) == 0 )
             __update_guest_eip(regs, inst_len);
         break;
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -3626,22 +3626,18 @@ void vmx_vmexit_handler(struct cpu_user_
     case EXIT_REASON_MSR_READ:
     {
         uint64_t msr_content;
-        if ( hvm_msr_read_intercept(regs->ecx, &msr_content) == X86EMUL_OKAY )
+        if ( hvm_msr_read_intercept(regs->_ecx, &msr_content) == X86EMUL_OKAY )
         {
-            regs->eax = (uint32_t)msr_content;
-            regs->edx = (uint32_t)(msr_content >> 32);
+            msr_split(regs, msr_content);
             update_guest_eip(); /* Safe: RDMSR */
         }
         break;
     }
+
     case EXIT_REASON_MSR_WRITE:
-    {
-        uint64_t msr_content;
-        msr_content = ((uint64_t)regs->edx << 32) | (uint32_t)regs->eax;
-        if ( hvm_msr_write_intercept(regs->ecx, msr_content, 1) == X86EMUL_OKAY )
+        if ( hvm_msr_write_intercept(regs->_ecx, msr_fold(regs), 1) == X86EMUL_OKAY )
             update_guest_eip(); /* Safe: WRMSR */
         break;
-    }
 
     case EXIT_REASON_VMXOFF:
         if ( nvmx_handle_vmxoff(regs) == X86EMUL_OKAY )
@@ -3802,8 +3798,7 @@ void vmx_vmexit_handler(struct cpu_user_
         break;
 
     case EXIT_REASON_XSETBV:
-        if ( hvm_handle_xsetbv(regs->ecx,
-                               (regs->rdx << 32) | regs->_eax) == 0 )
+        if ( hvm_handle_xsetbv(regs->_ecx, msr_fold(regs)) == 0 )
             update_guest_eip(); /* Safe: XSETBV */
         break;
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -2322,15 +2322,11 @@ int nvmx_n2_vmexit_handler(struct cpu_us
             nvcpu->nv_vmexit_pending = 1;
         else
         {
-            uint64_t tsc;
-
             /*
              * special handler is needed if L1 doesn't intercept rdtsc,
              * avoiding changing guest_tsc and messing up timekeeping in L1
              */
-            tsc = hvm_get_guest_tsc(v) + get_vvmcs(v, TSC_OFFSET);
-            regs->eax = (uint32_t)tsc;
-            regs->edx = (uint32_t)(tsc >> 32);
+            msr_split(regs, hvm_get_guest_tsc(v) + get_vvmcs(v, TSC_OFFSET));
 
             update_guest_eip();
             return 1;
--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -1918,13 +1918,10 @@ void pv_soft_rdtsc(struct vcpu *v, struc
     spin_unlock(&d->arch.vtsc_lock);
 
-    now = gtime_to_gtsc(d, now);
-
-    regs->eax = (uint32_t)now;
-    regs->edx = (uint32_t)(now >> 32);
+    msr_split(regs, gtime_to_gtsc(d, now));
 
     if ( rdtscp )
-        regs->ecx =
+        regs->rcx =
             (d->arch.tsc_mode == TSC_MODE_PVRDTSCP) ? d->arch.incarnation : 0;
 }
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -3401,12 +3401,7 @@ if(rc) printk("%pv: %02x @ %08lx -> %d\n
         else if ( currd->arch.vtsc )
             pv_soft_rdtsc(curr, regs, 0);
         else
-        {
-            uint64_t val = rdtsc();
-
-            regs->eax = (uint32_t)val;
-            regs->edx = (uint32_t)(val >> 32);
-        }
+            msr_split(regs, rdtsc());
     }
 
     if ( ctxt.ctxt.retire.singlestep )
--- a/xen/include/asm-x86/msr.h
+++ b/xen/include/asm-x86/msr.h
@@ -71,6 +71,17 @@ static inline int wrmsr_safe(unsigned in
     return _rc;
 }
 
+static inline uint64_t msr_fold(const struct cpu_user_regs *regs)
+{
+    return (regs->rdx << 32) | regs->_eax;
+}
+
+static inline void msr_split(struct cpu_user_regs *regs, uint64_t val)
+{
+    regs->rdx = val >> 32;
+    regs->rax = (uint32_t)val;
+}
+
 static inline uint64_t rdtsc(void)
 {
     uint32_t low, high;