From patchwork Mon Dec 5 10:09:28 2016
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Mon, 5 Dec 2016 10:09:28 +0000
Message-ID: <1480932571-23547-6-git-send-email-andrew.cooper3@citrix.com>
In-Reply-To: <1480932571-23547-1-git-send-email-andrew.cooper3@citrix.com>
References: <1480932571-23547-1-git-send-email-andrew.cooper3@citrix.com>
Cc: Kevin Tian, Jan Beulich, Andrew Cooper, Paul Durrant, Jun Nakajima,
 Boris Ostrovsky, Suravee Suthikulpanit
Subject: [Xen-devel] [PATCH 5/8] x86/hvm: Don't raise #GP behind the emulators back for MSR accesses

The current hvm_msr_{read,write}_intercept() infrastructure calls
hvm_inject_hw_exception() directly to latch a fault, and returns
X86EMUL_EXCEPTION to its caller.

This behaviour is problematic for the hvmemul_{read,write}_msr() paths, as
the fault is raised behind the back of the x86 emulator.

Alter the behaviour so hvm_msr_{read,write}_intercept() simply returns
X86EMUL_EXCEPTION, leaving the callers to actually inject the #GP fault.

Signed-off-by: Andrew Cooper
Reviewed-by: Jan Beulich
Acked-by: Kevin Tian
---
CC: Jan Beulich
CC: Paul Durrant
CC: Jun Nakajima
CC: Kevin Tian
CC: Boris Ostrovsky
CC: Suravee Suthikulpanit
---
 xen/arch/x86/hvm/emulate.c        | 14 ++++++++++++--
 xen/arch/x86/hvm/hvm.c            |  8 +++++---
 xen/arch/x86/hvm/svm/svm.c        |  4 ++--
 xen/arch/x86/hvm/vmx/vmx.c        | 32 +++++++++++++++++++++-----------
 xen/arch/x86/hvm/vmx/vvmx.c       | 19 ++++++++++++++-----
 xen/include/asm-x86/hvm/support.h | 11 ++++++++---
 6 files changed, 62 insertions(+), 26 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index d0a043b..b182d57 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -1531,7 +1531,12 @@ static int hvmemul_read_msr(
     uint64_t *val,
     struct x86_emulate_ctxt *ctxt)
 {
-    return hvm_msr_read_intercept(reg, val);
+    int rc = hvm_msr_read_intercept(reg, val);
+
+    if ( rc == X86EMUL_EXCEPTION )
+        x86_emul_hw_exception(TRAP_gp_fault, 0, ctxt);
+
+    return rc;
 }
 
 static int hvmemul_write_msr(
@@ -1539,7 +1544,12 @@ static int hvmemul_write_msr(
     uint64_t val,
     struct x86_emulate_ctxt *ctxt)
 {
-    return hvm_msr_write_intercept(reg, val, 1);
+    int rc = hvm_msr_write_intercept(reg, val, 1);
+
+    if ( rc == X86EMUL_EXCEPTION )
+        x86_emul_hw_exception(TRAP_gp_fault, 0, ctxt);
+
+    return rc;
 }
 
 static int hvmemul_wbinvd(
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index ac207e4..863adfc 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -509,7 +509,11 @@ void hvm_do_resume(struct vcpu *v)
 
         if ( w->do_write.msr )
         {
-            hvm_msr_write_intercept(w->msr, w->value, 0);
+            int rc = hvm_msr_write_intercept(w->msr, w->value, 0);
+
+            if ( rc == X86EMUL_EXCEPTION )
+                hvm_inject_hw_exception(TRAP_gp_fault, 0);
+
             w->do_write.msr = 0;
         }
 
@@ -3896,7 +3900,6 @@ int hvm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
     return ret;
 
 gp_fault:
-    hvm_inject_hw_exception(TRAP_gp_fault, 0);
     ret = X86EMUL_EXCEPTION;
     *msr_content = -1ull;
     goto out;
@@ -4054,7 +4057,6 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
     return ret;
 
 gp_fault:
-    hvm_inject_hw_exception(TRAP_gp_fault, 0);
     return X86EMUL_EXCEPTION;
 }
 
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 1588b2f..810b0d4 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -1788,7 +1788,6 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
     return X86EMUL_OKAY;
 
 gpf:
-    hvm_inject_hw_exception(TRAP_gp_fault, 0);
    return X86EMUL_EXCEPTION;
 }
 
@@ -1945,7 +1944,6 @@ static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
     return result;
 
 gpf:
-    hvm_inject_hw_exception(TRAP_gp_fault, 0);
     return X86EMUL_EXCEPTION;
 }
 
@@ -1976,6 +1974,8 @@ static void svm_do_msr_access(struct cpu_user_regs *regs)
 
     if ( rc == X86EMUL_OKAY )
         __update_guest_eip(regs, inst_len);
+    else if ( rc == X86EMUL_EXCEPTION )
+        hvm_inject_hw_exception(TRAP_gp_fault, 0);
 }
 
 static void svm_vmexit_do_hlt(struct vmcb_struct *vmcb,
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index afde634..ddfb410 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2691,7 +2691,6 @@ static int vmx_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
     return X86EMUL_OKAY;
 
 gp_fault:
-    hvm_inject_hw_exception(TRAP_gp_fault, 0);
     return X86EMUL_EXCEPTION;
 }
 
@@ -2920,7 +2919,6 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
     return X86EMUL_OKAY;
 
 gp_fault:
-    hvm_inject_hw_exception(TRAP_gp_fault, 0);
     return X86EMUL_EXCEPTION;
 }
 
@@ -3632,23 +3630,35 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
         break;
 
     case EXIT_REASON_MSR_READ:
    {
-        uint64_t msr_content;
-        if ( hvm_msr_read_intercept(regs->ecx, &msr_content) == X86EMUL_OKAY )
+        uint64_t msr_content = 0;
+
+        switch ( hvm_msr_read_intercept(regs->_ecx, &msr_content) )
         {
-            regs->eax = (uint32_t)msr_content;
-            regs->edx = (uint32_t)(msr_content >> 32);
+        case X86EMUL_OKAY:
+            regs->rax = (uint32_t)msr_content;
+            regs->rdx = (uint32_t)(msr_content >> 32);
             update_guest_eip(); /* Safe: RDMSR */
+            break;
+
+        case X86EMUL_EXCEPTION:
+            hvm_inject_hw_exception(TRAP_gp_fault, 0);
+            break;
         }
         break;
    }
 
     case EXIT_REASON_MSR_WRITE:
-    {
-        uint64_t msr_content;
-        msr_content = ((uint64_t)regs->edx << 32) | (uint32_t)regs->eax;
-        if ( hvm_msr_write_intercept(regs->ecx, msr_content, 1) == X86EMUL_OKAY )
+        switch ( hvm_msr_write_intercept(
+                     regs->_ecx, (regs->rdx << 32) | regs->_eax, 1) )
+        {
+        case X86EMUL_OKAY:
             update_guest_eip(); /* Safe: WRMSR */
+            break;
+
+        case X86EMUL_EXCEPTION:
+            hvm_inject_hw_exception(TRAP_gp_fault, 0);
+            break;
+        }
         break;
-    }
 
     case EXIT_REASON_VMXOFF:
         if ( nvmx_handle_vmxoff(regs) == X86EMUL_OKAY )
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index e6e9ebd..87f02ef 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1000,6 +1000,7 @@ static void load_shadow_guest_state(struct vcpu *v)
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
     u32 control;
     u64 cr_gh_mask, cr_read_shadow;
+    int rc;
 
     static const u16 vmentry_fields[] = {
         VM_ENTRY_INTR_INFO,
@@ -1021,8 +1022,12 @@ static void load_shadow_guest_state(struct vcpu *v)
     if ( control & VM_ENTRY_LOAD_GUEST_PAT )
         hvm_set_guest_pat(v, get_vvmcs(v, GUEST_PAT));
     if ( control & VM_ENTRY_LOAD_PERF_GLOBAL_CTRL )
-        hvm_msr_write_intercept(MSR_CORE_PERF_GLOBAL_CTRL,
-                                get_vvmcs(v, GUEST_PERF_GLOBAL_CTRL), 0);
+    {
+        rc = hvm_msr_write_intercept(MSR_CORE_PERF_GLOBAL_CTRL,
+                                     get_vvmcs(v, GUEST_PERF_GLOBAL_CTRL), 0);
+        if ( rc == X86EMUL_EXCEPTION )
+            hvm_inject_hw_exception(TRAP_gp_fault, 0);
+    }
 
     hvm_funcs.set_tsc_offset(v, v->arch.hvm_vcpu.cache_tsc_offset, 0);
 
@@ -1193,7 +1198,7 @@ static void sync_vvmcs_ro(struct vcpu *v)
 
 static void load_vvmcs_host_state(struct vcpu *v)
 {
-    int i;
+    int i, rc;
     u64 r;
     u32 control;
 
@@ -1211,8 +1216,12 @@ static void load_vvmcs_host_state(struct vcpu *v)
     if ( control & VM_EXIT_LOAD_HOST_PAT )
         hvm_set_guest_pat(v, get_vvmcs(v, HOST_PAT));
     if ( control & VM_EXIT_LOAD_PERF_GLOBAL_CTRL )
-        hvm_msr_write_intercept(MSR_CORE_PERF_GLOBAL_CTRL,
-                                get_vvmcs(v, HOST_PERF_GLOBAL_CTRL), 1);
+    {
+        rc = hvm_msr_write_intercept(MSR_CORE_PERF_GLOBAL_CTRL,
+                                     get_vvmcs(v, HOST_PERF_GLOBAL_CTRL), 1);
+        if ( rc == X86EMUL_EXCEPTION )
+            hvm_inject_hw_exception(TRAP_gp_fault, 0);
+    }
 
     hvm_funcs.set_tsc_offset(v, v->arch.hvm_vcpu.cache_tsc_offset, 0);
 
diff --git a/xen/include/asm-x86/hvm/support.h b/xen/include/asm-x86/hvm/support.h
index 3d767d7..2bff1f4 100644
--- a/xen/include/asm-x86/hvm/support.h
+++ b/xen/include/asm-x86/hvm/support.h
@@ -122,13 +122,18 @@ int hvm_set_efer(uint64_t value);
 int hvm_set_cr0(unsigned long value, bool_t may_defer);
 int hvm_set_cr3(unsigned long value, bool_t may_defer);
 int hvm_set_cr4(unsigned long value, bool_t may_defer);
-int hvm_msr_read_intercept(unsigned int msr, uint64_t *msr_content);
-int hvm_msr_write_intercept(
-    unsigned int msr, uint64_t msr_content, bool_t may_defer);
 int hvm_mov_to_cr(unsigned int cr, unsigned int gpr);
 int hvm_mov_from_cr(unsigned int cr, unsigned int gpr);
 void hvm_ud_intercept(struct cpu_user_regs *);
 
+/*
+ * May return X86EMUL_EXCEPTION, at which point the caller is responsible for
+ * injecting a #GP fault.  Used to support speculative reads.
+ */
+int hvm_msr_read_intercept(unsigned int msr, uint64_t *msr_content);
+int hvm_msr_write_intercept(
+    unsigned int msr, uint64_t msr_content, bool_t may_defer);
+
 #endif /* __ASM_X86_HVM_SUPPORT_H__ */
 
 /*