From patchwork Thu Jun 16 14:09:47 2016
X-Patchwork-Submitter: Corneliu ZUZU
X-Patchwork-Id: 9181015
From: Corneliu ZUZU
To: xen-devel@lists.xen.org
Cc: Kevin Tian, Tamas K Lengyel, Jan Beulich, Razvan Cojocaru,
    Andrew Cooper, Jun Nakajima
Subject: [Xen-devel] [PATCH 4/7] vm-event/x86: use vm_event_vcpu_enter properly
Date: Thu, 16 Jun 2016 17:09:47 +0300
Message-Id: <1466086187-7607-1-git-send-email-czuzu@bitdefender.com>
In-Reply-To: <1466085888-7428-1-git-send-email-czuzu@bitdefender.com>
References: <1466085888-7428-1-git-send-email-czuzu@bitdefender.com>

After introducing vm_event_vcpu_enter, it makes sense to move the following
code there:
- handling of monitor_write_data from hvm_do_resume
- enabling/disabling CPU_BASED_CR3_LOAD_EXITING from vmx_update_guest_cr(v, 0)

Signed-off-by: Corneliu ZUZU
---
(An illustrative sketch of the resulting vcpu-enter flow follows the diff.)

 xen/arch/x86/hvm/hvm.c         |  62 +++++--------------
 xen/arch/x86/hvm/vmx/vmx.c     |  12 ++---
 xen/arch/x86/monitor.c         |   9 ----
 xen/arch/x86/vm_event.c        | 102 +++++++++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/vm_event.h |   5 +-
 5 files changed, 119 insertions(+), 71 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 770bb50..2f48846 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -462,52 +462,6 @@ void hvm_do_resume(struct vcpu *v)
     if ( !handle_hvm_io_completion(v) )
         return;
 
-    if ( unlikely(v->arch.vm_event) )
-    {
-        struct monitor_write_data *w = &v->arch.vm_event->write_data;
-
-        if ( v->arch.vm_event->emulate_flags )
-        {
-            enum emul_kind kind = EMUL_KIND_NORMAL;
-
-            if ( v->arch.vm_event->emulate_flags &
-                 VM_EVENT_FLAG_SET_EMUL_READ_DATA )
-                kind = EMUL_KIND_SET_CONTEXT;
-            else if ( v->arch.vm_event->emulate_flags &
-                      VM_EVENT_FLAG_EMULATE_NOWRITE )
-                kind = EMUL_KIND_NOWRITE;
-
-            hvm_mem_access_emulate_one(kind, TRAP_invalid_op,
-                                       HVM_DELIVER_NO_ERROR_CODE);
-
-            v->arch.vm_event->emulate_flags = 0;
-        }
-
-        if ( w->do_write.msr )
-        {
-            hvm_msr_write_intercept(w->msr, w->value, 0);
-            w->do_write.msr = 0;
-        }
-
-        if ( w->do_write.cr0 )
-        {
-            hvm_set_cr0(w->cr0, 0);
-            w->do_write.cr0 = 0;
-        }
-
-        if ( w->do_write.cr4 )
-        {
-            hvm_set_cr4(w->cr4, 0);
-            w->do_write.cr4 = 0;
-        }
-
-        if ( w->do_write.cr3 )
-        {
-            hvm_set_cr3(w->cr3, 0);
-            w->do_write.cr3 = 0;
-        }
-    }
-
     vm_event_vcpu_enter(v);
 
     /* Inject pending hw/sw trap */
@@ -2199,7 +2153,9 @@ int hvm_set_cr0(unsigned long value, bool_t may_defer)
 
         if ( hvm_event_crX(CR0, value, old_value) )
         {
-            /* The actual write will occur in hvm_do_resume(), if permitted. */
+            /* The actual write will occur in vcpu_enter_write_data(), if
+             * permitted.
+             */
             v->arch.vm_event->write_data.do_write.cr0 = 1;
             v->arch.vm_event->write_data.cr0 = value;
 
@@ -2301,7 +2257,9 @@ int hvm_set_cr3(unsigned long value, bool_t may_defer)
 
         if ( hvm_event_crX(CR3, value, old) )
         {
-            /* The actual write will occur in hvm_do_resume(), if permitted. */
+            /* The actual write will occur in vcpu_enter_write_data(), if
+             * permitted.
+             */
             v->arch.vm_event->write_data.do_write.cr3 = 1;
             v->arch.vm_event->write_data.cr3 = value;
 
@@ -2381,7 +2339,9 @@ int hvm_set_cr4(unsigned long value, bool_t may_defer)
 
         if ( hvm_event_crX(CR4, value, old_cr) )
        {
-            /* The actual write will occur in hvm_do_resume(), if permitted. */
+            /* The actual write will occur in vcpu_enter_write_data(), if
+             * permitted.
+             */
             v->arch.vm_event->write_data.do_write.cr4 = 1;
             v->arch.vm_event->write_data.cr4 = value;
 
@@ -3761,7 +3721,9 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
     {
         ASSERT(v->arch.vm_event);
 
-        /* The actual write will occur in hvm_do_resume() (if permitted). */
+        /* The actual write will occur in vcpu_enter_write_data(), if
+         * permitted.
+         */
         v->arch.vm_event->write_data.do_write.msr = 1;
         v->arch.vm_event->write_data.msr = msr;
         v->arch.vm_event->write_data.value = msr_content;
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index b43b94a..8b76ef9 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -35,7 +35,6 @@
 #include
 #include
 #include
-#include
 #include
 #include
 #include
@@ -58,7 +57,6 @@
 #include
 #include
 #include
-#include
 #include
 
 static bool_t __initdata opt_force_ept;
@@ -1432,18 +1430,16 @@ static void vmx_update_guest_cr(struct vcpu *v, unsigned int cr)
         if ( paging_mode_hap(v->domain) )
         {
             /* Manage GUEST_CR3 when CR0.PE=0. */
+            uint32_t old_ctls = v->arch.hvm_vmx.exec_control;
             uint32_t cr3_ctls = (CPU_BASED_CR3_LOAD_EXITING |
                                  CPU_BASED_CR3_STORE_EXITING);
+
             v->arch.hvm_vmx.exec_control &= ~cr3_ctls;
             if ( !hvm_paging_enabled(v) && !vmx_unrestricted_guest(v) )
                 v->arch.hvm_vmx.exec_control |= cr3_ctls;
 
-            /* Trap CR3 updates if CR3 memory events are enabled. */
-            if ( v->domain->arch.monitor.write_ctrlreg_enabled &
-                 monitor_ctrlreg_bitmask(VM_EVENT_X86_CR3) )
-                v->arch.hvm_vmx.exec_control |= CPU_BASED_CR3_LOAD_EXITING;
-
-            vmx_update_cpu_exec_control(v);
+            if ( old_ctls != v->arch.hvm_vmx.exec_control )
+                vmx_update_cpu_exec_control(v);
         }
 
         if ( !nestedhvm_vcpu_in_guestmode(v) )
diff --git a/xen/arch/x86/monitor.c b/xen/arch/x86/monitor.c
index 1fec412..1e5445f 100644
--- a/xen/arch/x86/monitor.c
+++ b/xen/arch/x86/monitor.c
@@ -20,7 +20,6 @@
  */
 
 #include
-#include
 
 int arch_monitor_domctl_event(struct domain *d,
                               struct xen_domctl_monitor_op *mop)
@@ -62,14 +61,6 @@ int arch_monitor_domctl_event(struct domain *d,
         else
             ad->monitor.write_ctrlreg_enabled &= ~ctrlreg_bitmask;
 
-        if ( VM_EVENT_X86_CR3 == mop->u.mov_to_cr.index )
-        {
-            struct vcpu *v;
-            /* Latches new CR3 mask through CR0 code. */
-            for_each_vcpu ( d, v )
-                hvm_update_guest_cr(v, 0);
-        }
-
         domain_unpause(d);
         break;
 
diff --git a/xen/arch/x86/vm_event.c b/xen/arch/x86/vm_event.c
index f7eb24a..94b50fc 100644
--- a/xen/arch/x86/vm_event.c
+++ b/xen/arch/x86/vm_event.c
@@ -19,6 +19,9 @@
  */
 
 #include
+#include
+#include
+#include
 
 /* Implicitly serialized by the domctl lock. */
 int vm_event_init_domain(struct domain *d)
@@ -179,6 +182,105 @@ void vm_event_fill_regs(vm_event_request_t *req)
     req->data.regs.x86.cs_arbytes = seg.attr.bytes;
 }
 
+static inline void vcpu_enter_write_data(struct vcpu *v)
+{
+    struct monitor_write_data *w;
+
+    if ( likely(!v->arch.vm_event) )
+        return;
+
+    w = &v->arch.vm_event->write_data;
+
+    if ( unlikely(v->arch.vm_event->emulate_flags) )
+    {
+        enum emul_kind kind = EMUL_KIND_NORMAL;
+
+        if ( v->arch.vm_event->emulate_flags &
+             VM_EVENT_FLAG_SET_EMUL_READ_DATA )
+            kind = EMUL_KIND_SET_CONTEXT;
+        else if ( v->arch.vm_event->emulate_flags &
+                  VM_EVENT_FLAG_EMULATE_NOWRITE )
+            kind = EMUL_KIND_NOWRITE;
+
+        hvm_mem_access_emulate_one(kind, TRAP_invalid_op,
+                                   HVM_DELIVER_NO_ERROR_CODE);
+
+        v->arch.vm_event->emulate_flags = 0;
+    }
+
+    if ( w->do_write.msr )
+    {
+        hvm_msr_write_intercept(w->msr, w->value, 0);
+        w->do_write.msr = 0;
+    }
+
+    if ( w->do_write.cr0 )
+    {
+        hvm_set_cr0(w->cr0, 0);
+        w->do_write.cr0 = 0;
+    }
+
+    if ( w->do_write.cr4 )
+    {
+        hvm_set_cr4(w->cr4, 0);
+        w->do_write.cr4 = 0;
+    }
+
+    if ( w->do_write.cr3 )
+    {
+        hvm_set_cr3(w->cr3, 0);
+        w->do_write.cr3 = 0;
+    }
+}
+
+static inline void vcpu_enter_adjust_traps(struct vcpu *v)
+{
+    struct domain *d = v->domain;
+    struct arch_vmx_struct *avmx = &v->arch.hvm_vmx;
+    bool_t cr3_ldexit, cr3_vmevent;
+    unsigned int cr3_bitmask;
+
+    /* Adjust CR3 load-exiting (for monitor vm-events). */
+
+    cr3_bitmask = monitor_ctrlreg_bitmask(VM_EVENT_X86_CR3);
+    cr3_vmevent = !!(d->arch.monitor.write_ctrlreg_enabled & cr3_bitmask);
+    cr3_ldexit = !!(avmx->exec_control & CPU_BASED_CR3_LOAD_EXITING);
+
+    if ( likely(cr3_vmevent == cr3_ldexit) )
+        return;
+
+    if ( !paging_mode_hap(d) )
+    {
+        /* non-hap domains trap CR3 writes unconditionally */
+        ASSERT(cr3_ldexit);
+        return;
+    }
+
+    /*
+     * If CR0.PE=0, CR3 load exiting must remain enabled.
+     * See vmx_update_guest_cr code motion for cr = 0.
+     */
+    if ( cr3_ldexit && !hvm_paging_enabled(v) && !vmx_unrestricted_guest(v) )
+        return;
+
+    if ( cr3_vmevent )
+        avmx->exec_control |= CPU_BASED_CR3_LOAD_EXITING;
+    else
+        avmx->exec_control &= ~CPU_BASED_CR3_LOAD_EXITING;
+
+    vmx_vmcs_enter(v);
+    vmx_update_cpu_exec_control(v);
+    vmx_vmcs_exit(v);
+}
+
+void arch_vm_event_vcpu_enter(struct vcpu *v)
+{
+    /* vmx only */
+    ASSERT( cpu_has_vmx );
+    vcpu_enter_write_data(v);
+    vcpu_enter_adjust_traps(v);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-x86/vm_event.h b/xen/include/asm-x86/vm_event.h
index 6fb3b58..c4b5def 100644
--- a/xen/include/asm-x86/vm_event.h
+++ b/xen/include/asm-x86/vm_event.h
@@ -43,10 +43,7 @@ void vm_event_set_registers(struct vcpu *v, vm_event_response_t *rsp);
 
 void vm_event_fill_regs(vm_event_request_t *req);
 
-static inline void arch_vm_event_vcpu_enter(struct vcpu *v)
-{
-    /* Nothing to do. */
-}
+void arch_vm_event_vcpu_enter(struct vcpu *v);
 
 /*
  * Monitor vm-events.
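
For context: the vm_event_vcpu_enter(v) call left in hvm_do_resume() above is
the common entry point introduced earlier in this series. A minimal sketch of
how such an entry point would dispatch into the per-arch hook is shown below;
the wrapper body and its placement (e.g. as a common static inline) are
assumptions for illustration, and only the call into arch_vm_event_vcpu_enter()
is implied by this patch.

/* Illustrative sketch only -- not part of the applied patch. */
#include <xen/sched.h>       /* struct vcpu */
#include <asm/vm_event.h>    /* arch_vm_event_vcpu_enter() */

/*
 * Invoked on the resume-to-guest path (e.g. from hvm_do_resume()); all
 * architecture-specific work -- applying deferred register writes and
 * re-syncing trap settings -- is delegated to the arch hook.
 */
static inline void vm_event_vcpu_enter(struct vcpu *v)
{
    arch_vm_event_vcpu_enter(v);
}

Keeping the wrapper this thin is what lets hvm_do_resume() stay arch-neutral
while the x86-specific write-back and CR3 load-exiting adjustments live in
xen/arch/x86/vm_event.c.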