From patchwork Wed Nov 23 15:38:44 2016
X-Patchwork-Submitter: Andrew Cooper
X-Patchwork-Id: 9443657
From: Andrew Cooper
To: Xen-devel
Date: Wed, 23 Nov 2016 15:38:44 +0000
Message-ID: <1479915538-15282-2-git-send-email-andrew.cooper3@citrix.com>
In-Reply-To: <1479915538-15282-1-git-send-email-andrew.cooper3@citrix.com>
References: <1479915538-15282-1-git-send-email-andrew.cooper3@citrix.com>
Cc: Kevin Tian, Wei Liu, Jan Beulich, Andrew Cooper, Paul Durrant,
    Jun Nakajima, Boris Ostrovsky, Suravee Suthikulpanit
Subject: [Xen-devel] [PATCH 01/15] x86/hvm: Rename hvm_emulate_init() and hvm_emulate_prepare() for clarity
List-Id: Xen developer discussion
* Move hvm_emulate_init() to immediately after hvm_emulate_prepare(), as
  they are very closely related.
* Rename hvm_emulate_prepare() to hvm_emulate_init_once() and
  hvm_emulate_init() to hvm_emulate_init_per_insn() to make it clearer
  how and when to use them.

No functional change.

Signed-off-by: Andrew Cooper
Reviewed-by: Paul Durrant
Reviewed-by: Wei Liu
Reviewed-by: Jan Beulich
Reviewed-by: Boris Ostrovsky
Reviewed-by: Kevin Tian
---
CC: Jan Beulich
CC: Paul Durrant
CC: Jun Nakajima
CC: Kevin Tian
CC: Boris Ostrovsky
CC: Suravee Suthikulpanit
CC: Wei Liu

As hvm_emulate_prepare() was new in 4.8, it would be a good idea to take
this patch to avoid future confusion on the stable-4.8 branch.
---
 xen/arch/x86/hvm/emulate.c        | 111 +++++++++++++++++++-------------------
 xen/arch/x86/hvm/hvm.c            |   2 +-
 xen/arch/x86/hvm/io.c             |   2 +-
 xen/arch/x86/hvm/ioreq.c          |   2 +-
 xen/arch/x86/hvm/svm/emulate.c    |   4 +-
 xen/arch/x86/hvm/vmx/realmode.c   |   2 +-
 xen/include/asm-x86/hvm/emulate.h |   6 ++-
 7 files changed, 66 insertions(+), 63 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index e9b8f8c..3ab0e8e 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -1755,57 +1755,6 @@ static const struct x86_emulate_ops hvm_emulate_ops_no_write = {
     .vmfunc = hvmemul_vmfunc,
 };
 
-void hvm_emulate_init(
-    struct hvm_emulate_ctxt *hvmemul_ctxt,
-    const unsigned char *insn_buf,
-    unsigned int insn_bytes)
-{
-    struct vcpu *curr = current;
-    unsigned int pfec = PFEC_page_present;
-    unsigned long addr;
-
-    if ( hvm_long_mode_enabled(curr) &&
-         hvmemul_ctxt->seg_reg[x86_seg_cs].attr.fields.l )
-    {
-        hvmemul_ctxt->ctxt.addr_size = hvmemul_ctxt->ctxt.sp_size = 64;
-    }
-    else
-    {
-        hvmemul_ctxt->ctxt.addr_size =
-            hvmemul_ctxt->seg_reg[x86_seg_cs].attr.fields.db ? 32 : 16;
-        hvmemul_ctxt->ctxt.sp_size =
-            hvmemul_ctxt->seg_reg[x86_seg_ss].attr.fields.db ? 32 : 16;
-    }
-
-    if ( hvmemul_ctxt->seg_reg[x86_seg_ss].attr.fields.dpl == 3 )
-        pfec |= PFEC_user_mode;
-
-    hvmemul_ctxt->insn_buf_eip = hvmemul_ctxt->ctxt.regs->eip;
-    if ( !insn_bytes )
-    {
-        hvmemul_ctxt->insn_buf_bytes =
-            hvm_get_insn_bytes(curr, hvmemul_ctxt->insn_buf) ?:
-            (hvm_virtual_to_linear_addr(x86_seg_cs,
-                                        &hvmemul_ctxt->seg_reg[x86_seg_cs],
-                                        hvmemul_ctxt->insn_buf_eip,
-                                        sizeof(hvmemul_ctxt->insn_buf),
-                                        hvm_access_insn_fetch,
-                                        hvmemul_ctxt->ctxt.addr_size,
-                                        &addr) &&
-             hvm_fetch_from_guest_virt_nofault(hvmemul_ctxt->insn_buf, addr,
-                                               sizeof(hvmemul_ctxt->insn_buf),
-                                               pfec) == HVMCOPY_okay) ?
-            sizeof(hvmemul_ctxt->insn_buf) : 0;
-    }
-    else
-    {
-        hvmemul_ctxt->insn_buf_bytes = insn_bytes;
-        memcpy(hvmemul_ctxt->insn_buf, insn_buf, insn_bytes);
-    }
-
-    hvmemul_ctxt->exn_pending = 0;
-}
-
 static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,
     const struct x86_emulate_ops *ops)
 {
@@ -1815,7 +1764,8 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,
     struct hvm_vcpu_io *vio = &curr->arch.hvm_vcpu.hvm_io;
     int rc;
 
-    hvm_emulate_init(hvmemul_ctxt, vio->mmio_insn, vio->mmio_insn_bytes);
+    hvm_emulate_init_per_insn(hvmemul_ctxt, vio->mmio_insn,
+                              vio->mmio_insn_bytes);
 
     vio->mmio_retry = 0;
 
@@ -1915,7 +1865,7 @@ int hvm_emulate_one_mmio(unsigned long mfn, unsigned long gla)
     else
         ops = &hvm_ro_emulate_ops_mmio;
 
-    hvm_emulate_prepare(&ctxt, guest_cpu_user_regs());
+    hvm_emulate_init_once(&ctxt, guest_cpu_user_regs());
     ctxt.ctxt.data = &mmio_ro_ctxt;
     rc = _hvm_emulate_one(&ctxt, ops);
     switch ( rc )
@@ -1940,7 +1890,7 @@ void hvm_emulate_one_vm_event(enum emul_kind kind, unsigned int trapnr,
     struct hvm_emulate_ctxt ctx = {{ 0 }};
     int rc;
 
-    hvm_emulate_prepare(&ctx, guest_cpu_user_regs());
+    hvm_emulate_init_once(&ctx, guest_cpu_user_regs());
 
     switch ( kind )
     {
@@ -1992,7 +1942,7 @@ void hvm_emulate_one_vm_event(enum emul_kind kind, unsigned int trapnr,
     hvm_emulate_writeback(&ctx);
 }
 
-void hvm_emulate_prepare(
+void hvm_emulate_init_once(
     struct hvm_emulate_ctxt *hvmemul_ctxt,
     struct cpu_user_regs *regs)
 {
@@ -2006,6 +1956,57 @@ void hvm_emulate_prepare(
     hvmemul_get_seg_reg(x86_seg_ss, hvmemul_ctxt);
 }
 
+void hvm_emulate_init_per_insn(
+    struct hvm_emulate_ctxt *hvmemul_ctxt,
+    const unsigned char *insn_buf,
+    unsigned int insn_bytes)
+{
+    struct vcpu *curr = current;
+    unsigned int pfec = PFEC_page_present;
+    unsigned long addr;
+
+    if ( hvm_long_mode_enabled(curr) &&
+         hvmemul_ctxt->seg_reg[x86_seg_cs].attr.fields.l )
+    {
+        hvmemul_ctxt->ctxt.addr_size = hvmemul_ctxt->ctxt.sp_size = 64;
+    }
+    else
+    {
+        hvmemul_ctxt->ctxt.addr_size =
+            hvmemul_ctxt->seg_reg[x86_seg_cs].attr.fields.db ? 32 : 16;
+        hvmemul_ctxt->ctxt.sp_size =
+            hvmemul_ctxt->seg_reg[x86_seg_ss].attr.fields.db ? 32 : 16;
+    }
+
+    if ( hvmemul_ctxt->seg_reg[x86_seg_ss].attr.fields.dpl == 3 )
+        pfec |= PFEC_user_mode;
+
+    hvmemul_ctxt->insn_buf_eip = hvmemul_ctxt->ctxt.regs->eip;
+    if ( !insn_bytes )
+    {
+        hvmemul_ctxt->insn_buf_bytes =
+            hvm_get_insn_bytes(curr, hvmemul_ctxt->insn_buf) ?:
+            (hvm_virtual_to_linear_addr(x86_seg_cs,
+                                        &hvmemul_ctxt->seg_reg[x86_seg_cs],
+                                        hvmemul_ctxt->insn_buf_eip,
+                                        sizeof(hvmemul_ctxt->insn_buf),
+                                        hvm_access_insn_fetch,
+                                        hvmemul_ctxt->ctxt.addr_size,
+                                        &addr) &&
+             hvm_fetch_from_guest_virt_nofault(hvmemul_ctxt->insn_buf, addr,
+                                               sizeof(hvmemul_ctxt->insn_buf),
+                                               pfec) == HVMCOPY_okay) ?
+            sizeof(hvmemul_ctxt->insn_buf) : 0;
+    }
+    else
+    {
+        hvmemul_ctxt->insn_buf_bytes = insn_bytes;
+        memcpy(hvmemul_ctxt->insn_buf, insn_buf, insn_bytes);
+    }
+
+    hvmemul_ctxt->exn_pending = 0;
+}
+
 void hvm_emulate_writeback(
     struct hvm_emulate_ctxt *hvmemul_ctxt)
 {
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index f76dd90..25dc759 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4058,7 +4058,7 @@ void hvm_ud_intercept(struct cpu_user_regs *regs)
 {
     struct hvm_emulate_ctxt ctxt;
 
-    hvm_emulate_prepare(&ctxt, regs);
+    hvm_emulate_init_once(&ctxt, regs);
 
     if ( opt_hvm_fep )
     {
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index 1e7a5f9..7305801 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -87,7 +87,7 @@ int handle_mmio(void)
 
     ASSERT(!is_pvh_vcpu(curr));
 
-    hvm_emulate_prepare(&ctxt, guest_cpu_user_regs());
+    hvm_emulate_init_once(&ctxt, guest_cpu_user_regs());
 
     rc = hvm_emulate_one(&ctxt);
 
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index d2245e2..88071ab 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -167,7 +167,7 @@ bool_t handle_hvm_io_completion(struct vcpu *v)
     {
         struct hvm_emulate_ctxt ctxt;
 
-        hvm_emulate_prepare(&ctxt, guest_cpu_user_regs());
+        hvm_emulate_init_once(&ctxt, guest_cpu_user_regs());
         vmx_realmode_emulate_one(&ctxt);
         hvm_emulate_writeback(&ctxt);
 
diff --git a/xen/arch/x86/hvm/svm/emulate.c b/xen/arch/x86/hvm/svm/emulate.c
index a5545ea..9cdbe9e 100644
--- a/xen/arch/x86/hvm/svm/emulate.c
+++ b/xen/arch/x86/hvm/svm/emulate.c
@@ -107,8 +107,8 @@ int __get_instruction_length_from_list(struct vcpu *v,
 #endif
 
     ASSERT(v == current);
-    hvm_emulate_prepare(&ctxt, guest_cpu_user_regs());
-    hvm_emulate_init(&ctxt, NULL, 0);
+    hvm_emulate_init_once(&ctxt, guest_cpu_user_regs());
+    hvm_emulate_init_per_insn(&ctxt, NULL, 0);
     state = x86_decode_insn(&ctxt.ctxt, hvmemul_insn_fetch);
     if ( IS_ERR_OR_NULL(state) )
         return 0;
diff --git a/xen/arch/x86/hvm/vmx/realmode.c b/xen/arch/x86/hvm/vmx/realmode.c
index e83a61f..9002638 100644
--- a/xen/arch/x86/hvm/vmx/realmode.c
+++ b/xen/arch/x86/hvm/vmx/realmode.c
@@ -179,7 +179,7 @@ void vmx_realmode(struct cpu_user_regs *regs)
     if ( intr_info & INTR_INFO_VALID_MASK )
         __vmwrite(VM_ENTRY_INTR_INFO, 0);
 
-    hvm_emulate_prepare(&hvmemul_ctxt, regs);
+    hvm_emulate_init_once(&hvmemul_ctxt, regs);
 
     /* Only deliver interrupts into emulated real mode. */
     if ( !(curr->arch.hvm_vcpu.guest_cr[0] & X86_CR0_PE) &&
diff --git a/xen/include/asm-x86/hvm/emulate.h b/xen/include/asm-x86/hvm/emulate.h
index f610673..d4186a2 100644
--- a/xen/include/asm-x86/hvm/emulate.h
+++ b/xen/include/asm-x86/hvm/emulate.h
@@ -51,10 +51,12 @@ int hvm_emulate_one_no_write(
 void hvm_emulate_one_vm_event(enum emul_kind kind, unsigned int trapnr,
                               unsigned int errcode);
 
-void hvm_emulate_prepare(
+/* Must be called once to set up hvmemul state. */
+void hvm_emulate_init_once(
     struct hvm_emulate_ctxt *hvmemul_ctxt,
     struct cpu_user_regs *regs);
-void hvm_emulate_init(
+/* Must be called once before each instruction emulated. */
+void hvm_emulate_init_per_insn(
     struct hvm_emulate_ctxt *hvmemul_ctxt,
     const unsigned char *insn_buf,
     unsigned int insn_bytes);