From patchwork Wed Jun 8 13:43:39 2016
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper
Date: Wed, 08 Jun 2016 07:43:39 -0600
Message-Id: <57583D2B02000078000F31CE@prv-mh.provo.novell.com>
In-Reply-To: <57583B4E02000078000F3198@prv-mh.provo.novell.com>
Subject: [Xen-devel] [PATCH 2/2] x86/HVM: re-order operations in hvm_ud_intercept()
X-Patchwork-Id: 9164691

Don't fetch CS
explicitly; leverage the fact that hvm_emulate_prepare() already does
(and that hvm_virtual_to_linear_addr() doesn't alter it). At the same
time, increase the length passed to hvm_virtual_to_linear_addr() by one:
there definitely needs to be at least one more opcode byte after the
signature, and we can avoid missing a wraparound case this way.

Signed-off-by: Jan Beulich <JBeulich@suse.com>
Reviewed-by: Andrew Cooper

--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3834,19 +3834,20 @@ void hvm_ud_intercept(struct cpu_user_re
 {
     struct hvm_emulate_ctxt ctxt;
 
+    hvm_emulate_prepare(&ctxt, regs);
+
     if ( opt_hvm_fep )
     {
         struct vcpu *cur = current;
-        struct segment_register cs;
+        const struct segment_register *cs = &ctxt.seg_reg[x86_seg_cs];
         unsigned long addr;
         char sig[5]; /* ud2; .ascii "xen" */
 
-        hvm_get_segment_register(cur, x86_seg_cs, &cs);
-        if ( hvm_virtual_to_linear_addr(x86_seg_cs, &cs, regs->eip,
-                                        sizeof(sig), hvm_access_insn_fetch,
+        if ( hvm_virtual_to_linear_addr(x86_seg_cs, cs, regs->eip,
+                                        sizeof(sig) + 1, hvm_access_insn_fetch,
                                         (hvm_long_mode_enabled(cur) &&
-                                         cs.attr.fields.l) ? 64 :
-                                        cs.attr.fields.db ? 32 : 16, &addr) &&
+                                         cs->attr.fields.l) ? 64 :
+                                        cs->attr.fields.db ? 32 : 16, &addr) &&
             (hvm_fetch_from_guest_virt_nofault(sig, addr, sizeof(sig),
                                                0) == HVMCOPY_okay) &&
             (memcmp(sig, "\xf\xbxen", sizeof(sig)) == 0) )
@@ -3856,8 +3857,6 @@ void hvm_ud_intercept(struct cpu_user_re
         }
     }
 
-    hvm_emulate_prepare(&ctxt, regs);
-
     switch ( hvm_emulate_one(&ctxt) )
     {
     case X86EMUL_UNHANDLEABLE: