From patchwork Thu Dec 1 16:55:59 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Andrew Cooper
X-Patchwork-Id: 9456537
From: Andrew Cooper
To: Xen-devel
Date: Thu, 1 Dec 2016 16:55:59 +0000
Message-ID: <1480611361-15294-4-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1480611361-15294-1-git-send-email-andrew.cooper3@citrix.com>
References: <1480611361-15294-1-git-send-email-andrew.cooper3@citrix.com>
Cc: Andrew Cooper, Jan Beulich
Subject: [Xen-devel] [PATCH v4 11/24] x86/emul: Implement singlestep as a retire flag

The behaviour of singlestep is to raise #DB after the instruction has been
completed, but implementing it with inject_hw_exception() causes
x86_emulate() to return X86EMUL_EXCEPTION, despite successfully completing
execution of the instruction, including register writeback.

Instead, use a retire flag to indicate singlestep, which causes x86_emulate()
to return X86EMUL_OKAY.

Update all callers of x86_emulate() to use the new retire flag.  This fixes
the behaviour of singlestep for shadow pagetable updates and mmcfg/mmio_ro
intercepts, which previously discarded the exception.

With this change, all uses of X86EMUL_EXCEPTION from x86_emulate() are
believed to have strictly fault semantics.

Signed-off-by: Andrew Cooper
Reviewed-by: Paul Durrant
Acked-by: Tim Deegan
Reviewed-by: Jan Beulich
---
CC: Jan Beulich

v4:
 * s/is_hvm_vcpu/has_hvm_container_domain/
 * Adjust comments and entry condition into the PAE second-half case.

v3:
 * New
---
 xen/arch/x86/hvm/emulate.c             |  3 +++
 xen/arch/x86/mm.c                      | 11 ++++++++++-
 xen/arch/x86/mm/shadow/multi.c         | 35 ++++++++++++++++++++++++++++++----
 xen/arch/x86/x86_emulate/x86_emulate.c |  9 ++++-----
 xen/arch/x86/x86_emulate/x86_emulate.h |  6 ++++++
 5 files changed, 54 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index fe62500..91c79fa 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -1788,6 +1788,9 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,
     if ( rc != X86EMUL_OKAY )
         return rc;
 
+    if ( hvmemul_ctxt->ctxt.retire.singlestep )
+        hvm_inject_hw_exception(TRAP_debug, X86_EVENT_NO_EC);
+
     new_intr_shadow = hvmemul_ctxt->intr_shadow;
 
     /* MOV-SS instruction toggles MOV-SS shadow, else we just clear it. */
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index b7c7122..231c7bf 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5382,6 +5382,9 @@ int ptwr_do_page_fault(struct vcpu *v, unsigned long addr,
     if ( rc == X86EMUL_UNHANDLEABLE )
         goto bail;
 
+    if ( ptwr_ctxt.ctxt.retire.singlestep )
+        pv_inject_hw_exception(TRAP_debug, X86_EVENT_NO_EC);
+
     perfc_incr(ptwr_emulations);
     return EXCRET_fault_fixed;
 
@@ -5503,7 +5506,13 @@ int mmio_ro_do_page_fault(struct vcpu *v, unsigned long addr,
     else
         rc = x86_emulate(&ctxt, &mmio_ro_emulate_ops);
 
-    return rc != X86EMUL_UNHANDLEABLE ? EXCRET_fault_fixed : 0;
+    if ( rc == X86EMUL_UNHANDLEABLE )
+        return 0;
+
+    if ( ctxt.retire.singlestep )
+        pv_inject_hw_exception(TRAP_debug, X86_EVENT_NO_EC);
+
+    return EXCRET_fault_fixed;
 }
 
 void *alloc_xen_pagetable(void)
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 9ee48a8..eac2330 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -3422,18 +3422,36 @@ static int sh_page_fault(struct vcpu *v,
     v->arch.paging.last_write_emul_ok = 0;
 #endif
 
+    if ( emul_ctxt.ctxt.retire.singlestep )
+    {
+        if ( has_hvm_container_domain(d) )
+            hvm_inject_hw_exception(TRAP_debug, X86_EVENT_NO_EC);
+        else
+            pv_inject_hw_exception(TRAP_debug, X86_EVENT_NO_EC);
+    }
+
 #if GUEST_PAGING_LEVELS == 3 /* PAE guest */
-    if ( r == X86EMUL_OKAY ) {
+    /*
+     * If there are no pending actions, emulate up to four extra instructions
+     * in the hope of catching the "second half" of a 64-bit pagetable write.
+     */
+    if ( r == X86EMUL_OKAY && !emul_ctxt.ctxt.retire.raw )
+    {
         int i, emulation_count=0;
         this_cpu(trace_emulate_initial_va) = va;
-        /* Emulate up to four extra instructions in the hope of catching
-         * the "second half" of a 64-bit pagetable write. */
+
         for ( i = 0 ; i < 4 ; i++ )
         {
             shadow_continue_emulation(&emul_ctxt, regs);
             v->arch.paging.last_write_was_pt = 0;
             r = x86_emulate(&emul_ctxt.ctxt, emul_ops);
-            if ( r == X86EMUL_OKAY )
+
+            /*
+             * Only continue the search for the second half if there are no
+             * exceptions or pending actions.  Otherwise, give up and re-enter
+             * the guest.
+             */
+            if ( r == X86EMUL_OKAY && !emul_ctxt.ctxt.retire.raw )
             {
                 emulation_count++;
                 if ( v->arch.paging.last_write_was_pt )
@@ -3449,6 +3467,15 @@ static int sh_page_fault(struct vcpu *v,
             {
                 perfc_incr(shadow_em_ex_fail);
                 TRACE_SHADOW_PATH_FLAG(TRCE_SFLAG_EMULATION_LAST_FAILED);
+
+                if ( emul_ctxt.ctxt.retire.singlestep )
+                {
+                    if ( has_hvm_container_domain(d) )
+                        hvm_inject_hw_exception(TRAP_debug, X86_EVENT_NO_EC);
+                    else
+                        pv_inject_hw_exception(TRAP_debug, X86_EVENT_NO_EC);
+                }
+
                 break; /* Don't emulate again if we failed! */
             }
         }
diff --git a/xen/arch/x86/x86_emulate/x86_emulate.c b/xen/arch/x86/x86_emulate/x86_emulate.c
index fa8d98f..7bc1cd9 100644
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -2415,7 +2415,6 @@ x86_emulate(
     struct x86_emulate_state state;
     int rc;
     uint8_t b, d;
-    bool tf = ctxt->regs->eflags & EFLG_TF;
     struct operand src = { .reg = PTR_POISON };
     struct operand dst = { .reg = PTR_POISON };
     enum x86_swint_type swint_type;
@@ -5413,11 +5412,11 @@ x86_emulate(
     if ( !mode_64bit() )
         _regs.eip = (uint32_t)_regs.eip;
 
-    *ctxt->regs = _regs;
+    /* Was singlestepping active at the start of this instruction? */
+    if ( (rc == X86EMUL_OKAY) && (ctxt->regs->eflags & EFLG_TF) )
+        ctxt->retire.singlestep = true;
 
-    /* Inject #DB if single-step tracing was enabled at instruction start. */
-    if ( tf && (rc == X86EMUL_OKAY) && ops->inject_hw_exception )
-        rc = ops->inject_hw_exception(EXC_DB, -1, ctxt) ? : X86EMUL_EXCEPTION;
+    *ctxt->regs = _regs;
 
  done:
     _put_fpu();
diff --git a/xen/arch/x86/x86_emulate/x86_emulate.h b/xen/arch/x86/x86_emulate/x86_emulate.h
index f4bcf36..5a4f9b7 100644
--- a/xen/arch/x86/x86_emulate/x86_emulate.h
+++ b/xen/arch/x86/x86_emulate/x86_emulate.h
@@ -483,6 +483,7 @@ struct x86_emulate_ctxt
             bool hlt:1;          /* Instruction HLTed. */
             bool mov_ss:1;       /* Instruction sets MOV-SS irq shadow. */
             bool sti:1;          /* Instruction sets STI irq shadow. */
+            bool singlestep:1;   /* Singlestepping was active. */
         };
     } retire;
 };
@@ -572,12 +573,17 @@ static inline int x86_emulate_wrapper(
     struct x86_emulate_ctxt *ctxt,
     const struct x86_emulate_ops *ops)
 {
+    unsigned long orig_eip = ctxt->regs->eip;
     int rc = x86_emulate(ctxt, ops);
 
     /* Retire flags should only be set for successful instruction emulation. */
     if ( rc != X86EMUL_OKAY )
         ASSERT(ctxt->retire.raw == 0);
 
+    /* All cases returning X86EMUL_EXCEPTION should have fault semantics. */
+    if ( rc == X86EMUL_EXCEPTION )
+        ASSERT(ctxt->regs->eip == orig_eip);
+
     return rc;
 }
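
The commit message above describes the core idea: singlestep is reported to
callers as a retire flag alongside X86EMUL_OKAY, and each caller injects the
trap-style #DB itself after the instruction has retired.  The following is a
rough, self-contained sketch of that caller-side contract; the types, the
EFLG_TF value, and the emulate_one() stub are simplified stand-ins for
illustration only, not Xen's real emulator interface.

/*
 * Standalone model of the "retire flag" pattern; names and types are
 * simplified stand-ins, not Xen's real emulator interface.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define EFLG_TF (1u << 8)            /* Trap flag bit in EFLAGS. */

enum { EMUL_OKAY, EMUL_EXCEPTION, EMUL_UNHANDLEABLE };

struct emul_ctxt {
    uint32_t eflags;
    union {
        struct {
            bool singlestep:1;       /* Set on retire, checked by the caller. */
        };
        uint8_t raw;                 /* Non-zero => some action is pending. */
    } retire;
};

/*
 * Emulate one instruction.  On success, latch TF into a retire flag rather
 * than turning a completed instruction into an "exception" return value.
 */
static int emulate_one(struct emul_ctxt *ctxt)
{
    bool tf = ctxt->eflags & EFLG_TF;

    /* ... decode, execute, write back register state ... */

    if ( tf )
        ctxt->retire.singlestep = true;

    return EMUL_OKAY;                /* Instruction completed successfully. */
}

int main(void)
{
    struct emul_ctxt ctxt = { .eflags = EFLG_TF };
    int rc = emulate_one(&ctxt);

    /*
     * Caller's responsibility: the instruction retired, so deliver the
     * trap-style #DB separately instead of failing the emulation.
     */
    assert(rc == EMUL_OKAY);
    if ( rc == EMUL_OKAY && ctxt.retire.singlestep )
        printf("inject #DB (trap semantics, after retire)\n");

    return 0;
}

This mirrors the distinction the patch relies on: the singlestep #DB has trap
semantics (delivered after the instruction completes and state is written
back), whereas anything still returned as X86EMUL_EXCEPTION is expected to
have fault semantics, which the new assertion in x86_emulate_wrapper() checks.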