From patchwork Tue Sep 13 18:12:23 2016
X-Patchwork-Submitter: Tamas Lengyel
X-Patchwork-Id: 9329755
From: Tamas K Lengyel <tamas.lengyel@zentific.com>
To: xen-devel@lists.xenproject.org
Date: Tue, 13 Sep 2016 12:12:23 -0600
Message-Id: <20160913181223.1459-2-tamas.lengyel@zentific.com>
X-Mailer: git-send-email 2.9.3
In-Reply-To: <20160913181223.1459-1-tamas.lengyel@zentific.com>
References: <20160913181223.1459-1-tamas.lengyel@zentific.com>
Cc: Kevin Tian, Stefano Stabellini, Jan Beulich, Razvan Cojocaru,
 George Dunlap, Tamas K Lengyel, Julien Grall, Paul Durrant,
 Jun Nakajima, Andrew Cooper
Subject: [Xen-devel] [PATCH 2/2] x86/vm_event: Allow returning i-cache for emulation

When emulating instructions, the emulator maintains a small i-cache fetched
from guest memory.
This patch extends the vm_event interface to allow returning this i-cache via
the vm_event response instead.

When responding to a SOFTWARE_BREAKPOINT event (INT3), the monitor subscriber
normally has to remove the INT3 from memory, singlestep, then place the INT3
back to allow the guest to continue execution. This routine, however, is
susceptible to a race condition on multi-vCPU guests. By allowing the
subscriber to return the i-cache to be used for emulation, it can side-step
the problem by returning a clean buffer without the INT3 present.

As part of this patch we rename hvm_mem_access_emulate_one to
hvm_emulate_one_vm_event to better reflect that it is now used in various
vm_event scenarios, not just in response to mem_access events.

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
---
Cc: Paul Durrant
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: Jun Nakajima
Cc: Kevin Tian
Cc: George Dunlap
Cc: Razvan Cojocaru
Cc: Stefano Stabellini
Cc: Julien Grall

Note: This patch has only been compile-tested.
---
 xen/arch/x86/hvm/emulate.c        | 44 ++++++++++++++++++++++++++-------------
 xen/arch/x86/hvm/hvm.c            |  9 +++++---
 xen/arch/x86/hvm/vmx/vmx.c        |  1 +
 xen/arch/x86/vm_event.c           |  9 +++++++-
 xen/common/vm_event.c             |  1 -
 xen/include/asm-x86/hvm/emulate.h |  8 ++++---
 xen/include/asm-x86/vm_event.h    |  3 ++-
 xen/include/public/vm_event.h     | 16 +++++++++++++-
 8 files changed, 67 insertions(+), 24 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index cc25676..504ed35 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -76,9 +76,9 @@ static int set_context_data(void *buffer, unsigned int size)
     if ( curr->arch.vm_event )
     {
         unsigned int safe_size =
-            min(size, curr->arch.vm_event->emul_read_data.size);
+            min(size, curr->arch.vm_event->emul_read.size);

-        memcpy(buffer, curr->arch.vm_event->emul_read_data.data, safe_size);
+        memcpy(buffer, curr->arch.vm_event->emul_read.data, safe_size);
         memset(buffer + safe_size, 0, size - safe_size);
         return X86EMUL_OKAY;
     }
@@ -827,7 +827,7 @@ static int hvmemul_read(
     struct hvm_emulate_ctxt *hvmemul_ctxt =
         container_of(ctxt, struct hvm_emulate_ctxt, ctxt);

-    if ( unlikely(hvmemul_ctxt->set_context) )
+    if ( unlikely(hvmemul_ctxt->set_context_data) )
         return set_context_data(p_data, bytes);

     return __hvmemul_read(
@@ -1029,7 +1029,7 @@ static int hvmemul_cmpxchg(
     struct hvm_emulate_ctxt *hvmemul_ctxt =
         container_of(ctxt, struct hvm_emulate_ctxt, ctxt);

-    if ( unlikely(hvmemul_ctxt->set_context) )
+    if ( unlikely(hvmemul_ctxt->set_context_data) )
     {
         int rc = set_context_data(p_new, bytes);
@@ -1122,7 +1122,7 @@ static int hvmemul_rep_outs(
     p2m_type_t p2mt;
     int rc;

-    if ( unlikely(hvmemul_ctxt->set_context) )
+    if ( unlikely(hvmemul_ctxt->set_context_data) )
         return hvmemul_rep_outs_set_context(src_seg, src_offset, dst_port,
                                             bytes_per_rep, reps, ctxt);
@@ -1264,7 +1264,7 @@ static int hvmemul_rep_movs(
     if ( buf == NULL )
         return X86EMUL_UNHANDLEABLE;

-    if ( unlikely(hvmemul_ctxt->set_context) )
+    if ( unlikely(hvmemul_ctxt->set_context_data) )
     {
         rc = set_context_data(buf, bytes);
@@ -1470,7 +1470,7 @@ static int hvmemul_read_io(
     *val = 0;

-    if ( unlikely(hvmemul_ctxt->set_context) )
+    if ( unlikely(hvmemul_ctxt->set_context_data) )
         return set_context_data(val, bytes);

     return hvmemul_do_pio_buffer(port, bytes, IOREQ_READ, val);
@@ -1793,7 +1793,14 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,
         pfec |= PFEC_user_mode;

     hvmemul_ctxt->insn_buf_eip = regs->eip;
-    if ( !vio->mmio_insn_bytes )
+
+    if ( unlikely(hvmemul_ctxt->set_context_insn) && curr->arch.vm_event )
+    {
+        hvmemul_ctxt->insn_buf_bytes = sizeof(curr->arch.vm_event->emul_insn);
+        memcpy(hvmemul_ctxt->insn_buf, &curr->arch.vm_event->emul_insn,
+               hvmemul_ctxt->insn_buf_bytes);
+    }
+    else if ( !vio->mmio_insn_bytes )
     {
         hvmemul_ctxt->insn_buf_bytes =
             hvm_get_insn_bytes(curr, hvmemul_ctxt->insn_buf) ?:
@@ -1931,7 +1938,7 @@ int hvm_emulate_one_mmio(unsigned long mfn, unsigned long gla)
     return rc;
 }

-void hvm_mem_access_emulate_one(enum emul_kind kind, unsigned int trapnr,
+void hvm_emulate_one_vm_event(enum emul_kind kind, unsigned int trapnr,
     unsigned int errcode)
 {
     struct hvm_emulate_ctxt ctx = {{ 0 }};
@@ -1944,11 +1951,19 @@ void hvm_mem_access_emulate_one(enum emul_kind kind, unsigned int trapnr,
     case EMUL_KIND_NOWRITE:
         rc = hvm_emulate_one_no_write(&ctx);
         break;
-    case EMUL_KIND_SET_CONTEXT:
-        ctx.set_context = 1;
-        /* Intentional fall-through. */
-    default:
+    case EMUL_KIND_SET_CONTEXT_DATA:
+        ctx.set_context_data = 1;
+        rc = hvm_emulate_one(&ctx);
+        break;
+    case EMUL_KIND_SET_CONTEXT_INSN:
+        ctx.set_context_insn = 1;
         rc = hvm_emulate_one(&ctx);
+        break;
+    case EMUL_KIND_NORMAL:
+        rc = hvm_emulate_one(&ctx);
+        break;
+    default:
+        return;
     }

     switch ( rc )
@@ -1983,7 +1998,8 @@ void hvm_emulate_prepare(
     hvmemul_ctxt->ctxt.force_writeback = 1;
     hvmemul_ctxt->seg_reg_accessed = 0;
     hvmemul_ctxt->seg_reg_dirty = 0;
-    hvmemul_ctxt->set_context = 0;
+    hvmemul_ctxt->set_context_data = 0;
+    hvmemul_ctxt->set_context_insn = 0;
     hvmemul_get_seg_reg(x86_seg_cs, hvmemul_ctxt);
     hvmemul_get_seg_reg(x86_seg_ss, hvmemul_ctxt);
 }
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index ca96643..7462794 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -489,13 +489,16 @@ void hvm_do_resume(struct vcpu *v)

             if ( v->arch.vm_event->emulate_flags &
                  VM_EVENT_FLAG_SET_EMUL_READ_DATA )
-                kind = EMUL_KIND_SET_CONTEXT;
+                kind = EMUL_KIND_SET_CONTEXT_DATA;
             else if ( v->arch.vm_event->emulate_flags &
                       VM_EVENT_FLAG_EMULATE_NOWRITE )
                 kind = EMUL_KIND_NOWRITE;
+            else if ( v->arch.vm_event->emulate_flags &
+                      VM_EVENT_FLAG_SET_EMUL_INSN_DATA )
+                kind = EMUL_KIND_SET_CONTEXT_INSN;

-            hvm_mem_access_emulate_one(kind, TRAP_invalid_op,
-                                       HVM_DELIVER_NO_ERROR_CODE);
+            hvm_emulate_one_vm_event(kind, TRAP_invalid_op,
+                                     HVM_DELIVER_NO_ERROR_CODE);

             v->arch.vm_event->emulate_flags = 0;
         }
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 2759e6f..d214716 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -57,6 +57,7 @@
 #include
 #include
 #include
+#include
 #include

 static bool_t __initdata opt_force_ept;
diff --git a/xen/arch/x86/vm_event.c b/xen/arch/x86/vm_event.c
index 343b9c8..03beed3 100644
--- a/xen/arch/x86/vm_event.c
+++ b/xen/arch/x86/vm_event.c
@@ -209,11 +209,18 @@ void vm_event_emulate_check(struct vcpu *v, vm_event_response_t *rsp)
         if ( p2m_mem_access_emulate_check(v, rsp) )
         {
             if ( rsp->flags & VM_EVENT_FLAG_SET_EMUL_READ_DATA )
-                v->arch.vm_event->emul_read_data = rsp->data.emul_read_data;
+                v->arch.vm_event->emul_read = rsp->data.emul.read;

             v->arch.vm_event->emulate_flags = rsp->flags;
         }
         break;
+    case VM_EVENT_REASON_SOFTWARE_BREAKPOINT:
+        if ( rsp->flags & VM_EVENT_FLAG_SET_EMUL_INSN_DATA )
+        {
+            v->arch.vm_event->emul_insn = rsp->data.emul.insn;
+            v->arch.vm_event->emulate_flags = rsp->flags;
+        }
+        break;
     default:
         break;
     };
diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index 907ab40..d8ee7f3 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -398,7 +398,6 @@ void vm_event_resume(struct domain *d, struct vm_event_domain *ved)
      * In some cases the response type needs extra handling, so here
      * we call the appropriate handlers.
      */
-
     /* Check flags which apply only when the vCPU is paused */
     if ( atomic_read(&v->vm_event_pause_count) )
     {
diff --git a/xen/include/asm-x86/hvm/emulate.h b/xen/include/asm-x86/hvm/emulate.h
index 3aabcbe..b52f99e 100644
--- a/xen/include/asm-x86/hvm/emulate.h
+++ b/xen/include/asm-x86/hvm/emulate.h
@@ -34,20 +34,22 @@ struct hvm_emulate_ctxt {
     uint32_t intr_shadow;

-    bool_t set_context;
+    bool_t set_context_data;
+    bool_t set_context_insn;
 };

 enum emul_kind {
     EMUL_KIND_NORMAL,
     EMUL_KIND_NOWRITE,
-    EMUL_KIND_SET_CONTEXT
+    EMUL_KIND_SET_CONTEXT_DATA,
+    EMUL_KIND_SET_CONTEXT_INSN
 };

 int hvm_emulate_one(
     struct hvm_emulate_ctxt *hvmemul_ctxt);
 int hvm_emulate_one_no_write(
     struct hvm_emulate_ctxt *hvmemul_ctxt);
-void hvm_mem_access_emulate_one(enum emul_kind kind,
+void hvm_emulate_one_vm_event(enum emul_kind kind,
     unsigned int trapnr,
     unsigned int errcode);

 void hvm_emulate_prepare(
diff --git a/xen/include/asm-x86/vm_event.h b/xen/include/asm-x86/vm_event.h
index ebb5d88..a672784 100644
--- a/xen/include/asm-x86/vm_event.h
+++ b/xen/include/asm-x86/vm_event.h
@@ -27,7 +27,8 @@
  */
 struct arch_vm_event {
     uint32_t emulate_flags;
-    struct vm_event_emul_read_data emul_read_data;
+    struct vm_event_emul_read_data emul_read;
+    struct vm_event_emul_insn_data emul_insn;
     struct monitor_write_data write_data;
 };

diff --git a/xen/include/public/vm_event.h b/xen/include/public/vm_event.h
index f756126..ef62932 100644
--- a/xen/include/public/vm_event.h
+++ b/xen/include/public/vm_event.h
@@ -97,6 +97,13 @@
 * Requires the vCPU to be paused already (synchronous events only).
 */
#define VM_EVENT_FLAG_SET_REGISTERS      (1 << 8)
+/*
+ * Instruction cache is being sent back to the hypervisor in the event response
+ * to be used by the emulator. This flag is only useful when combined with
+ * VM_EVENT_FLAG_EMULATE and is incompatible with also setting
+ * VM_EVENT_FLAG_EMULATE_NOWRITE or VM_EVENT_FLAG_SET_EMUL_READ_DATA.
+ */
+#define VM_EVENT_FLAG_SET_EMUL_INSN_DATA (1 << 9)

/*
 * Reasons for the vm event request
@@ -265,6 +272,10 @@ struct vm_event_emul_read_data {
    uint8_t data[sizeof(struct vm_event_regs_x86) - sizeof(uint32_t)];
};

+struct vm_event_emul_insn_data {
+    uint8_t data[16]; /* Has to be completely filled */
+};
+
typedef struct vm_event_st {
    uint32_t version;   /* VM_EVENT_INTERFACE_VERSION */
    uint32_t flags;     /* VM_EVENT_FLAG_* */
@@ -291,7 +302,10 @@ typedef struct vm_event_st {
            struct vm_event_regs_arm arm;
        } regs;

-        struct vm_event_emul_read_data emul_read_data;
+        union {
+            struct vm_event_emul_read_data read;
+            struct vm_event_emul_insn_data insn;
+        } emul;
    } data;
} vm_event_request_t, vm_event_response_t;