From patchwork Thu Sep 22 18:54:20 2016
X-Patchwork-Submitter: Tamas Lengyel
X-Patchwork-Id: 9346603
From: Tamas K Lengyel <tamas.lengyel@zentific.com>
To: xen-devel@lists.xenproject.org
Date: Thu, 22 Sep 2016 12:54:20 -0600
Message-Id: <20160922185420.6100-1-tamas.lengyel@zentific.com>
X-Mailer: git-send-email 2.9.3
Cc: Kevin Tian, Stefano Stabellini, Jan Beulich, George Dunlap,
    Tamas K Lengyel, Julien Grall, Paul Durrant, Jun Nakajima,
    Andrew Cooper
Subject: [Xen-devel] [PATCH v4] x86/vm_event: Allow overwriting Xen's i-cache used for emulation
List-Id: Xen developer discussion

When emulating instructions, Xen's emulator maintains a small i-cache
fetched from guest memory. This patch extends the vm_event interface to
allow overwriting this i-cache via a buffer returned in the vm_event
response.

When responding to a SOFTWARE_BREAKPOINT event (INT3), the monitor
subscriber normally has to remove the INT3 from memory, singlestep, then
place the INT3 back to allow the guest to continue execution. This routine,
however, is susceptible to a race condition on multi-vCPU guests. By
allowing the subscriber to return the i-cache to be used for emulation, it
can side-step the problem by returning a clean buffer without the INT3
present.

As part of this patch we rename hvm_mem_access_emulate_one to
hvm_emulate_one_vm_event to better reflect that it is now used in various
vm_event scenarios, not just in response to mem_access events.
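For context (not part of the patch itself), a minimal monitor-side sketch of
how a subscriber might use this interface to answer a SOFTWARE_BREAKPOINT
event with a clean instruction buffer. The vm_event_response_t fields and
flag names are taken from this patch and the existing public vm_event ABI;
the function name, the orig_insn source buffer, and the include path are
illustrative assumptions only:

/*
 * Monitor-side sketch (illustrative, not part of this patch): respond to a
 * SOFTWARE_BREAKPOINT vm_event by asking Xen to emulate the breakpointed
 * instruction from a clean buffer, instead of lifting and re-arming the INT3.
 * orig_insn is assumed to hold the original instruction bytes the monitor
 * saved when it planted the breakpoint.
 */
#include <string.h>
#include <xen/vm_event.h>   /* illustrative path to the public header */

static void respond_with_clean_icache(const vm_event_request_t *req,
                                      vm_event_response_t *rsp,
                                      const uint8_t orig_insn[16])
{
    memset(rsp, 0, sizeof(*rsp));
    rsp->version = VM_EVENT_INTERFACE_VERSION;
    rsp->vcpu_id = req->vcpu_id;
    rsp->reason  = req->reason;  /* VM_EVENT_REASON_SOFTWARE_BREAKPOINT */

    /* Keep the pause flag so the vCPU is unpaused when the response lands. */
    rsp->flags  = req->flags & VM_EVENT_FLAG_VCPU_PAUSED;
    /* Emulate the current instruction, overriding Xen's i-cache. */
    rsp->flags |= VM_EVENT_FLAG_EMULATE | VM_EVENT_FLAG_SET_EMUL_INSN_DATA;

    /* The insn buffer has to be completely filled (16 bytes). */
    memcpy(rsp->data.emul.insn.data, orig_insn,
           sizeof(rsp->data.emul.insn.data));
}

The response would then be placed on the vm_event ring and the event channel
notified as usual.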
Signed-off-by: Tamas K Lengyel
Acked-by: Razvan Cojocaru
Reviewed-by: Jan Beulich
Reviewed-by: Paul Durrant
---
Cc: Paul Durrant
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: Jun Nakajima
Cc: Kevin Tian
Cc: George Dunlap
Cc: Stefano Stabellini
Cc: Julien Grall

v4: Copy insn buffer into mmio buffer to avoid more logic in hvm_emulate_one
    Add comment in hvm_do_resume to preserve order as described in vm_event.h
---
 xen/arch/x86/hvm/emulate.c        | 27 +++++++++++++++++++++------
 xen/arch/x86/hvm/hvm.c            | 13 ++++++++++---
 xen/arch/x86/vm_event.c           | 11 ++++++++++-
 xen/include/asm-x86/hvm/emulate.h |  5 +++--
 xen/include/asm-x86/vm_event.h    |  5 ++++-
 xen/include/public/vm_event.h     | 17 ++++++++++++++++-
 6 files changed, 64 insertions(+), 14 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index cc25676..17f7f0d 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -76,9 +76,9 @@ static int set_context_data(void *buffer, unsigned int size)
     if ( curr->arch.vm_event )
     {
         unsigned int safe_size =
-            min(size, curr->arch.vm_event->emul_read_data.size);
+            min(size, curr->arch.vm_event->emul.read.size);
 
-        memcpy(buffer, curr->arch.vm_event->emul_read_data.data, safe_size);
+        memcpy(buffer, curr->arch.vm_event->emul.read.data, safe_size);
         memset(buffer + safe_size, 0, size - safe_size);
         return X86EMUL_OKAY;
     }
@@ -1931,7 +1931,7 @@ int hvm_emulate_one_mmio(unsigned long mfn, unsigned long gla)
     return rc;
 }
 
-void hvm_mem_access_emulate_one(enum emul_kind kind, unsigned int trapnr,
+void hvm_emulate_one_vm_event(enum emul_kind kind, unsigned int trapnr,
     unsigned int errcode)
 {
     struct hvm_emulate_ctxt ctx = {{ 0 }};
@@ -1944,10 +1944,25 @@ void hvm_mem_access_emulate_one(enum emul_kind kind, unsigned int trapnr,
     case EMUL_KIND_NOWRITE:
         rc = hvm_emulate_one_no_write(&ctx);
         break;
-    case EMUL_KIND_SET_CONTEXT:
-        ctx.set_context = 1;
-        /* Intentional fall-through. */
+    case EMUL_KIND_SET_CONTEXT_INSN: {
+        struct vcpu *curr = current;
+        struct hvm_vcpu_io *vio = &curr->arch.hvm_vcpu.hvm_io;
+
+        BUILD_BUG_ON(sizeof(vio->mmio_insn) !=
+                     sizeof(curr->arch.vm_event->emul.insn.data));
+        ASSERT(!vio->mmio_insn_bytes);
+
+        /*
+         * Stash insn buffer into mmio buffer here instead of ctx
+         * to avoid having to add more logic to hvm_emulate_one.
+         */
+        vio->mmio_insn_bytes = sizeof(vio->mmio_insn);
+        memcpy(vio->mmio_insn, curr->arch.vm_event->emul.insn.data,
+               vio->mmio_insn_bytes);
+    }
+    /* Fall-through */
     default:
+        ctx.set_context = (kind == EMUL_KIND_SET_CONTEXT_DATA);
         rc = hvm_emulate_one(&ctx);
     }
 
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 7bad845..b06e4d5 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -487,15 +487,22 @@ void hvm_do_resume(struct vcpu *v)
         {
             enum emul_kind kind = EMUL_KIND_NORMAL;
 
+            /*
+             * Please observe the order here to match the flag descriptions
+             * provided in public/vm_event.h
+             */
             if ( v->arch.vm_event->emulate_flags &
                  VM_EVENT_FLAG_SET_EMUL_READ_DATA )
-                kind = EMUL_KIND_SET_CONTEXT;
+                kind = EMUL_KIND_SET_CONTEXT_DATA;
             else if ( v->arch.vm_event->emulate_flags &
                       VM_EVENT_FLAG_EMULATE_NOWRITE )
                 kind = EMUL_KIND_NOWRITE;
+            else if ( v->arch.vm_event->emulate_flags &
+                      VM_EVENT_FLAG_SET_EMUL_INSN_DATA )
+                kind = EMUL_KIND_SET_CONTEXT_INSN;
 
-            hvm_mem_access_emulate_one(kind, TRAP_invalid_op,
-                                       HVM_DELIVER_NO_ERROR_CODE);
+            hvm_emulate_one_vm_event(kind, TRAP_invalid_op,
+                                     HVM_DELIVER_NO_ERROR_CODE);
 
             v->arch.vm_event->emulate_flags = 0;
         }
diff --git a/xen/arch/x86/vm_event.c b/xen/arch/x86/vm_event.c
index 343b9c8..1e88d67 100644
--- a/xen/arch/x86/vm_event.c
+++ b/xen/arch/x86/vm_event.c
@@ -209,11 +209,20 @@ void vm_event_emulate_check(struct vcpu *v, vm_event_response_t *rsp)
         if ( p2m_mem_access_emulate_check(v, rsp) )
         {
             if ( rsp->flags & VM_EVENT_FLAG_SET_EMUL_READ_DATA )
-                v->arch.vm_event->emul_read_data = rsp->data.emul_read_data;
+                v->arch.vm_event->emul.read = rsp->data.emul.read;
 
             v->arch.vm_event->emulate_flags = rsp->flags;
         }
         break;
+
+    case VM_EVENT_REASON_SOFTWARE_BREAKPOINT:
+        if ( rsp->flags & VM_EVENT_FLAG_SET_EMUL_INSN_DATA )
+        {
+            v->arch.vm_event->emul.insn = rsp->data.emul.insn;
+            v->arch.vm_event->emulate_flags = rsp->flags;
+        }
+        break;
+
     default:
         break;
     };
diff --git a/xen/include/asm-x86/hvm/emulate.h b/xen/include/asm-x86/hvm/emulate.h
index 3aabcbe..96d8f0b 100644
--- a/xen/include/asm-x86/hvm/emulate.h
+++ b/xen/include/asm-x86/hvm/emulate.h
@@ -40,14 +40,15 @@ struct hvm_emulate_ctxt {
 enum emul_kind {
     EMUL_KIND_NORMAL,
     EMUL_KIND_NOWRITE,
-    EMUL_KIND_SET_CONTEXT
+    EMUL_KIND_SET_CONTEXT_DATA,
+    EMUL_KIND_SET_CONTEXT_INSN
 };
 
 int hvm_emulate_one(
     struct hvm_emulate_ctxt *hvmemul_ctxt);
 int hvm_emulate_one_no_write(
     struct hvm_emulate_ctxt *hvmemul_ctxt);
-void hvm_mem_access_emulate_one(enum emul_kind kind,
+void hvm_emulate_one_vm_event(enum emul_kind kind,
     unsigned int trapnr,
     unsigned int errcode);
 void hvm_emulate_prepare(
diff --git a/xen/include/asm-x86/vm_event.h b/xen/include/asm-x86/vm_event.h
index ebb5d88..ca73f99 100644
--- a/xen/include/asm-x86/vm_event.h
+++ b/xen/include/asm-x86/vm_event.h
@@ -27,7 +27,10 @@
  */
 struct arch_vm_event {
     uint32_t emulate_flags;
-    struct vm_event_emul_read_data emul_read_data;
+    union {
+        struct vm_event_emul_read_data read;
+        struct vm_event_emul_insn_data insn;
+    } emul;
     struct monitor_write_data write_data;
 };
diff --git a/xen/include/public/vm_event.h b/xen/include/public/vm_event.h
index f756126..ba8e387 100644
--- a/xen/include/public/vm_event.h
+++ b/xen/include/public/vm_event.h
@@ -97,6 +97,14 @@
  * Requires the vCPU to be paused already (synchronous events only).
  */
 #define VM_EVENT_FLAG_SET_REGISTERS      (1 << 8)
+/*
+ * Instruction cache is being sent back to the hypervisor in the event response
+ * to be used by the emulator. This flag is only useful when combined with
+ * VM_EVENT_FLAG_EMULATE and does not take precedence if combined with
+ * VM_EVENT_FLAG_EMULATE_NOWRITE or VM_EVENT_FLAG_SET_EMUL_READ_DATA (i.e.
+ * if any of those flags are set, only those will be honored).
+ */
+#define VM_EVENT_FLAG_SET_EMUL_INSN_DATA (1 << 9)
 
 /*
  * Reasons for the vm event request
@@ -265,6 +273,10 @@ struct vm_event_emul_read_data {
     uint8_t  data[sizeof(struct vm_event_regs_x86) - sizeof(uint32_t)];
 };
 
+struct vm_event_emul_insn_data {
+    uint8_t data[16]; /* Has to be completely filled */
+};
+
 typedef struct vm_event_st {
     uint32_t version;   /* VM_EVENT_INTERFACE_VERSION */
     uint32_t flags;     /* VM_EVENT_FLAG_* */
@@ -291,7 +303,10 @@ typedef struct vm_event_st {
             struct vm_event_regs_arm arm;
         } regs;
 
-        struct vm_event_emul_read_data emul_read_data;
+        union {
+            struct vm_event_emul_read_data read;
+            struct vm_event_emul_insn_data insn;
+        } emul;
     } data;
 } vm_event_request_t, vm_event_response_t;
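
Editor's note (illustration, not part of the patch): the precedence documented
in the public/vm_event.h comment above mirrors the order of checks this patch
adds to hvm_do_resume(), so a response that sets several emulation flags
resolves as follows; all flag and enum names are the ones used by the patch,
and resolve_emul_kind() is purely a hypothetical helper for exposition:

/*
 * How conflicting emulation flags resolve, following the if/else chain in
 * hvm_do_resume(): READ_DATA wins over NOWRITE, which wins over INSN_DATA.
 */
static enum emul_kind resolve_emul_kind(uint32_t emulate_flags)
{
    if ( emulate_flags & VM_EVENT_FLAG_SET_EMUL_READ_DATA )
        return EMUL_KIND_SET_CONTEXT_DATA;
    else if ( emulate_flags & VM_EVENT_FLAG_EMULATE_NOWRITE )
        return EMUL_KIND_NOWRITE;
    else if ( emulate_flags & VM_EVENT_FLAG_SET_EMUL_INSN_DATA )
        return EMUL_KIND_SET_CONTEXT_INSN;
    return EMUL_KIND_NORMAL;
}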