From patchwork Thu Sep 15 16:51:19 2016
X-Patchwork-Submitter: Tamas Lengyel
X-Patchwork-Id: 9334325
From: Tamas K Lengyel
To: xen-devel@lists.xenproject.org
Date: Thu, 15 Sep 2016 10:51:19 -0600
Message-Id: <20160915165120.29753-1-tamas.lengyel@zentific.com>
X-Mailer: git-send-email 2.9.3
Cc: Stefano Stabellini, Razvan Cojocaru, Tamas K Lengyel, Julien Grall, Jan Beulich, Andrew Cooper
Subject: [Xen-devel] [PATCH v2 1/2] vm_event: Sanitize vm_event response handling

Setting response flags in vm_event is only ever safe if the vCPU is paused.
To reflect this we move all such checks inside the if block that already
checks whether that is the case. For checks that are only supported on one
architecture, we relocate the bitmask operations to the arch-specific
handlers to avoid the overhead on architectures that don't support them.
Furthermore, we clean up the emulation checks so they more clearly represent
the decision logic for when emulation should take place. As part of this we
also set the stage to allow emulation in response to other types of events,
not just mem_access violations.
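For context (not part of this patch): the flags being sanitized here are the
ones a monitoring application sets on the response it places back on the
vm_event ring. A rough consumer-side sketch follows; it assumes only the
public interface in xen/include/public/vm_event.h, the helper name
respond_emulate() is made up for illustration, and the ring/event-channel
plumbing is omitted.

    #include <string.h>
    #include <xen/vm_event.h>

    /*
     * Illustrative only: build a response asking Xen to emulate the
     * instruction that triggered a mem_access event.  The emulation flags
     * are only acted upon while the vCPU is still paused, so the
     * VCPU_PAUSED flag from the request is echoed back to unpause it
     * once the response has been processed.
     */
    static void respond_emulate(const vm_event_request_t *req,
                                vm_event_response_t *rsp)
    {
        memset(rsp, 0, sizeof(*rsp));
        rsp->version = VM_EVENT_INTERFACE_VERSION;
        rsp->vcpu_id = req->vcpu_id;
        rsp->reason  = req->reason;            /* VM_EVENT_REASON_MEM_ACCESS */
        rsp->u.mem_access = req->u.mem_access; /* emulate-check reads gfn/flags */

        rsp->flags = VM_EVENT_FLAG_EMULATE;
        if ( req->flags & VM_EVENT_FLAG_VCPU_PAUSED )
            rsp->flags |= VM_EVENT_FLAG_VCPU_PAUSED;
    }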
Signed-off-by: Tamas K Lengyel
Acked-by: George Dunlap
Acked-by: Razvan Cojocaru
---
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: Razvan Cojocaru
Cc: Stefano Stabellini
Cc: Julien Grall

v2: use bool instead of bool_t
---
 xen/arch/x86/mm/p2m.c          | 79 +++++++++++++++++++-----------------
 xen/arch/x86/vm_event.c        | 35 ++++++++++++++++++-
 xen/common/vm_event.c          | 53 ++++++++++++++--------------
 xen/include/asm-arm/p2m.h      |  3 +-
 xen/include/asm-arm/vm_event.h |  9 ++++-
 xen/include/asm-x86/p2m.h      |  2 +-
 xen/include/asm-x86/vm_event.h |  5 ++-
 xen/include/xen/mem_access.h   | 12 -------
 8 files changed, 111 insertions(+), 87 deletions(-)

diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 7d14c3b..faffc2a 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1588,62 +1588,55 @@ void p2m_mem_paging_resume(struct domain *d, vm_event_response_t *rsp)
     }
 }
 
-void p2m_mem_access_emulate_check(struct vcpu *v,
+bool p2m_mem_access_emulate_check(struct vcpu *v,
                                   const vm_event_response_t *rsp)
 {
-    /* Mark vcpu for skipping one instruction upon rescheduling. */
-    if ( rsp->flags & VM_EVENT_FLAG_EMULATE )
-    {
-        xenmem_access_t access;
-        bool_t violation = 1;
-        const struct vm_event_mem_access *data = &rsp->u.mem_access;
+    xenmem_access_t access;
+    bool violation = 1;
+    const struct vm_event_mem_access *data = &rsp->u.mem_access;
 
-        if ( p2m_get_mem_access(v->domain, _gfn(data->gfn), &access) == 0 )
+    if ( p2m_get_mem_access(v->domain, _gfn(data->gfn), &access) == 0 )
+    {
+        switch ( access )
         {
-            switch ( access )
-            {
-            case XENMEM_access_n:
-            case XENMEM_access_n2rwx:
-            default:
-                violation = data->flags & MEM_ACCESS_RWX;
-                break;
+        case XENMEM_access_n:
+        case XENMEM_access_n2rwx:
+        default:
+            violation = data->flags & MEM_ACCESS_RWX;
+            break;
 
-            case XENMEM_access_r:
-                violation = data->flags & MEM_ACCESS_WX;
-                break;
+        case XENMEM_access_r:
+            violation = data->flags & MEM_ACCESS_WX;
+            break;
 
-            case XENMEM_access_w:
-                violation = data->flags & MEM_ACCESS_RX;
-                break;
+        case XENMEM_access_w:
+            violation = data->flags & MEM_ACCESS_RX;
+            break;
 
-            case XENMEM_access_x:
-                violation = data->flags & MEM_ACCESS_RW;
-                break;
+        case XENMEM_access_x:
+            violation = data->flags & MEM_ACCESS_RW;
+            break;
 
-            case XENMEM_access_rx:
-            case XENMEM_access_rx2rw:
-                violation = data->flags & MEM_ACCESS_W;
-                break;
+        case XENMEM_access_rx:
+        case XENMEM_access_rx2rw:
+            violation = data->flags & MEM_ACCESS_W;
+            break;
 
-            case XENMEM_access_wx:
-                violation = data->flags & MEM_ACCESS_R;
-                break;
+        case XENMEM_access_wx:
+            violation = data->flags & MEM_ACCESS_R;
+            break;
 
-            case XENMEM_access_rw:
-                violation = data->flags & MEM_ACCESS_X;
-                break;
+        case XENMEM_access_rw:
+            violation = data->flags & MEM_ACCESS_X;
+            break;
 
-            case XENMEM_access_rwx:
-                violation = 0;
-                break;
-            }
+        case XENMEM_access_rwx:
+            violation = 0;
+            break;
         }
-
-        v->arch.vm_event->emulate_flags = violation ? rsp->flags : 0;
-
-        if ( (rsp->flags & VM_EVENT_FLAG_SET_EMUL_READ_DATA) )
-            v->arch.vm_event->emul_read_data = rsp->data.emul_read_data;
     }
+
+    return violation;
 }
 
 void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
diff --git a/xen/arch/x86/vm_event.c b/xen/arch/x86/vm_event.c
index e938ca3..343b9c8 100644
--- a/xen/arch/x86/vm_event.c
+++ b/xen/arch/x86/vm_event.c
@@ -18,6 +18,7 @@
  * License along with this program; If not, see .
  */
 
+#include
 #include
 
 /* Implicitly serialized by the domctl lock. */
@@ -56,8 +57,12 @@ void vm_event_cleanup_domain(struct domain *d)
     d->arch.mem_access_emulate_each_rep = 0;
 }
 
-void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v)
+void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v,
+                                vm_event_response_t *rsp)
 {
+    if ( !(rsp->flags & VM_EVENT_FLAG_TOGGLE_SINGLESTEP) )
+        return;
+
     if ( !is_hvm_domain(d) )
         return;
 
@@ -186,6 +191,34 @@ void vm_event_fill_regs(vm_event_request_t *req)
     req->data.regs.x86.cs_arbytes = seg.attr.bytes;
 }
 
+void vm_event_emulate_check(struct vcpu *v, vm_event_response_t *rsp)
+{
+    if ( !(rsp->flags & VM_EVENT_FLAG_EMULATE) )
+    {
+        v->arch.vm_event->emulate_flags = 0;
+        return;
+    }
+
+    switch ( rsp->reason )
+    {
+    case VM_EVENT_REASON_MEM_ACCESS:
+        /*
+         * Emulate iff this is a response to a mem_access violation and there
+         * are still conflicting mem_access permissions in-place.
+         */
+        if ( p2m_mem_access_emulate_check(v, rsp) )
+        {
+            if ( rsp->flags & VM_EVENT_FLAG_SET_EMUL_READ_DATA )
+                v->arch.vm_event->emul_read_data = rsp->data.emul_read_data;
+
+            v->arch.vm_event->emulate_flags = rsp->flags;
+        }
+        break;
+    default:
+        break;
+    };
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index 8398af7..907ab40 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -398,42 +398,41 @@ void vm_event_resume(struct domain *d, struct vm_event_domain *ved)
          * In some cases the response type needs extra handling, so here
          * we call the appropriate handlers.
          */
-        switch ( rsp.reason )
-        {
-#ifdef CONFIG_X86
-        case VM_EVENT_REASON_MOV_TO_MSR:
-#endif
-        case VM_EVENT_REASON_WRITE_CTRLREG:
-            vm_event_register_write_resume(v, &rsp);
-            break;
-
-#ifdef CONFIG_HAS_MEM_ACCESS
-        case VM_EVENT_REASON_MEM_ACCESS:
-            mem_access_resume(v, &rsp);
-            break;
-#endif
+        /* Check flags which apply only when the vCPU is paused */
+        if ( atomic_read(&v->vm_event_pause_count) )
+        {
 #ifdef CONFIG_HAS_MEM_PAGING
-        case VM_EVENT_REASON_MEM_PAGING:
-            p2m_mem_paging_resume(d, &rsp);
-            break;
+            if ( rsp.reason == VM_EVENT_REASON_MEM_PAGING )
+                p2m_mem_paging_resume(d, &rsp);
 #endif
-        };
 
+            /*
+             * Check emulation flags in the arch-specific handler only, as it
+             * has to set arch-specific flags when supported, and to avoid
+             * bitmask overhead when it isn't supported.
+             */
+            vm_event_emulate_check(v, &rsp);
+
+            /*
+             * Check in arch-specific handler to avoid bitmask overhead when
+             * not supported.
+             */
+            vm_event_register_write_resume(v, &rsp);
 
-        /* Check for altp2m switch */
-        if ( rsp.flags & VM_EVENT_FLAG_ALTERNATE_P2M )
-            p2m_altp2m_check(v, rsp.altp2m_idx);
+            /*
+             * Check in arch-specific handler to avoid bitmask overhead when
+             * not supported.
+             */
+            vm_event_toggle_singlestep(d, v, &rsp);
+
+            /* Check for altp2m switch */
+            if ( rsp.flags & VM_EVENT_FLAG_ALTERNATE_P2M )
+                p2m_altp2m_check(v, rsp.altp2m_idx);
 
-        /* Check flags which apply only when the vCPU is paused */
-        if ( atomic_read(&v->vm_event_pause_count) )
-        {
             if ( rsp.flags & VM_EVENT_FLAG_SET_REGISTERS )
                 vm_event_set_registers(v, &rsp);
 
-            if ( rsp.flags & VM_EVENT_FLAG_TOGGLE_SINGLESTEP )
-                vm_event_toggle_singlestep(d, v);
-
             if ( rsp.flags & VM_EVENT_FLAG_VCPU_PAUSED )
                 vm_event_vcpu_unpause(v);
         }
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 53c4d78..6251b37 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -121,10 +121,11 @@ typedef enum {
                              p2m_to_mask(p2m_map_foreign)))
 
 static inline
-void p2m_mem_access_emulate_check(struct vcpu *v,
+bool p2m_mem_access_emulate_check(struct vcpu *v,
                                   const vm_event_response_t *rsp)
 {
     /* Not supported on ARM. */
+    return 0;
 }
 
 static inline
diff --git a/xen/include/asm-arm/vm_event.h b/xen/include/asm-arm/vm_event.h
index 9482636..66f2474 100644
--- a/xen/include/asm-arm/vm_event.h
+++ b/xen/include/asm-arm/vm_event.h
@@ -34,7 +34,8 @@ static inline void vm_event_cleanup_domain(struct domain *d)
     memset(&d->monitor, 0, sizeof(d->monitor));
 }
 
-static inline void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v)
+static inline void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v,
+                                              vm_event_response_t *rsp)
 {
     /* Not supported on ARM. */
 }
@@ -45,4 +46,10 @@ void vm_event_register_write_resume(struct vcpu *v, vm_event_response_t *rsp)
     /* Not supported on ARM. */
 }
 
+static inline
+void vm_event_emulate_check(struct vcpu *v, vm_event_response_t *rsp)
+{
+    /* Not supported on ARM. */
+}
+
 #endif /* __ASM_ARM_VM_EVENT_H__ */
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 9fc9ead..7035860 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -677,7 +677,7 @@ bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
 
 /* Check for emulation and mark vcpu for skipping one instruction
  * upon rescheduling if required. */
-void p2m_mem_access_emulate_check(struct vcpu *v,
+bool p2m_mem_access_emulate_check(struct vcpu *v,
                                   const vm_event_response_t *rsp);
 
 /* Sanity check for mem_access hardware support */
diff --git a/xen/include/asm-x86/vm_event.h b/xen/include/asm-x86/vm_event.h
index 294def6..ebb5d88 100644
--- a/xen/include/asm-x86/vm_event.h
+++ b/xen/include/asm-x86/vm_event.h
@@ -35,8 +35,11 @@ int vm_event_init_domain(struct domain *d);
 
 void vm_event_cleanup_domain(struct domain *d);
 
-void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v);
+void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v,
+                                vm_event_response_t *rsp);
 
 void vm_event_register_write_resume(struct vcpu *v, vm_event_response_t *rsp);
 
+void vm_event_emulate_check(struct vcpu *v, vm_event_response_t *rsp);
+
 #endif /* __ASM_X86_VM_EVENT_H__ */
diff --git a/xen/include/xen/mem_access.h b/xen/include/xen/mem_access.h
index 3d054e0..da36e07 100644
--- a/xen/include/xen/mem_access.h
+++ b/xen/include/xen/mem_access.h
@@ -30,12 +30,6 @@ int mem_access_memop(unsigned long cmd,
                      XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg);
 
-static inline
-void mem_access_resume(struct vcpu *v, vm_event_response_t *rsp)
-{
-    p2m_mem_access_emulate_check(v, rsp);
-}
-
 #else
 
 static inline
@@ -45,12 +39,6 @@ int mem_access_memop(unsigned long cmd,
     return -ENOSYS;
 }
 
-static inline
-void mem_access_resume(struct vcpu *vcpu, vm_event_response_t *rsp)
-{
-    /* Nothing to do. */
-}
-
 #endif /* HAS_MEM_ACCESS */
 
 #endif /* _XEN_ASM_MEM_ACCESS_H */
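For completeness, and again purely as an illustration outside the patch
itself: VM_EVENT_FLAG_TOGGLE_SINGLESTEP is now likewise evaluated only while
the vCPU is paused, inside the arch-specific vm_event_toggle_singlestep()
handler. A monitor that wants to single-step past the faulting instruction
rather than emulate it could respond along these lines (same assumptions as
the earlier sketch; respond_singlestep() is a hypothetical helper name):

    #include <string.h>
    #include <xen/vm_event.h>

    /* Illustrative only: request a single-step toggle and release the vCPU. */
    static void respond_singlestep(const vm_event_request_t *req,
                                   vm_event_response_t *rsp)
    {
        memset(rsp, 0, sizeof(*rsp));
        rsp->version = VM_EVENT_INTERFACE_VERSION;
        rsp->vcpu_id = req->vcpu_id;
        rsp->reason  = req->reason;

        /* Only honored while the vCPU is paused, and only for HVM guests. */
        rsp->flags = VM_EVENT_FLAG_TOGGLE_SINGLESTEP;
        if ( req->flags & VM_EVENT_FLAG_VCPU_PAUSED )
            rsp->flags |= VM_EVENT_FLAG_VCPU_PAUSED;
    }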