From patchwork Mon Aug 1 16:52:51 2016
X-Patchwork-Submitter: Tamas Lengyel
X-Patchwork-Id: 9254555
From: Tamas K Lengyel <tamas.lengyel@zentific.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini, Razvan Cojocaru, George Dunlap, Tamas K Lengyel,
    Julien Grall, Jan Beulich
Date: Mon, 1 Aug 2016 10:52:51 -0600
Message-Id: <1470070371-10833-1-git-send-email-tamas.lengyel@zentific.com>
Subject: [Xen-devel] [PATCH v2] mem_access: sanitize code around sending vm_event request

The two functions monitor_traps() and mem_access_send_req() duplicate some of
the same functionality. mem_access_send_req(), however, leaves many of the
standard vm_event fields to be filled in by other functions. Remove
mem_access_send_req() completely, making use of monitor_traps() to put
requests into the monitor ring.
This in turn causes some cleanup around the old call sites of
mem_access_send_req() and, on ARM, the introduction of the
__p2m_mem_access_send_req() helper to fill in common mem_access information.
We also update monitor_traps() to set the common vcpu_id field so that all
other call sites can omit this step.

Finally, this change identifies that errors from mem_access_send_req() were
never checked. As errors constitute a problem with the monitor ring, crashing
the domain is the most appropriate action to take.

Signed-off-by: Tamas K Lengyel
Reviewed-by: Andrew Cooper
Acked-by: Razvan Cojocaru
Acked-by: George Dunlap
---
Cc: Stefano Stabellini
Cc: Julien Grall
Cc: Jan Beulich
Cc: Razvan Cojocaru
Cc: George Dunlap
---
 xen/arch/arm/p2m.c                | 71 +++++++++++++++++++--------------------
 xen/arch/x86/hvm/hvm.c            | 16 ++++++---
 xen/arch/x86/hvm/monitor.c        | 11 +++---
 xen/arch/x86/mm/p2m.c             | 24 ++-----------
 xen/common/mem_access.c           | 11 ------
 xen/common/monitor.c              |  2 ++
 xen/include/asm-x86/hvm/monitor.h |  2 ++
 xen/include/asm-x86/p2m.h         | 13 ++++---
 xen/include/xen/mem_access.h      |  7 ----
 9 files changed, 66 insertions(+), 91 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index d82349c..a378371 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -5,7 +5,7 @@
 #include
 #include
 #include
-#include
+#include
 #include
 #include
 #include
@@ -1642,12 +1642,40 @@ void __init setup_virt_paging(void)
     smp_call_function(setup_virt_paging_one, (void *)val, 1);
 }
 
+static int
+__p2m_mem_access_send_req(paddr_t gpa, vaddr_t gla, const struct npfec npfec,
+                          bool_t sync)
+{
+    struct vcpu *v = current;
+    vm_event_request_t req = {};
+
+    req.reason = VM_EVENT_REASON_MEM_ACCESS;
+
+    /* Send request to mem access subscriber */
+    req.u.mem_access.gfn = gpa >> PAGE_SHIFT;
+    req.u.mem_access.offset = gpa & ~PAGE_MASK;
+    if ( npfec.gla_valid )
+    {
+        req.u.mem_access.flags |= MEM_ACCESS_GLA_VALID;
+        req.u.mem_access.gla = gla;
+
+        if ( npfec.kind == npfec_kind_with_gla )
+            req.u.mem_access.flags |= MEM_ACCESS_FAULT_WITH_GLA;
+        else if ( npfec.kind == npfec_kind_in_gpt )
+            req.u.mem_access.flags |= MEM_ACCESS_FAULT_IN_GPT;
+    }
+    req.u.mem_access.flags |= npfec.read_access ? MEM_ACCESS_R : 0;
+    req.u.mem_access.flags |= npfec.write_access ? MEM_ACCESS_W : 0;
+    req.u.mem_access.flags |= npfec.insn_fetch ? MEM_ACCESS_X : 0;
+
+    return monitor_traps(v, sync, &req);
+}
+
 bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec)
 {
     int rc;
-    bool_t violation;
+    bool_t violation, sync = true;
     xenmem_access_t xma;
-    vm_event_request_t *req;
     struct vcpu *v = current;
     struct p2m_domain *p2m = p2m_get_hostp2m(v->domain);
@@ -1688,6 +1716,7 @@ bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec)
     case XENMEM_access_n:
     case XENMEM_access_n2rwx:
         violation = true;
+        sync = false;
         break;
     }
@@ -1734,40 +1763,8 @@ bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec)
         return false;
     }
 
-    req = xzalloc(vm_event_request_t);
-    if ( req )
-    {
-        req->reason = VM_EVENT_REASON_MEM_ACCESS;
-
-        /* Pause the current VCPU */
-        if ( xma != XENMEM_access_n2rwx )
-            req->flags |= VM_EVENT_FLAG_VCPU_PAUSED;
-
-        /* Send request to mem access subscriber */
-        req->u.mem_access.gfn = gpa >> PAGE_SHIFT;
-        req->u.mem_access.offset = gpa & ((1 << PAGE_SHIFT) - 1);
-        if ( npfec.gla_valid )
-        {
-            req->u.mem_access.flags |= MEM_ACCESS_GLA_VALID;
-            req->u.mem_access.gla = gla;
-
-            if ( npfec.kind == npfec_kind_with_gla )
-                req->u.mem_access.flags |= MEM_ACCESS_FAULT_WITH_GLA;
-            else if ( npfec.kind == npfec_kind_in_gpt )
-                req->u.mem_access.flags |= MEM_ACCESS_FAULT_IN_GPT;
-        }
-        req->u.mem_access.flags |= npfec.read_access ? MEM_ACCESS_R : 0;
-        req->u.mem_access.flags |= npfec.write_access ? MEM_ACCESS_W : 0;
-        req->u.mem_access.flags |= npfec.insn_fetch ? MEM_ACCESS_X : 0;
-        req->vcpu_id = v->vcpu_id;
-
-        mem_access_send_req(v->domain, req);
-        xfree(req);
-    }
-
-    /* Pause the current VCPU */
-    if ( xma != XENMEM_access_n2rwx )
-        vm_event_vcpu_pause(v);
+    if ( __p2m_mem_access_send_req(gpa, gla, npfec, sync) < 0 )
+        domain_crash(v->domain);
 
     return false;
 }
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index daaee1d..688370d 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1707,7 +1707,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
     int rc, fall_through = 0, paged = 0;
     int sharing_enomem = 0;
     vm_event_request_t *req_ptr = NULL;
-    bool_t ap2m_active;
+    bool_t ap2m_active, sync = 0;
 
     /* On Nested Virtualization, walk the guest page table.
      * If this succeeds, all is fine.
@@ -1846,11 +1846,12 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
         }
     }
 
-    if ( p2m_mem_access_check(gpa, gla, npfec, &req_ptr) )
-    {
+    sync = p2m_mem_access_check(gpa, gla, npfec, &req_ptr);
+
+    if ( !sync ) {
         fall_through = 1;
     } else {
-        /* Rights not promoted, vcpu paused, work here is done */
+        /* Rights not promoted (aka. sync event), work here is done */
         rc = 1;
         goto out_put_gfn;
     }
@@ -1956,7 +1957,12 @@ out:
     }
     if ( req_ptr )
     {
-        mem_access_send_req(currd, req_ptr);
+        if ( hvm_monitor_mem_access(curr, sync, req_ptr) < 0 )
+        {
+            /* Crash the domain */
+            rc = 0;
+        }
+
         xfree(req_ptr);
     }
     return rc;
diff --git a/xen/arch/x86/hvm/monitor.c b/xen/arch/x86/hvm/monitor.c
index 7277c12..12103f2 100644
--- a/xen/arch/x86/hvm/monitor.c
+++ b/xen/arch/x86/hvm/monitor.c
@@ -44,7 +44,6 @@ bool_t hvm_monitor_cr(unsigned int index, unsigned long value, unsigned long old
     vm_event_request_t req = {
         .reason = VM_EVENT_REASON_WRITE_CTRLREG,
-        .vcpu_id = curr->vcpu_id,
         .u.write_ctrlreg.index = index,
         .u.write_ctrlreg.new_value = value,
         .u.write_ctrlreg.old_value = old
@@ -65,7 +64,6 @@ void hvm_monitor_msr(unsigned int msr, uint64_t value)
     vm_event_request_t req = {
         .reason = VM_EVENT_REASON_MOV_TO_MSR,
-        .vcpu_id = curr->vcpu_id,
         .u.mov_to_msr.msr = msr,
         .u.mov_to_msr.value = value,
     };
@@ -131,8 +129,6 @@ int hvm_monitor_debug(unsigned long rip, enum hvm_monitor_debug_type type,
         return -EOPNOTSUPP;
     }
 
-    req.vcpu_id = curr->vcpu_id;
-
     return monitor_traps(curr, sync, &req);
 }
 
@@ -146,12 +142,17 @@ int hvm_monitor_cpuid(unsigned long insn_length)
         return 0;
 
     req.reason = VM_EVENT_REASON_CPUID;
-    req.vcpu_id = curr->vcpu_id;
     req.u.cpuid.insn_length = insn_length;
 
     return monitor_traps(curr, 1, &req);
 }
 
+int hvm_monitor_mem_access(struct vcpu* v, bool_t sync,
+                           vm_event_request_t *req)
+{
+    return monitor_traps(v, sync, req);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 812dbf6..27f9d26 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1728,13 +1728,8 @@ bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
     if ( req )
     {
         *req_ptr = req;
-        req->reason = VM_EVENT_REASON_MEM_ACCESS;
-
-        /* Pause the current VCPU */
-        if ( p2ma != p2m_access_n2rwx )
-            req->flags |= VM_EVENT_FLAG_VCPU_PAUSED;
 
-        /* Send request to mem event */
+        req->reason = VM_EVENT_REASON_MEM_ACCESS;
         req->u.mem_access.gfn = gfn;
         req->u.mem_access.offset = gpa & ((1 << PAGE_SHIFT) - 1);
         if ( npfec.gla_valid )
@@ -1750,23 +1745,10 @@ bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
         req->u.mem_access.flags |= npfec.read_access ? MEM_ACCESS_R : 0;
         req->u.mem_access.flags |= npfec.write_access ? MEM_ACCESS_W : 0;
         req->u.mem_access.flags |= npfec.insn_fetch ? MEM_ACCESS_X : 0;
-        req->vcpu_id = v->vcpu_id;
-
-        vm_event_fill_regs(req);
-
-        if ( altp2m_active(v->domain) )
-        {
-            req->flags |= VM_EVENT_FLAG_ALTERNATE_P2M;
-            req->altp2m_idx = vcpu_altp2m(v).p2midx;
-        }
     }
 
-    /* Pause the current VCPU */
-    if ( p2ma != p2m_access_n2rwx )
-        vm_event_vcpu_pause(v);
-
-    /* VCPU may be paused, return whether we promoted automatically */
-    return (p2ma == p2m_access_n2rwx);
+    /* Return whether vCPU pause is required (aka. sync event) */
+    return (p2ma != p2m_access_n2rwx);
 }
 
 static inline
diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c
index b4033f0..82f4bad 100644
--- a/xen/common/mem_access.c
+++ b/xen/common/mem_access.c
@@ -108,17 +108,6 @@ int mem_access_memop(unsigned long cmd,
     return rc;
 }
 
-int mem_access_send_req(struct domain *d, vm_event_request_t *req)
-{
-    int rc = vm_event_claim_slot(d, &d->vm_event->monitor);
-    if ( rc < 0 )
-        return rc;
-
-    vm_event_put_request(d, &d->vm_event->monitor, req);
-
-    return 0;
-}
-
 /*
  * Local variables:
  * mode: C
diff --git a/xen/common/monitor.c b/xen/common/monitor.c
index c73d1d5..451f42f 100644
--- a/xen/common/monitor.c
+++ b/xen/common/monitor.c
@@ -107,6 +107,8 @@ int monitor_traps(struct vcpu *v, bool_t sync, vm_event_request_t *req)
         return rc;
     };
 
+    req->vcpu_id = v->vcpu_id;
+
     if ( sync )
     {
         req->flags |= VM_EVENT_FLAG_VCPU_PAUSED;
diff --git a/xen/include/asm-x86/hvm/monitor.h b/xen/include/asm-x86/hvm/monitor.h
index a92f3fc..52c1f47 100644
--- a/xen/include/asm-x86/hvm/monitor.h
+++ b/xen/include/asm-x86/hvm/monitor.h
@@ -41,6 +41,8 @@ void hvm_monitor_msr(unsigned int msr, uint64_t value);
 int hvm_monitor_debug(unsigned long rip, enum hvm_monitor_debug_type type,
                       unsigned long trap_type, unsigned long insn_length);
 int hvm_monitor_cpuid(unsigned long insn_length);
+int hvm_monitor_mem_access(struct vcpu* v, bool_t sync,
+                           vm_event_request_t *req);
 
 #endif /* __ASM_X86_HVM_MONITOR_H__ */
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 194020e..f4a746f 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -660,11 +660,14 @@ int p2m_mem_paging_prep(struct domain *d, unsigned long gfn, uint64_t buffer);
 /* Resume normal operation (in case a domain was paused) */
 void p2m_mem_paging_resume(struct domain *d, vm_event_response_t *rsp);
 
-/* Send mem event based on the access (gla is -1ull if not available). Handles
- * the rw2rx conversion. Boolean return value indicates if access rights have
- * been promoted with no underlying vcpu pause. If the req_ptr has been populated,
- * then the caller must put the event in the ring (once having released get_gfn*
- * locks -- caller must also xfree the request. */
+/*
+ * Setup vm_event request based on the access (gla is -1ull if not available).
+ * Handles the rw2rx conversion. Boolean return value indicates if event type
+ * is synchronous (aka. requires vCPU pause). If the req_ptr has been populated,
+ * then the caller should use monitor_traps to send the event on the MONITOR
+ * ring. Once having released get_gfn* locks caller must also xfree the
+ * request.
+ */
 bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
                             struct npfec npfec,
                             vm_event_request_t **req_ptr);
diff --git a/xen/include/xen/mem_access.h b/xen/include/xen/mem_access.h
index 272f1e4..3d054e0 100644
--- a/xen/include/xen/mem_access.h
+++ b/xen/include/xen/mem_access.h
@@ -29,7 +29,6 @@ int mem_access_memop(unsigned long cmd,
                      XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg);
 
-int mem_access_send_req(struct domain *d, vm_event_request_t *req);
 
 static inline
 void mem_access_resume(struct vcpu *v, vm_event_response_t *rsp)
@@ -47,12 +46,6 @@ int mem_access_memop(unsigned long cmd,
 }
 
 static inline
-int mem_access_send_req(struct domain *d, vm_event_request_t *req)
-{
-    return -ENOSYS;
-}
-
-static inline
 void mem_access_resume(struct vcpu *vcpu, vm_event_response_t *rsp)
 {
     /* Nothing to do. */