From patchwork Tue Jul 12 09:02:07 2016
X-Patchwork-Submitter: Yu Zhang <yu.c.zhang@linux.intel.com>
X-Patchwork-Id: 9224841
From: Yu Zhang <yu.c.zhang@linux.intel.com>
To: xen-devel@lists.xen.org
Date: Tue, 12 Jul 2016 17:02:07 +0800
Message-Id: <1468314129-28465-3-git-send-email-yu.c.zhang@linux.intel.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1468314129-28465-1-git-send-email-yu.c.zhang@linux.intel.com>
References: <1468314129-28465-1-git-send-email-yu.c.zhang@linux.intel.com>
Cc: Andrew Cooper, Paul Durrant, zhiyuan.lv@intel.com, Jan Beulich
Subject: [Xen-devel] [PATCH v5 2/4] x86/ioreq server: Add new functions to get/set memory types.

For clarity this patch breaks the code to set/get memory types out
of do_hvm_op() into dedicated functions: hvmop_set/get_mem_type().

Also, for clarity, checks for whether a memory type change is allowed
are broken out into a separate function called by hvmop_set_mem_type().

There is no intentional functional change in this patch.

Signed-off-by: Paul Durrant
Signed-off-by: Yu Zhang
Reviewed-by: George Dunlap
Acked-by: Andrew Cooper
---
Cc: Jan Beulich
Cc: Andrew Cooper

changes in v4:
  - According to Wei Liu's comments, change the format of the commit message.

changes in v3:
  - Add Andrew's Acked-by and George's Reviewed-by.

changes in v2:
  - According to George Dunlap's comments, follow the "set rc / do
    something / goto out" pattern in hvmop_get_mem_type().
---
 xen/arch/x86/hvm/hvm.c | 288 +++++++++++++++++++++++++++----------------------
 1 file changed, 161 insertions(+), 127 deletions(-)
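Background for reviewers, not part of the patch: from the toolstack or device
model side these operations are normally reached through libxenctrl. A minimal
sketch follows; it assumes the existing xc_hvm_set_mem_type() wrapper around
HVMOP_set_mem_type, plus the HVMMEM_ioreq_server type introduced earlier in
this series, and claim_page() itself is a hypothetical helper.

  #include <stdio.h>
  #include <xenctrl.h>

  /* Hand writes to one guest page over to an ioreq server (illustrative). */
  static int claim_page(domid_t domid, uint64_t gfn)
  {
      xc_interface *xch = xc_interface_open(NULL, NULL, 0);
      int rc;

      if ( !xch )
          return -1;

      /*
       * p2m_ram_rw -> p2m_ioreq_server is one of the transitions that
       * hvm_allow_p2m_type_change() permits; setting HVMMEM_ram_rw again
       * later gives the page back.
       */
      rc = xc_hvm_set_mem_type(xch, domid, HVMMEM_ioreq_server, gfn, 1);
      if ( rc )
          fprintf(stderr, "HVMOP_set_mem_type failed: %d\n", rc);

      xc_interface_close(xch);
      return rc;
  }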
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 4b51c57..4453ec0 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -5345,6 +5345,61 @@ static int do_altp2m_op(
     return rc;
 }
 
+static int hvmop_get_mem_type(
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_get_mem_type_t) arg)
+{
+    struct xen_hvm_get_mem_type a;
+    struct domain *d;
+    p2m_type_t t;
+    int rc;
+
+    if ( copy_from_guest(&a, arg, 1) )
+        return -EFAULT;
+
+    d = rcu_lock_domain_by_any_id(a.domid);
+    if ( d == NULL )
+        return -ESRCH;
+
+    rc = xsm_hvm_param(XSM_TARGET, d, HVMOP_get_mem_type);
+    if ( rc )
+        goto out;
+
+    rc = -EINVAL;
+    if ( !is_hvm_domain(d) )
+        goto out;
+
+    /*
+     * Use get_gfn query as we are interested in the current
+     * type, not in allocating or unsharing. That'll happen
+     * on access.
+     */
+    get_gfn_query_unlocked(d, a.pfn, &t);
+    if ( p2m_is_mmio(t) )
+        a.mem_type = HVMMEM_mmio_dm;
+    else if ( t == p2m_ioreq_server )
+        a.mem_type = HVMMEM_ioreq_server;
+    else if ( p2m_is_readonly(t) )
+        a.mem_type = HVMMEM_ram_ro;
+    else if ( p2m_is_ram(t) )
+        a.mem_type = HVMMEM_ram_rw;
+    else if ( p2m_is_pod(t) )
+        a.mem_type = HVMMEM_ram_rw;
+    else if ( p2m_is_grant(t) )
+        a.mem_type = HVMMEM_ram_rw;
+    else
+        a.mem_type = HVMMEM_mmio_dm;
+
+    rc = -EFAULT;
+    if ( __copy_to_guest(arg, &a, 1) )
+        goto out;
+    rc = 0;
+
+ out:
+    rcu_unlock_domain(d);
+
+    return rc;
+}
+
 /*
  * Note that this value is effectively part of the ABI, even if we don't need
  * to make it a formal part of it: A guest suspended for migration in the
@@ -5353,6 +5408,107 @@ static int do_altp2m_op(
  */
 #define HVMOP_op_mask 0xff
 
+static bool_t hvm_allow_p2m_type_change(p2m_type_t old, p2m_type_t new)
+{
+    if ( p2m_is_ram(old) ||
+         (p2m_is_hole(old) && new == p2m_mmio_dm) ||
+         (old == p2m_ioreq_server && new == p2m_ram_rw) )
+        return 1;
+
+    return 0;
+}
+
+static int hvmop_set_mem_type(
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_set_mem_type_t) arg,
+    unsigned long *iter)
+{
+    unsigned long start_iter = *iter;
+    struct xen_hvm_set_mem_type a;
+    struct domain *d;
+    int rc;
+
+    /* Interface types to internal p2m types */
+    static const p2m_type_t memtype[] = {
+        [HVMMEM_ram_rw]  = p2m_ram_rw,
+        [HVMMEM_ram_ro]  = p2m_ram_ro,
+        [HVMMEM_mmio_dm] = p2m_mmio_dm,
+        [HVMMEM_unused] = p2m_invalid,
+        [HVMMEM_ioreq_server] = p2m_ioreq_server
+    };
+
+    if ( copy_from_guest(&a, arg, 1) )
+        return -EFAULT;
+
+    rc = rcu_lock_remote_domain_by_id(a.domid, &d);
+    if ( rc != 0 )
+        return rc;
+
+    rc = -EINVAL;
+    if ( !is_hvm_domain(d) )
+        goto out;
+
+    rc = xsm_hvm_control(XSM_DM_PRIV, d, HVMOP_set_mem_type);
+    if ( rc )
+        goto out;
+
+    rc = -EINVAL;
+    if ( a.nr < start_iter ||
+         ((a.first_pfn + a.nr - 1) < a.first_pfn) ||
+         ((a.first_pfn + a.nr - 1) > domain_get_maximum_gpfn(d)) )
+        goto out;
+
+    if ( a.hvmmem_type >= ARRAY_SIZE(memtype) ||
+         unlikely(a.hvmmem_type == HVMMEM_unused) )
+        goto out;
+
+    while ( a.nr > start_iter )
+    {
+        unsigned long pfn = a.first_pfn + start_iter;
+        p2m_type_t t;
+
+        get_gfn_unshare(d, pfn, &t);
+        if ( p2m_is_paging(t) )
+        {
+            put_gfn(d, pfn);
+            p2m_mem_paging_populate(d, pfn);
+            rc = -EAGAIN;
+            goto out;
+        }
+        if ( p2m_is_shared(t) )
+        {
+            put_gfn(d, pfn);
+            rc = -EAGAIN;
+            goto out;
+        }
+        if ( !hvm_allow_p2m_type_change(t, memtype[a.hvmmem_type]) )
+        {
+            put_gfn(d, pfn);
+            goto out;
+        }
+
+        rc = p2m_change_type_one(d, pfn, t, memtype[a.hvmmem_type]);
+        put_gfn(d, pfn);
+
+        if ( rc )
+            goto out;
+
+        /* Check for continuation if it's not the last iteration */
+        if ( a.nr > ++start_iter && !(start_iter & HVMOP_op_mask) &&
+             hypercall_preempt_check() )
+        {
+            rc = -ERESTART;
+            goto out;
+        }
+    }
+    rc = 0;
+
+ out:
+    rcu_unlock_domain(d);
+    *iter = start_iter;
+
+    return rc;
+}
+
 long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     unsigned long start_iter, mask;
@@ -5542,137 +5698,15 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
     }
 
     case HVMOP_get_mem_type:
-    {
-        struct xen_hvm_get_mem_type a;
-        struct domain *d;
-        p2m_type_t t;
-
-        if ( copy_from_guest(&a, arg, 1) )
-            return -EFAULT;
-
-        d = rcu_lock_domain_by_any_id(a.domid);
-        if ( d == NULL )
-            return -ESRCH;
-
-        rc = xsm_hvm_param(XSM_TARGET, d, op);
-        if ( unlikely(rc) )
-            /* nothing */;
-        else if ( likely(is_hvm_domain(d)) )
-        {
-            /* Use get_gfn query as we are interested in the current
-             * type, not in allocating or unsharing. That'll happen
-             * on access. */
-            get_gfn_query_unlocked(d, a.pfn, &t);
-            if ( p2m_is_mmio(t) )
-                a.mem_type = HVMMEM_mmio_dm;
-            else if ( t == p2m_ioreq_server )
-                a.mem_type = HVMMEM_ioreq_server;
-            else if ( p2m_is_readonly(t) )
-                a.mem_type = HVMMEM_ram_ro;
-            else if ( p2m_is_ram(t) )
-                a.mem_type = HVMMEM_ram_rw;
-            else if ( p2m_is_pod(t) )
-                a.mem_type = HVMMEM_ram_rw;
-            else if ( p2m_is_grant(t) )
-                a.mem_type = HVMMEM_ram_rw;
-            else
-                a.mem_type = HVMMEM_mmio_dm;
-            if ( __copy_to_guest(arg, &a, 1) )
-                rc = -EFAULT;
-        }
-        else
-            rc = -EINVAL;
-
-        rcu_unlock_domain(d);
+        rc = hvmop_get_mem_type(
+            guest_handle_cast(arg, xen_hvm_get_mem_type_t));
         break;
-    }
 
     case HVMOP_set_mem_type:
-    {
-        struct xen_hvm_set_mem_type a;
-        struct domain *d;
-
-        /* Interface types to internal p2m types */
-        static const p2m_type_t memtype[] = {
-            [HVMMEM_ram_rw]  = p2m_ram_rw,
-            [HVMMEM_ram_ro]  = p2m_ram_ro,
-            [HVMMEM_mmio_dm] = p2m_mmio_dm,
-            [HVMMEM_unused] = p2m_invalid,
-            [HVMMEM_ioreq_server] = p2m_ioreq_server
-        };
-
-        if ( copy_from_guest(&a, arg, 1) )
-            return -EFAULT;
-
-        rc = rcu_lock_remote_domain_by_id(a.domid, &d);
-        if ( rc != 0 )
-            return rc;
-
-        rc = -EINVAL;
-        if ( !is_hvm_domain(d) )
-            goto setmemtype_fail;
-
-        rc = xsm_hvm_control(XSM_DM_PRIV, d, op);
-        if ( rc )
-            goto setmemtype_fail;
-
-        rc = -EINVAL;
-        if ( a.nr < start_iter ||
-             ((a.first_pfn + a.nr - 1) < a.first_pfn) ||
-             ((a.first_pfn + a.nr - 1) > domain_get_maximum_gpfn(d)) )
-            goto setmemtype_fail;
-
-        if ( a.hvmmem_type >= ARRAY_SIZE(memtype) ||
-             unlikely(a.hvmmem_type == HVMMEM_unused) )
-            goto setmemtype_fail;
-
-        while ( a.nr > start_iter )
-        {
-            unsigned long pfn = a.first_pfn + start_iter;
-            p2m_type_t t;
-
-            get_gfn_unshare(d, pfn, &t);
-            if ( p2m_is_paging(t) )
-            {
-                put_gfn(d, pfn);
-                p2m_mem_paging_populate(d, pfn);
-                rc = -EAGAIN;
-                goto setmemtype_fail;
-            }
-            if ( p2m_is_shared(t) )
-            {
-                put_gfn(d, pfn);
-                rc = -EAGAIN;
-                goto setmemtype_fail;
-            }
-            if ( !p2m_is_ram(t) &&
-                 (!p2m_is_hole(t) || a.hvmmem_type != HVMMEM_mmio_dm) &&
-                 (t != p2m_ioreq_server || a.hvmmem_type != HVMMEM_ram_rw) )
-            {
-                put_gfn(d, pfn);
-                goto setmemtype_fail;
-            }
-
-            rc = p2m_change_type_one(d, pfn, t, memtype[a.hvmmem_type]);
-            put_gfn(d, pfn);
-            if ( rc )
-                goto setmemtype_fail;
-
-            /* Check for continuation if it's not the last interation */
-            if ( a.nr > ++start_iter && !(start_iter & HVMOP_op_mask) &&
-                 hypercall_preempt_check() )
-            {
-                rc = -ERESTART;
-                goto setmemtype_fail;
-            }
-        }
-
-        rc = 0;
-
-    setmemtype_fail:
-        rcu_unlock_domain(d);
+        rc = hvmop_set_mem_type(
+            guest_handle_cast(arg, xen_hvm_set_mem_type_t),
+            &start_iter);
         break;
-    }
 
     case HVMOP_pagetable_dying:
     {
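A note on the continuation handling above, since it is easy to miss:
hvmop_set_mem_type() reports progress back through *iter and returns
-ERESTART when preempted, relying on the pre-existing continuation scheme
in do_hvm_op(). The resume index is carried in the bits of the hypercall's
op argument above HVMOP_op_mask, which is why the comment retained above
calls that mask "effectively part of the ABI". A simplified sketch of the
surrounding do_hvm_op() logic (context only, not part of this diff):

  long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
  {
      unsigned long start_iter, mask;
      long rc = 0;

      switch ( op & HVMOP_op_mask )
      {
      default:
          mask = ~0UL;             /* ops that are never continued */
          break;
      case HVMOP_modified_memory:
      case HVMOP_set_mem_type:
          mask = HVMOP_op_mask;    /* low bits select the op... */
          break;
      }
      start_iter = op & ~mask;     /* ...high bits hold the resume index */

      switch ( op &= mask )
      {
          /* ... HVMOP_set_mem_type dispatches to hvmop_set_mem_type(),
           * which advances start_iter as it goes ... */
      }

      if ( rc == -ERESTART )
          /* Re-enter this hypercall with the resume index folded back in. */
          rc = hypercall_create_continuation(__HYPERVISOR_hvm_op, "lh",
                                             op | start_iter, arg);

      return rc;
  }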