From patchwork Tue Dec 6 13:46:17 2016
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 9462639
From: Paul Durrant
To: xen-devel@lists.xenproject.org
Date: Tue, 6 Dec 2016 13:46:17 +0000
Message-ID: <1481031979-4751-7-git-send-email-paul.durrant@citrix.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1481031979-4751-1-git-send-email-paul.durrant@citrix.com>
References: <1481031979-4751-1-git-send-email-paul.durrant@citrix.com>
Cc: Wei Liu, Andrew Cooper, Ian Jackson, Paul Durrant, Jan Beulich,
 Daniel De Graaf
Subject: [Xen-devel] [PATCH v2 6/8] dm_op: convert HVMOP_set_mem_type

This patch removes the need for handling HVMOP restarts, so that
infrastructure is removed.

NOTE: This patch also modifies the type of the 'nr' argument of
      xc_hvm_set_mem_type() from uint64_t to uint32_t. In practice the
      value passed was always truncated to 32 bits.

Suggested-by: Jan Beulich
Signed-off-by: Paul Durrant
---
Cc: Jan Beulich
Cc: Ian Jackson
Cc: Wei Liu
Cc: Andrew Cooper
Cc: Daniel De Graaf

v2:
 - Addressed several comments from Jan.
---
 tools/libxc/include/xenctrl.h       |   2 +-
 tools/libxc/xc_misc.c               |  29 +++-----
 xen/arch/x86/hvm/dm.c               |  95 +++++++++++++++++++++++++
 xen/arch/x86/hvm/hvm.c              | 136 +-----------------------------------
 xen/include/public/hvm/dm_op.h      |  22 ++++++
 xen/include/public/hvm/hvm_op.h     |  20 ------
 xen/xsm/flask/policy/access_vectors |   2 +-
 7 files changed, 130 insertions(+), 176 deletions(-)

diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 9950690..32a1e9e 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -1634,7 +1634,7 @@ int xc_hvm_modified_memory(
  * Allowed types are HVMMEM_ram_rw, HVMMEM_ram_ro, HVMMEM_mmio_dm
  */
 int xc_hvm_set_mem_type(
-    xc_interface *xch, domid_t dom, hvmmem_type_t memtype, uint64_t first_pfn, uint64_t nr);
+    xc_interface *xch, domid_t dom, hvmmem_type_t memtype, uint64_t first_pfn, uint32_t nr);
 
 /*
  * Injects a hardware/software CPU trap, to take effect the next time the HVM
diff --git a/tools/libxc/xc_misc.c b/tools/libxc/xc_misc.c
index 597df99..5b06d6b 100644
--- a/tools/libxc/xc_misc.c
+++ b/tools/libxc/xc_misc.c
@@ -590,30 +590,21 @@ int xc_hvm_modified_memory(
 }
 
 int xc_hvm_set_mem_type(
-    xc_interface *xch, domid_t dom, hvmmem_type_t mem_type, uint64_t first_pfn, uint64_t nr)
+    xc_interface *xch, domid_t dom, hvmmem_type_t mem_type, uint64_t first_pfn, uint32_t nr)
 {
-    DECLARE_HYPERCALL_BUFFER(struct xen_hvm_set_mem_type, arg);
-    int rc;
-
-    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
-    if ( arg == NULL )
-    {
-        PERROR("Could not allocate memory for xc_hvm_set_mem_type hypercall");
-        return -1;
-    }
+    struct xen_dm_op op;
+    struct xen_dm_op_set_mem_type *data;
 
-    arg->domid = dom;
-    arg->hvmmem_type = mem_type;
-    arg->first_pfn = first_pfn;
-    arg->nr = nr;
+    memset(&op, 0, sizeof(op));
 
-    rc = xencall2(xch->xcall, __HYPERVISOR_hvm_op,
-                  HVMOP_set_mem_type,
-                  HYPERCALL_BUFFER_AS_ARG(arg));
+    op.op = XEN_DMOP_set_mem_type;
+    data = &op.u.set_mem_type;
 
-    xc_hypercall_buffer_free(xch, arg);
+    data->mem_type = mem_type;
+    data->first_pfn = first_pfn;
+    data->nr = nr;
 
-    return rc;
+    return do_dm_op(xch, dom, 1, &op, sizeof(op));
 }
 
 int xc_hvm_inject_trap(
diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index 4e7d8f9..3737372 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -199,6 +199,87 @@ static int modified_memory(struct domain *d, xen_pfn_t *first_pfn,
     return rc;
 }
 
+static bool allow_p2m_type_change(p2m_type_t old, p2m_type_t new)
+{
+    return ( p2m_is_ram(old) ||
+             (p2m_is_hole(old) && new == p2m_mmio_dm) ||
+             (old == p2m_ioreq_server && new == p2m_ram_rw) );
+}
+
+static int set_mem_type(struct domain *d, hvmmem_type_t mem_type,
+                        xen_pfn_t *first_pfn, unsigned int *nr)
+{
+    xen_pfn_t last_pfn = *first_pfn + *nr - 1;
+    unsigned int iter;
+    int rc;
+
+    /* Interface types to internal p2m types */
+    static const p2m_type_t memtype[] = {
+        [HVMMEM_ram_rw]  = p2m_ram_rw,
+        [HVMMEM_ram_ro]  = p2m_ram_ro,
+        [HVMMEM_mmio_dm] = p2m_mmio_dm,
+        [HVMMEM_unused] = p2m_invalid,
+        [HVMMEM_ioreq_server] = p2m_ioreq_server
+    };
+
+    if ( (*first_pfn > last_pfn) ||
+         (last_pfn > domain_get_maximum_gpfn(d)) )
+        return -EINVAL;
+
+    if ( mem_type >= ARRAY_SIZE(memtype) ||
+         unlikely(mem_type == HVMMEM_unused) )
+        return -EINVAL;
+
+    iter = 0;
+    rc = 0;
+    while ( iter < *nr )
+    {
+        unsigned long pfn = *first_pfn + iter;
+        p2m_type_t t;
+
+        get_gfn_unshare(d, pfn, &t);
+        if ( p2m_is_paging(t) )
+        {
+            put_gfn(d, pfn);
+            p2m_mem_paging_populate(d, pfn);
+            return -EAGAIN;
+        }
+
+        if ( p2m_is_shared(t) )
+            rc = -EAGAIN;
+        else if ( !allow_p2m_type_change(t, memtype[mem_type]) )
+            rc = -EINVAL;
+        else
+            rc = p2m_change_type_one(d, pfn, t, memtype[mem_type]);
+
+        put_gfn(d, pfn);
+
+        if ( rc )
+            break;
+
+        iter++;
+
+        /*
+         * Check for continuation every 256th iteration and if the
+         * iteration is not the last.
+         */
+        if ( (iter < *nr) && ((iter & 0xff) == 0) &&
+             hypercall_preempt_check() )
+        {
+            rc = -ERESTART;
+            break;
+        }
+    }
+
+    if ( rc == -ERESTART )
+    {
+        *first_pfn += iter;
+        *nr -= iter;
+    }
+
+    return rc;
+}
+
 long do_dm_op(domid_t domid,
               unsigned int nr_bufs,
               XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs)
@@ -384,6 +465,20 @@ long do_dm_op(domid_t domid,
         break;
     }
 
+    case XEN_DMOP_set_mem_type:
+    {
+        struct xen_dm_op_set_mem_type *data =
+            &op.u.set_mem_type;
+
+        rc = -EINVAL;
+        if ( data->pad )
+            break;
+
+        rc = set_mem_type(d, data->mem_type, &data->first_pfn,
+                          &data->nr);
+        break;
+    }
+
     default:
         rc = -EOPNOTSUPP;
         break;
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 3760e0b..4a24e4e 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -5266,132 +5266,11 @@ static int hvmop_get_mem_type(
     return rc;
 }
 
-/*
- * Note that this value is effectively part of the ABI, even if we don't need
- * to make it a formal part of it: A guest suspended for migration in the
- * middle of a continuation would fail to work if resumed on a hypervisor
- * using a different value.
- */
-#define HVMOP_op_mask 0xff
-
-static bool_t hvm_allow_p2m_type_change(p2m_type_t old, p2m_type_t new)
-{
-    if ( p2m_is_ram(old) ||
-         (p2m_is_hole(old) && new == p2m_mmio_dm) ||
-         (old == p2m_ioreq_server && new == p2m_ram_rw) )
-        return 1;
-
-    return 0;
-}
-
-static int hvmop_set_mem_type(
-    XEN_GUEST_HANDLE_PARAM(xen_hvm_set_mem_type_t) arg,
-    unsigned long *iter)
-{
-    unsigned long start_iter = *iter;
-    struct xen_hvm_set_mem_type a;
-    struct domain *d;
-    int rc;
-
-    /* Interface types to internal p2m types */
-    static const p2m_type_t memtype[] = {
-        [HVMMEM_ram_rw]  = p2m_ram_rw,
-        [HVMMEM_ram_ro]  = p2m_ram_ro,
-        [HVMMEM_mmio_dm] = p2m_mmio_dm,
-        [HVMMEM_unused] = p2m_invalid,
-        [HVMMEM_ioreq_server] = p2m_ioreq_server
-    };
-
-    if ( copy_from_guest(&a, arg, 1) )
-        return -EFAULT;
-
-    rc = rcu_lock_remote_domain_by_id(a.domid, &d);
-    if ( rc != 0 )
-        return rc;
-
-    rc = -EINVAL;
-    if ( !is_hvm_domain(d) )
-        goto out;
-
-    rc = xsm_hvm_control(XSM_DM_PRIV, d, HVMOP_set_mem_type);
-    if ( rc )
-        goto out;
-
-    rc = -EINVAL;
-    if ( a.nr < start_iter ||
-         ((a.first_pfn + a.nr - 1) < a.first_pfn) ||
-         ((a.first_pfn + a.nr - 1) > domain_get_maximum_gpfn(d)) )
-        goto out;
-
-    if ( a.hvmmem_type >= ARRAY_SIZE(memtype) ||
-         unlikely(a.hvmmem_type == HVMMEM_unused) )
-        goto out;
-
-    while ( a.nr > start_iter )
-    {
-        unsigned long pfn = a.first_pfn + start_iter;
-        p2m_type_t t;
-
-        get_gfn_unshare(d, pfn, &t);
-        if ( p2m_is_paging(t) )
-        {
-            put_gfn(d, pfn);
-            p2m_mem_paging_populate(d, pfn);
-            rc = -EAGAIN;
-            goto out;
-        }
-        if ( p2m_is_shared(t) )
-        {
-            put_gfn(d, pfn);
-            rc = -EAGAIN;
-            goto out;
-        }
-        if ( !hvm_allow_p2m_type_change(t, memtype[a.hvmmem_type]) )
-        {
-            put_gfn(d, pfn);
-            goto out;
-        }
-
-        rc = p2m_change_type_one(d, pfn, t, memtype[a.hvmmem_type]);
-        put_gfn(d, pfn);
-
-        if ( rc )
-            goto out;
-
-        /* Check for continuation if it's not the last interation */
-        if ( a.nr > ++start_iter && !(start_iter & HVMOP_op_mask) &&
-             hypercall_preempt_check() )
-        {
-            rc = -ERESTART;
-            goto out;
-        }
-    }
-    rc = 0;
-
- out:
-    rcu_unlock_domain(d);
-    *iter = start_iter;
-
-    return rc;
-}
-
 long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
-    unsigned long start_iter, mask;
     long rc = 0;
 
-    switch ( op & HVMOP_op_mask )
-    {
-    default:
-        mask = ~0UL;
-        break;
-    case HVMOP_set_mem_type:
-        mask = HVMOP_op_mask;
-        break;
-    }
-
-    start_iter = op & ~mask;
-    switch ( op &= mask )
+    switch ( op )
     {
     case HVMOP_set_evtchn_upcall_vector:
         rc = hvmop_set_evtchn_upcall_vector(
@@ -5422,12 +5301,6 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
             guest_handle_cast(arg, xen_hvm_get_mem_type_t));
         break;
 
-    case HVMOP_set_mem_type:
-        rc = hvmop_set_mem_type(
-            guest_handle_cast(arg, xen_hvm_set_mem_type_t),
-            &start_iter);
-        break;
-
     case HVMOP_pagetable_dying:
     {
         struct xen_hvm_pagetable_dying a;
@@ -5536,13 +5409,6 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
     }
     }
 
-    if ( rc == -ERESTART )
-    {
-        ASSERT(!(start_iter & mask));
-        rc = hypercall_create_continuation(__HYPERVISOR_hvm_op, "lh",
-                                           op | start_iter, arg);
-    }
-
     return rc;
 }
diff --git a/xen/include/public/hvm/dm_op.h b/xen/include/public/hvm/dm_op.h
index 1a1b784..4b74c91 100644
--- a/xen/include/public/hvm/dm_op.h
+++ b/xen/include/public/hvm/dm_op.h
@@ -251,6 +251,27 @@ struct xen_dm_op_modified_memory {
     uint64_aligned_t first_pfn;
 };
 
+/*
+ * XEN_DMOP_set_mem_type: Notify that a region of memory is to be treated
+ *                        in a specific way. (See definition of
+ *                        hvmmem_type_t).
+ *
+ * NOTE: In the event of a continuation (return code -ERESTART), the
+ *       @first_pfn is set to the value of the pfn of the remaining
+ *       region and @nr reduced to the size of the remaining region.
+ */
+#define XEN_DMOP_set_mem_type 12
+
+struct xen_dm_op_set_mem_type {
+    /* IN - number of contiguous pages */
+    uint32_t nr;
+    /* IN - new hvmmem_type_t of region */
+    uint16_t mem_type;
+    uint16_t pad;
+    /* IN - first pfn in region */
+    uint64_aligned_t first_pfn;
+};
+
 struct xen_dm_op {
     uint32_t op;
@@ -267,6 +288,7 @@ struct xen_dm_op {
         struct xen_dm_op_set_isa_irq_level set_isa_irq_level;
         struct xen_dm_op_set_pci_link_route set_pci_link_route;
         struct xen_dm_op_modified_memory modified_memory;
+        struct xen_dm_op_set_mem_type set_mem_type;
     } u;
 };
 
diff --git a/xen/include/public/hvm/hvm_op.h b/xen/include/public/hvm/hvm_op.h
index 76e1b78..d7e2f12 100644
--- a/xen/include/public/hvm/hvm_op.h
+++ b/xen/include/public/hvm/hvm_op.h
@@ -96,26 +96,6 @@ typedef enum {
     HVMMEM_ioreq_server
 } hvmmem_type_t;
 
-/* Following tools-only interfaces may change in future. */
-#if defined(__XEN__) || defined(__XEN_TOOLS__)
-
-#define HVMOP_set_mem_type 8
-/* Notify that a region of memory is to be treated in a specific way. */
-struct xen_hvm_set_mem_type {
-    /* Domain to be updated. */
-    domid_t domid;
-    /* Memory type */
-    uint16_t hvmmem_type;
-    /* Number of pages. */
-    uint32_t nr;
-    /* First pfn. */
-    uint64_aligned_t first_pfn;
-};
-typedef struct xen_hvm_set_mem_type xen_hvm_set_mem_type_t;
-DEFINE_XEN_GUEST_HANDLE(xen_hvm_set_mem_type_t);
-
-#endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
-
 /* Hint from PV drivers for pagetable destruction. */
 #define HVMOP_pagetable_dying 9
 struct xen_hvm_pagetable_dying {
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 2041ca5..125210b 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -260,7 +260,7 @@ class hvm
     bind_irq
 # XEN_DOMCTL_pin_mem_cacheattr
     cacheattr
-# HVMOP_get_mem_type, HVMOP_set_mem_type,
+# HVMOP_get_mem_type,
 # HVMOP_set_mem_access, HVMOP_get_mem_access, HVMOP_pagetable_dying,
 # HVMOP_inject_trap
     hvmctl
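For reference, here is a minimal sketch (not part of the patch) of how a
caller might use the reworked libxc wrapper after this conversion. The
helper name, domid and pfn range are illustrative only; the continuation
is handled inside Xen by rewriting @first_pfn/@nr in the op buffer, so
the caller sees a single synchronous call.

    /*
     * Illustrative caller: mark a range of guest pfns read-only via the
     * DMOP-based xc_hvm_set_mem_type(). Helper name and range are made up.
     */
    #include <xenctrl.h>

    static int mark_range_ro(domid_t domid, uint64_t first_pfn, uint32_t nr)
    {
        xc_interface *xch = xc_interface_open(NULL, NULL, 0);
        int rc;

        if ( !xch )
            return -1;

        /* 'nr' is now uint32_t; wider values were always truncated anyway. */
        rc = xc_hvm_set_mem_type(xch, domid, HVMMEM_ram_ro, first_pfn, nr);

        xc_interface_close(xch);
        return rc;
    }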