From patchwork Thu Jan 28 20:58:07 2016
From: Tamas K Lengyel <tlengyel@novetta.com>
To: xen-devel@lists.xenproject.org
Date: Thu, 28 Jan 2016 13:58:07 -0700
Message-Id: <1454014688-25060-1-git-send-email-tlengyel@novetta.com>
X-Mailer: git-send-email 2.1.4
Cc: Wei Liu, Ian Campbell, Razvan Cojocaru, Stefano Stabellini, George Dunlap,
    Andrew Cooper, Ian Jackson, Stefano Stabellini, Jan Beulich,
    Tamas K Lengyel, Keir Fraser
Subject: [Xen-devel] [PATCH v2 1/2] altp2m: Merge p2m_set_altp2m_mem_access and p2m_set_mem_access
List-Id: Xen developer discussion

The altp2m subsystem in its current form uses its own HVMOP hypercall to
set mem_access permissions, duplicating some of the code already present
for setting regular mem_access permissions. In this patch we consolidate
the two and update the corresponding tools.
Signed-off-by: Tamas K Lengyel
Cc: Ian Jackson
Cc: Stefano Stabellini
Cc: Ian Campbell
Cc: Wei Liu
Cc: Razvan Cojocaru
Cc: Stefano Stabellini
Cc: Keir Fraser
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: George Dunlap
Acked-by: Wei Liu
---
v2: Don't deprecate the HVMOP hypercall for setting mem_access
    Use unsigned int instead of unsigned long
---
 tools/libxc/include/xenctrl.h       |   5 +-
 tools/libxc/xc_altp2m.c             |  25 ------
 tools/libxc/xc_mem_access.c         |  14 +--
 tools/tests/xen-access/xen-access.c |  53 ++++-------
 xen/arch/arm/p2m.c                  |   9 +-
 xen/arch/x86/hvm/hvm.c              |   6 +-
 xen/arch/x86/mm/p2m.c               | 169 ++++++++++++++++--------------------
 xen/common/mem_access.c             |   2 +-
 xen/include/asm-x86/p2m.h           |   4 -
 xen/include/public/memory.h         |   3 +
 xen/include/xen/p2m-common.h        |   3 +-
 11 files changed, 117 insertions(+), 176 deletions(-)

diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index e632b1e..b4e57d8 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -2027,9 +2027,6 @@ int xc_altp2m_destroy_view(xc_interface *handle, domid_t domid,
 /* Switch all vCPUs of the domain to the specified altp2m view */
 int xc_altp2m_switch_to_view(xc_interface *handle, domid_t domid,
                              uint16_t view_id);
-int xc_altp2m_set_mem_access(xc_interface *handle, domid_t domid,
-                             uint16_t view_id, xen_pfn_t gfn,
-                             xenmem_access_t access);
 int xc_altp2m_change_gfn(xc_interface *handle, domid_t domid,
                          uint16_t view_id, xen_pfn_t old_gfn,
                          xen_pfn_t new_gfn);
@@ -2062,7 +2059,7 @@ int xc_mem_paging_load(xc_interface *xch, domid_t domain_id,
  */
 int xc_set_mem_access(xc_interface *xch, domid_t domain_id,
                       xenmem_access_t access, uint64_t first_pfn,
-                      uint32_t nr);
+                      uint32_t nr, uint16_t altp2m_idx);
 
 /*
  * Gets the mem access for the given page (returned in access on success)
diff --git a/tools/libxc/xc_altp2m.c b/tools/libxc/xc_altp2m.c
index 0639632..4f44a7b 100644
--- a/tools/libxc/xc_altp2m.c
+++ b/tools/libxc/xc_altp2m.c
@@ -163,31 +163,6 @@ int xc_altp2m_switch_to_view(xc_interface *handle, domid_t domid,
     return rc;
 }
 
-int xc_altp2m_set_mem_access(xc_interface *handle, domid_t domid,
-                             uint16_t view_id, xen_pfn_t gfn,
-                             xenmem_access_t access)
-{
-    int rc;
-    DECLARE_HYPERCALL_BUFFER(xen_hvm_altp2m_op_t, arg);
-
-    arg = xc_hypercall_buffer_alloc(handle, arg, sizeof(*arg));
-    if ( arg == NULL )
-        return -1;
-
-    arg->version = HVMOP_ALTP2M_INTERFACE_VERSION;
-    arg->cmd = HVMOP_altp2m_set_mem_access;
-    arg->domain = domid;
-    arg->u.set_mem_access.view = view_id;
-    arg->u.set_mem_access.hvmmem_access = access;
-    arg->u.set_mem_access.gfn = gfn;
-
-    rc = xencall2(handle->xcall, __HYPERVISOR_hvm_op, HVMOP_altp2m,
-                  HYPERCALL_BUFFER_AS_ARG(arg));
-
-    xc_hypercall_buffer_free(handle, arg);
-    return rc;
-}
-
 int xc_altp2m_change_gfn(xc_interface *handle, domid_t domid,
                          uint16_t view_id, xen_pfn_t old_gfn,
                          xen_pfn_t new_gfn)
diff --git a/tools/libxc/xc_mem_access.c b/tools/libxc/xc_mem_access.c
index 3634c39..d6fb409 100644
--- a/tools/libxc/xc_mem_access.c
+++ b/tools/libxc/xc_mem_access.c
@@ -27,15 +27,17 @@
 int xc_set_mem_access(xc_interface *xch,
                       domid_t domain_id,
                       xenmem_access_t access,
                       uint64_t first_pfn,
-                      uint32_t nr)
+                      uint32_t nr,
+                      uint16_t altp2m_idx)
 {
     xen_mem_access_op_t mao =
     {
-        .op     = XENMEM_access_op_set_access,
-        .domid  = domain_id,
-        .access = access,
-        .pfn    = first_pfn,
-        .nr     = nr
+        .op         = XENMEM_access_op_set_access,
+        .domid      = domain_id,
+        .access     = access,
+        .pfn        = first_pfn,
+        .nr         = nr,
+        .altp2m_idx = altp2m_idx
     };
 
     return do_memory_op(xch, XENMEM_access_op, &mao, sizeof(mao));
diff --git a/tools/tests/xen-access/xen-access.c b/tools/tests/xen-access/xen-access.c
index 7993947..2300e9a 100644
--- a/tools/tests/xen-access/xen-access.c
+++ b/tools/tests/xen-access/xen-access.c
@@ -448,9 +448,6 @@ int main(int argc, char *argv[])
     /* With altp2m we just create a new, restricted view of the memory */
     if ( altp2m )
     {
-        xen_pfn_t gfn = 0;
-        unsigned long perm_set = 0;
-
         rc = xc_altp2m_set_domain_state( xch, domain_id, 1 );
         if ( rc < 0 )
         {
@@ -466,17 +463,6 @@ int main(int argc, char *argv[])
         }
         DPRINTF("altp2m view created with id %u\n", altp2m_view_id);
 
-        DPRINTF("Setting altp2m mem_access permissions.. ");
-
-        for(; gfn < xenaccess->max_gpfn; ++gfn)
-        {
-            rc = xc_altp2m_set_mem_access( xch, domain_id, altp2m_view_id, gfn,
-                                           default_access);
-            if ( !rc )
-                perm_set++;
-        }
-
-        DPRINTF("done! Permissions set on %lu pages.\n", perm_set);
 
         rc = xc_altp2m_switch_to_view( xch, domain_id, altp2m_view_id );
         if ( rc < 0 )
@@ -493,25 +479,22 @@ int main(int argc, char *argv[])
         }
     }
 
-    if ( !altp2m )
+    /* Set the default access type and convert all pages to it */
+    rc = xc_set_mem_access(xch, domain_id, default_access, ~0ull, 0, 0);
+    if ( rc < 0 )
     {
-        /* Set the default access type and convert all pages to it */
-        rc = xc_set_mem_access(xch, domain_id, default_access, ~0ull, 0);
-        if ( rc < 0 )
-        {
-            ERROR("Error %d setting default mem access type\n", rc);
-            goto exit;
-        }
+        ERROR("Error %d setting default mem access type\n", rc);
+        goto exit;
+    }
 
-        rc = xc_set_mem_access(xch, domain_id, default_access, START_PFN,
-                               (xenaccess->max_gpfn - START_PFN) );
+    rc = xc_set_mem_access(xch, domain_id, default_access, START_PFN,
+                           (xenaccess->max_gpfn - START_PFN), altp2m_view_id );
 
-        if ( rc < 0 )
-        {
-            ERROR("Error %d setting all memory to access type %d\n", rc,
-                  default_access);
-            goto exit;
-        }
+    if ( rc < 0 )
+    {
+        ERROR("Error %d setting all memory to access type %d\n", rc,
+              default_access);
+        goto exit;
     }
 
     if ( breakpoint )
@@ -547,12 +530,12 @@ int main(int argc, char *argv[])
             for ( vcpu_id = 0; vcpu_id <
-            rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, ~0ull, 0);
-            rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, START_PFN,
-                                   (xenaccess->max_gpfn - START_PFN) );
+            rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, ~0ull, 0, 0);
+            rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, START_PFN,
+                                   (xenaccess->max_gpfn - START_PFN), 0 );
 
             shutting_down = 1;
@@ -623,7 +606,7 @@ int main(int argc, char *argv[])
                 else if ( default_access != after_first_access )
                 {
                     rc = xc_set_mem_access(xch, domain_id, after_first_access,
-                                           req.u.mem_access.gfn, 1);
+                                           req.u.mem_access.gfn, 1, 0);
                     if (rc < 0)
                     {
                         ERROR("Error %d setting gfn to access_type %d\n", rc,
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 2190908..8568087 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1709,13 +1709,13 @@ bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec)
     if ( npfec.write_access && xma == XENMEM_access_rx2rw )
     {
         rc = p2m_set_mem_access(v->domain, _gfn(paddr_to_pfn(gpa)), 1,
-                                0, ~0, XENMEM_access_rw);
+                                0, ~0, XENMEM_access_rw, 0);
         return false;
     }
     else if ( xma == XENMEM_access_n2rwx )
     {
         rc = p2m_set_mem_access(v->domain, _gfn(paddr_to_pfn(gpa)), 1,
-                                0, ~0, XENMEM_access_rwx);
+                                0, ~0, XENMEM_access_rwx, 0);
     }
 
     /* Otherwise, check if there is a vm_event monitor subscriber */
@@ -1737,7 +1737,7 @@ bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec)
             /* A listener is not required, so clear the access
              * restrictions. */
             rc = p2m_set_mem_access(v->domain, _gfn(paddr_to_pfn(gpa)), 1,
-                                    0, ~0, XENMEM_access_rwx);
+                                    0, ~0, XENMEM_access_rwx, 0);
         }
     }
 
@@ -1788,7 +1788,8 @@ bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec)
  * If gfn == INVALID_GFN, sets the default access type.
  */
 long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
-                        uint32_t start, uint32_t mask, xenmem_access_t access)
+                        uint32_t start, uint32_t mask, xenmem_access_t access,
+                        unsigned int altp2m_idx)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     p2m_access_t a;
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 674feea..37305fb 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -6398,9 +6398,9 @@ static int do_altp2m_op(
         if ( a.u.set_mem_access.pad )
             rc = -EINVAL;
         else
-            rc = p2m_set_altp2m_mem_access(d, a.u.set_mem_access.view,
-                                           _gfn(a.u.set_mem_access.gfn),
-                                           a.u.set_mem_access.hvmmem_access);
+            rc = p2m_set_mem_access(d, _gfn(a.u.set_mem_access.gfn), 1, 0, 0,
+                                    a.u.set_mem_access.hvmmem_access,
+                                    a.u.set_mem_access.view);
         break;
 
     case HVMOP_altp2m_change_gfn:
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index a45ee35..a3181f3 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1777,14 +1777,57 @@ bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
     return (p2ma == p2m_access_n2rwx);
 }
 
+static int p2m_set_altp2m_mem_access(struct domain *d, struct p2m_domain *hp2m,
+                                     struct p2m_domain *ap2m, p2m_access_t a,
+                                     unsigned long gfn)
+{
+    mfn_t mfn;
+    p2m_type_t t;
+    p2m_access_t old_a;
+    unsigned int page_order;
+    int rc;
+
+    mfn = ap2m->get_entry(ap2m, gfn, &t, &old_a, 0, NULL, NULL);
+
+    /* Check host p2m if no valid entry in alternate */
+    if ( !mfn_valid(mfn) )
+    {
+        mfn = hp2m->get_entry(hp2m, gfn, &t, &old_a,
+                              P2M_ALLOC | P2M_UNSHARE, &page_order, NULL);
+
+        rc = -ESRCH;
+        if ( !mfn_valid(mfn) || t != p2m_ram_rw )
+            goto out;
+
+        /* If this is a superpage, copy that first */
+        if ( page_order != PAGE_ORDER_4K )
+        {
+            unsigned long mask = ~((1UL << page_order) - 1);
+            gfn_t gfn2 = _gfn(gfn & mask);
+            mfn_t mfn2 = _mfn(mfn_x(mfn) & mask);
+
+            rc = ap2m->set_entry(ap2m, gfn_x(gfn2), mfn2, page_order, t, old_a, 1);
+            if ( rc )
+                goto out;
+        }
+    }
+
+    rc = ap2m->set_entry(ap2m, gfn, mfn, PAGE_ORDER_4K, t, a,
+                         (current->domain != d));
+
+ out:
+    return rc;
+}
+
 /*
  * Set access type for a region of gfns.
  * If gfn == INVALID_GFN, sets the default access type.
  */
 long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
-                        uint32_t start, uint32_t mask, xenmem_access_t access)
+                        uint32_t start, uint32_t mask, xenmem_access_t access,
+                        unsigned int altp2m_idx)
 {
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    struct p2m_domain *p2m = p2m_get_hostp2m(d), *ap2m = NULL;
     p2m_access_t a, _a;
     p2m_type_t t;
     mfn_t mfn;
@@ -1806,6 +1849,16 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
 #undef ACCESS
     };
 
+    /* altp2m view 0 is treated as the hostp2m */
+    if ( altp2m_idx )
+    {
+        if ( altp2m_idx >= MAX_ALTP2M ||
+             d->arch.altp2m_eptp[altp2m_idx] == INVALID_MFN )
+            return -EINVAL;
+
+        ap2m = d->arch.altp2m_p2m[altp2m_idx];
+    }
+
     switch ( access )
     {
     case 0 ... ARRAY_SIZE(memaccess) - 1:
@@ -1826,12 +1879,25 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
     }
 
     p2m_lock(p2m);
+    if ( ap2m )
+        p2m_lock(ap2m);
+
     for ( gfn_l = gfn_x(gfn) + start; nr > start; ++gfn_l )
     {
-        mfn = p2m->get_entry(p2m, gfn_l, &t, &_a, 0, NULL, NULL);
-        rc = p2m->set_entry(p2m, gfn_l, mfn, PAGE_ORDER_4K, t, a, -1);
-        if ( rc )
-            break;
+        if ( ap2m )
+        {
+            rc = p2m_set_altp2m_mem_access(d, p2m, ap2m, a, gfn_l);
+            /* If the corresponding mfn is invalid we will just skip it */
+            if ( rc && rc != -ESRCH )
+                break;
+        }
+        else
+        {
+            mfn = p2m->get_entry(p2m, gfn_l, &t, &_a, 0, NULL, NULL);
+            rc = p2m->set_entry(p2m, gfn_l, mfn, PAGE_ORDER_4K, t, a, -1);
+            if ( rc )
+                break;
+        }
 
         /* Check for continuation if it's not the last iteration. */
         if ( nr > ++start && !(start & mask) &&
              hypercall_preempt_check() )
@@ -1840,7 +1906,11 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
             break;
         }
     }
+
+    if ( ap2m )
+        p2m_unlock(ap2m);
     p2m_unlock(p2m);
+
     return rc;
 }
 
@@ -2395,93 +2465,6 @@ int p2m_switch_domain_altp2m_by_id(struct domain *d, unsigned int idx)
     return rc;
 }
 
-int p2m_set_altp2m_mem_access(struct domain *d, unsigned int idx,
-                              gfn_t gfn, xenmem_access_t access)
-{
-    struct p2m_domain *hp2m, *ap2m;
-    p2m_access_t req_a, old_a;
-    p2m_type_t t;
-    mfn_t mfn;
-    unsigned int page_order;
-    int rc = -EINVAL;
-
-    static const p2m_access_t memaccess[] = {
-#define ACCESS(ac) [XENMEM_access_##ac] = p2m_access_##ac
-        ACCESS(n),
-        ACCESS(r),
-        ACCESS(w),
-        ACCESS(rw),
-        ACCESS(x),
-        ACCESS(rx),
-        ACCESS(wx),
-        ACCESS(rwx),
-#undef ACCESS
-    };
-
-    if ( idx >= MAX_ALTP2M || d->arch.altp2m_eptp[idx] == INVALID_MFN )
-        return rc;
-
-    ap2m = d->arch.altp2m_p2m[idx];
-
-    switch ( access )
-    {
-    case 0 ... ARRAY_SIZE(memaccess) - 1:
-        req_a = memaccess[access];
-        break;
-    case XENMEM_access_default:
-        req_a = ap2m->default_access;
-        break;
-    default:
-        return rc;
-    }
-
-    /* If request to set default access */
-    if ( gfn_x(gfn) == INVALID_GFN )
-    {
-        ap2m->default_access = req_a;
-        return 0;
-    }
-
-    hp2m = p2m_get_hostp2m(d);
-
-    p2m_lock(ap2m);
-
-    mfn = ap2m->get_entry(ap2m, gfn_x(gfn), &t, &old_a, 0, NULL, NULL);
-
-    /* Check host p2m if no valid entry in alternate */
-    if ( !mfn_valid(mfn) )
-    {
-        mfn = hp2m->get_entry(hp2m, gfn_x(gfn), &t, &old_a,
-                              P2M_ALLOC | P2M_UNSHARE, &page_order, NULL);
-
-        if ( !mfn_valid(mfn) || t != p2m_ram_rw )
-            goto out;
-
-        /* If this is a superpage, copy that first */
-        if ( page_order != PAGE_ORDER_4K )
-        {
-            gfn_t gfn2;
-            unsigned long mask;
-            mfn_t mfn2;
-
-            mask = ~((1UL << page_order) - 1);
-            gfn2 = _gfn(gfn_x(gfn) & mask);
-            mfn2 = _mfn(mfn_x(mfn) & mask);
-
-            if ( ap2m->set_entry(ap2m, gfn_x(gfn2), mfn2, page_order, t, old_a, 1) )
-                goto out;
-        }
-    }
-
-    if ( !ap2m->set_entry(ap2m, gfn_x(gfn), mfn, PAGE_ORDER_4K, t, req_a,
-                          (current->domain != d)) )
-        rc = 0;
-
- out:
-    p2m_unlock(ap2m);
-    return rc;
-}
-
 int p2m_change_altp2m_gfn(struct domain *d, unsigned int idx,
                           gfn_t old_gfn, gfn_t new_gfn)
 {
diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c
index 159c036..0411443 100644
--- a/xen/common/mem_access.c
+++ b/xen/common/mem_access.c
@@ -67,7 +67,7 @@ int mem_access_memop(unsigned long cmd,
             break;
 
         rc = p2m_set_mem_access(d, _gfn(mao.pfn), mao.nr, start_iter,
-                                MEMOP_CMD_MASK, mao.access);
+                                MEMOP_CMD_MASK, mao.access, mao.altp2m_idx);
         if ( rc > 0 )
         {
             ASSERT(!(rc & MEMOP_CMD_MASK));
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index fa46dd9..c0df1ea 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -808,10 +808,6 @@ int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx);
 /* Switch alternate p2m for entire domain */
 int p2m_switch_domain_altp2m_by_id(struct domain *d, unsigned int idx);
 
-/* Set access type for a gfn */
-int p2m_set_altp2m_mem_access(struct domain *d, unsigned int idx,
-                              gfn_t gfn, xenmem_access_t access);
-
 /* Change a gfn->mfn mapping */
 int p2m_change_altp2m_gfn(struct domain *d, unsigned int idx,
                           gfn_t old_gfn, gfn_t new_gfn);
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index 4df38d6..30ccb41 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -423,11 +423,14 @@ struct xen_mem_access_op {
    /* xenmem_access_t */
    uint8_t access;
    domid_t domid;
+   uint16_t altp2m_idx;
+   uint16_t _pad;
    /*
     * Number of pages for set op
     * Ignored on setting default access and other ops
     */
    uint32_t nr;
+   uint32_t _pad2;
    /*
     * First pfn for set op
     * pfn for get op
diff --git a/xen/include/xen/p2m-common.h b/xen/include/xen/p2m-common.h
index 47c40c7..8b70459 100644
--- a/xen/include/xen/p2m-common.h
+++ b/xen/include/xen/p2m-common.h
@@ -49,7 +49,8 @@ int unmap_mmio_regions(struct domain *d,
  * If gfn == INVALID_GFN, sets the default access type.
  */
 long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
-                        uint32_t start, uint32_t mask, xenmem_access_t access);
+                        uint32_t start, uint32_t mask, xenmem_access_t access,
+                        unsigned int altp2m_idx);
 
 /*
  * Get access type for a gfn.