From patchwork Mon Jul 10 13:13:24 2017
X-Patchwork-Submitter: Konrad Rzeszutek Wilk
X-Patchwork-Id: 9832963
Date: Mon, 10 Jul 2017 09:13:24 -0400
From: Konrad Rzeszutek Wilk
To: Jan Beulich
Cc: Olaf Hering, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] API to query NUMA node of mfn
Message-ID: <20170710131323.GF2461@localhost.localdomain>
References: <20170710101034.GA19754@aepfle.de>
 <596375FF020000780016A352@prv-mh.provo.novell.com>
In-Reply-To: <596375FF020000780016A352@prv-mh.provo.novell.com>
User-Agent: Mutt/1.8.0 (2017-02-23)

On Mon, Jul 10, 2017 at 04:41:35AM -0600, Jan Beulich wrote:
> >>> On 10.07.17 at 12:10, wrote:
> > I would like to verify on which NUMA node the PFNs used by an HVM guest
> > are located. Is there an API for that? Something like:
> >
> >   foreach (pfn, domid)
> >       mfns_per_node[pfn_to_node(pfn)]++
> >   foreach (node)
> >       printk("%x %x\n", node, mfns_per_node[node])
>
> phys_to_nid() ?

So I wrote some code for exactly this for Xen 4.4.4, along with creation
of a PGM map to visualize the NUMA node locality. Attaching them here.

> Jan
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> https://lists.xen.org/xen-devel

From a5e039801c989df29b704a4a5256715321906535 Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk
Date: Tue, 6 Jun 2017 20:31:21 -0400
Subject: [PATCH 1/7] xen/x86: XENDOMCTL_get_memlist: Make it work

This hypercall has a bunch of problems which this patch fixes.
Specifically, it is not preempt capable, takes a nested lock, and the
data is stale by the time you get it.

The nested lock (and order inversion) is due to the copy_to_guest_offset
call. The particular implementation (see __hvm_copy) makes P2M calls
(p2m_mem_paging_populate), which take the p2m_lock. We avoid this by
taking the p2m lock early (before the page_alloc_lock) in:

  if ( !guest_handle_okay(domctl->u.getmemlist.buffer, max_pfns) )

which takes the p2m lock and then unlocks. And since the buffer checks
out, we can use the fast variant of copy_to_guest (which still takes the
p2m lock).

We extend this thinking to the copying of the values to the guest. The
loop that copies the mfns[] to the buffer takes (potentially) a p2m lock
on every iteration, so to avoid holding the page_alloc_lock across it we
create a temporary array (mfns) which is filled while holding the
page_alloc_lock, and then copy it to the guest while holding no locks
(well, we still hold the domctl lock).

Preemption is used, and we also honor 'start_pfn', which is renamed to
'index' as there is no enforced order in which the pages correspond to
PFNs.
All of those are fixed by this patch. It also means that callers of
xc_get_pfn_list have to take into account that max_pfns may not equal
num_pfns, and loop around. See the patches "libxc: Use
XENDOMCTL_get_memlist properly" and "xen-mceinj: Loop around
xc_get_pfn_list".

Signed-off-by: Konrad Rzeszutek Wilk
---
 xen/arch/x86/domctl.c       | 76 ++++++++++++++++++++++++++++++---------------
 xen/arch/x86/mm/hap/hap.c   |  1 +
 xen/arch/x86/mm/p2m-ept.c   |  2 ++
 xen/include/asm-x86/p2m.h   |  2 ++
 xen/include/public/domctl.h | 36 ++++++++++++++++-----
 5 files changed, 84 insertions(+), 33 deletions(-)

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index bebe1fb..3af6b39 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -325,57 +325,83 @@ long arch_do_domctl(
 
     case XEN_DOMCTL_getmemlist:
     {
-        int i;
+#define XEN_DOMCTL_getmemlist_max_pfns (GB(1) / PAGE_SIZE)
+        unsigned int i = 0, idx = 0;
         unsigned long max_pfns = domctl->u.getmemlist.max_pfns;
+        unsigned long index = domctl->u.getmemlist.index;
         uint64_t mfn;
         struct page_info *page;
+        uint64_t *mfns;
 
         if ( unlikely(d->is_dying) )
        {
            ret = -EINVAL;
            break;
        }
+        /* XSA-74: This sub-hypercall is fixed. */
 
-        /*
-         * XSA-74: This sub-hypercall is broken in several ways:
-         * - lock order inversion (p2m locks inside page_alloc_lock)
-         * - no preemption on huge max_pfns input
-         * - not (re-)checking d->is_dying with page_alloc_lock held
-         * - not honoring start_pfn input (which libxc also doesn't set)
-         * Additionally it is rather useless, as the result is stale by the
-         * time the caller gets to look at it.
-         * As it only has a single, non-production consumer (xen-mceinj),
-         * rather than trying to fix it we restrict it for the time being.
-         */
-        if ( /* No nested locks inside copy_to_guest_offset(). */
-             paging_mode_external(current->domain) ||
-             /* Arbitrary limit capping processing time. */
-             max_pfns > GB(4) / PAGE_SIZE )
+        ret = -E2BIG;
+        if ( max_pfns > XEN_DOMCTL_getmemlist_max_pfns )
+            max_pfns = XEN_DOMCTL_getmemlist_max_pfns;
+
+        /* Report the max number we are OK with. */
+        if ( !max_pfns && guest_handle_is_null(domctl->u.getmemlist.buffer) )
         {
-            ret = -EOPNOTSUPP;
+            domctl->u.getmemlist.max_pfns = XEN_DOMCTL_getmemlist_max_pfns;
+            copyback = 1;
             break;
         }
 
-        spin_lock(&d->page_alloc_lock);
+        ret = -EINVAL;
+        if ( !guest_handle_okay(domctl->u.getmemlist.buffer, max_pfns) )
+            break;
+
+        mfns = xmalloc_array(uint64_t, max_pfns);
+        if ( !mfns )
+        {
+            ret = -ENOMEM;
+            break;
+        }
 
-        ret = i = 0;
+        ret = -EINVAL;
+        spin_lock(&d->page_alloc_lock);
         page_list_for_each(page, &d->page_list)
         {
-            if ( i >= max_pfns )
+            if ( idx >= max_pfns )
                 break;
+
+            if ( index > i++ )
+                continue;
+
+            if ( idx && !(idx & 0xFF) && hypercall_preempt_check() )
+                break;
+
             mfn = page_to_mfn(page);
-            if ( copy_to_guest_offset(domctl->u.getmemlist.buffer,
-                                      i, &mfn, 1) )
+            mfns[idx++] = mfn;
+        }
+        spin_unlock(&d->page_alloc_lock);
+
+        ret = 0;
+        for ( i = 0; i < idx; i++ )
+        {
+
+            if ( __copy_to_guest_offset(domctl->u.getmemlist.buffer,
+                                        i, &mfns[i], 1) )
             {
                 ret = -EFAULT;
                 break;
             }
-            ++i;
         }
 
-        spin_unlock(&d->page_alloc_lock);
-
         domctl->u.getmemlist.num_pfns = i;
+        /*
+         * A poor man's way of keeping track of P2M changes. If the P2M
+         * is changed, the version will change as well and the caller
+         * can redo its list.
+         */
+        domctl->u.getmemlist.version = p2m_get_hostp2m(d)->version;
+        copyback = 1;
+        xfree(mfns);
     }
     break;

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index ccc4174..0406c2a 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -709,6 +709,7 @@ hap_write_p2m_entry(struct vcpu *v, unsigned long gfn, l1_pgentry_t *p,
     if ( old_flags & _PAGE_PRESENT )
         flush_tlb_mask(d->domain_dirty_cpumask);
 
+    p2m_get_hostp2m(d)->version++;
     paging_unlock(d);
 
     if ( flush_nestedp2m )
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 72b3d0a..7da5b06 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -674,6 +674,8 @@ void ept_sync_domain(struct p2m_domain *p2m)
 {
     struct domain *d = p2m->domain;
     struct ept_data *ept = &p2m->ept;
+
+    p2m->version++;
     /* Only if using EPT and this domain has some VCPUs to dirty. */
     if ( !paging_mode_hap(d) || !d->vcpu || !d->vcpu[0] )
         return;
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index fcb50b1..b0549e8 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -293,6 +293,8 @@ struct p2m_domain {
         struct ept_data ept;
         /* NPT-equivalent structure could be added here. */
     };
+    /* OVM: Every update to the P2M increases this version. */
+    unsigned long version;
 };
 
 /* get host p2m table */
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 27f5001..2a25079 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -118,16 +118,36 @@ typedef struct xen_domctl_getdomaininfo xen_domctl_getdomaininfo_t;
 DEFINE_XEN_GUEST_HANDLE(xen_domctl_getdomaininfo_t);
 
 
-/* XEN_DOMCTL_getmemlist */
+/*
+ * XEN_DOMCTL_getmemlist
+ * Retrieve an array of MFNs of the guest.
+ *
+ * If the hypercall returns a zero value, then it has copied 'num_pfns'
+ * (up to 'max_pfns') of the MFNs into 'buffer', along with an updated
+ * 'version' (it may be the same across hypercalls; if it
+ * varies, the data is stale and it is recommended that the caller restart
+ * with 'index' being zero).
+ *
+ * If 'max_pfns' is zero and 'buffer' is NULL, the hypercall returns
+ * -E2BIG and updates 'max_pfns' with the recommended value to be used.
+ *
+ * Note that due to the asynchronous nature of hypercalls the domain might have
+ * added or removed MFNs, making this information stale. It is
+ * the responsibility of the toolstack to use the 'version' field to check
+ * between each invocation. If the version differs it should discard the stale
+ * data and start from scratch. It is OK for the toolstack to use the new
+ * 'version' field.
+ */
 struct xen_domctl_getmemlist {
-    /* IN variables. */
-    /* Max entries to write to output buffer. */
+    /* IN/OUT: Max entries to write to output buffer. If max_pfns is zero and
+     * buffer is NULL, this is set to the recommended max size of the buffer. */
     uint64_aligned_t max_pfns;
-    /* Start index in guest's page list. */
-    uint64_aligned_t start_pfn;
-    XEN_GUEST_HANDLE_64(uint64) buffer;
-    /* OUT variables. */
-    uint64_aligned_t num_pfns;
+    uint64_aligned_t index;   /* IN: Start index in guest's page list. */
+    XEN_GUEST_HANDLE_64(uint64) buffer; /* IN: If NULL with max_pfns == 0, then
+                                         * max_pfns has the recommended value. */
+    uint64_aligned_t version; /* IN/OUT: If the value differs, prior calls may
+                               * have stale data. */
+    uint64_aligned_t num_pfns; /* OUT: Number (up to max_pfns) copied. */
 };
 typedef struct xen_domctl_getmemlist xen_domctl_getmemlist_t;
 DEFINE_XEN_GUEST_HANDLE(xen_domctl_getmemlist_t);
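
For completeness, here is a rough caller-side sketch (not part of the patch)
of the protocol documented in the domctl.h comment above: probe for the
recommended chunk size, then loop on 'index' until no more MFNs come back,
restarting whenever 'version' changes. The get_memlist() wrapper below is
hypothetical - a real consumer would go through libxc and hypercall-safe
buffers, as in the follow-up "libxc: Use XENDOMCTL_get_memlist properly"
patch.

/*
 * Hypothetical caller-side sketch of the XEN_DOMCTL_getmemlist protocol.
 * get_memlist() stands in for whatever actually issues the domctl; it is
 * assumed to behave as documented above:
 *  - on entry *count is max_pfns (0 with mfns == NULL probes the limit),
 *  - on success (0) *count is num_pfns and *version is the P2M version,
 *  - the probe returns -E2BIG with *count set to the recommended size.
 */
#include <errno.h>
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef uint16_t domid_t;   /* normally provided by the Xen public headers */

int get_memlist(domid_t domid, uint64_t index, uint64_t *mfns,
                uint64_t *count, uint64_t *version);

int dump_guest_mfns(domid_t domid)
{
    uint64_t max = 0, n, index = 0, version = 0, prev = 0;
    uint64_t *mfns;
    bool have_version = false;
    int rc;

    /* Probe the recommended chunk size: max_pfns == 0, buffer == NULL. */
    rc = get_memlist(domid, 0, NULL, &max, &version);
    if ( rc != -E2BIG )
        return rc ? rc : -EPROTO;

    mfns = malloc(max * sizeof(*mfns));
    if ( !mfns )
        return -ENOMEM;

    for ( ;; )
    {
        n = max;
        rc = get_memlist(domid, index, mfns, &n, &version);
        if ( rc )
            break;

        if ( have_version && version != prev )
        {
            /* The P2M changed under us: discard everything and restart. */
            index = 0;
            have_version = false;
            continue;
        }
        prev = version;
        have_version = true;

        if ( n == 0 )       /* Nothing left at this index: we are done. */
            break;

        for ( uint64_t i = 0; i < n; i++ )
            printf("mfn[%" PRIu64 "] = %#" PRIx64 "\n", index + i, mfns[i]);

        /*
         * A short read (n < max) does not necessarily mean the end of the
         * list; the hypercall may have been preempted, so just advance
         * 'index' and go around again.
         */
        index += n;
    }

    free(mfns);
    return rc;
}

The restart-on-version-change policy mirrors what the domctl.h comment
recommends; a consumer that only needs an approximate picture could instead
simply note that the snapshot is stale and carry on.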