From patchwork Wed Jul 20 18:55:01 2016
X-Patchwork-Submitter: Tamas Lengyel
X-Patchwork-Id: 9240293
From: Tamas K Lengyel <tamas.lengyel@zentific.com>
To: xen-devel@lists.xenproject.org
Date: Wed, 20 Jul 2016 12:55:01 -0600
Message-Id: <1469040901-14650-1-git-send-email-tamas.lengyel@zentific.com>
X-Mailer: git-send-email 2.8.1
Cc: George Dunlap, Tamas K Lengyel, Jan Beulich, Andrew Cooper
Subject: [Xen-devel] [PATCH v5] altp2m: Allow shared entries to be copied to altp2m views during lazycopy

Move the sharing locks above altp2m to avoid a locking order violation
that crashes the hypervisor during unsharing operations while altp2m is
active. Applying mem_access settings or remapping gfns in altp2m views
automatically unshares the page if it was previously shared. Also,
disallow nominating pages for which pre-existing altp2m mem_access
settings or remappings are present. However, allow lazycopy to populate
altp2m views with shared entries, as unsharing automatically propagates
the change to those entries in the altp2m views as well.

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
---
Cc: George Dunlap
Cc: Jan Beulich
Cc: Andrew Cooper

v5: Allow only lazycopy to copy shared entries to altp2m views
    Use get_gfn_type_access for unsharing, as get_entry doesn't do that
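To make the nominate-time restriction concrete, here is a minimal
toolstack-side sketch of the situation the new check rejects. This is
illustrative only and not part of the patch: it assumes a 4.7-era libxc,
a domain that already has sharing enabled via xc_memshr_control(), and
an activated altp2m view; demo(), domid, view_id and gfn are placeholder
names and values.

    #include <stdio.h>
    #include <xenctrl.h>

    /* Illustrative sketch, not part of this patch. */
    static int demo(xc_interface *xch, uint32_t domid, uint16_t view_id,
                    unsigned long gfn)
    {
        uint64_t handle;
        int rc;

        /* Give the gfn an execute-only setting in one altp2m view, so
         * its altp2m entry now diverges from the host p2m's. */
        rc = xc_altp2m_set_mem_access(xch, domid, view_id, gfn,
                                      XENMEM_access_x);
        if ( rc )
            return rc;

        /* With this patch, mem_sharing_nominate_page() spots the
         * divergent altp2m entry and bails with -EINVAL, which libxc
         * surfaces as a negative return with errno set. */
        rc = xc_memshr_nominate_gpfn(xch, domid, gfn, &handle);
        if ( rc < 0 )
            fprintf(stderr, "nominate refused, as expected\n");

        return 0;
    }

Refusing the nomination up front keeps such a page from ever becoming
shared, so a later unshare cannot propagate the host p2m entry over the
view's customized one.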
---
 xen/arch/x86/mm/mem_sharing.c | 25 ++++++++++++++++++++++++-
 xen/arch/x86/mm/mm-locks.h    | 30 +++++++++++++++---------------
 xen/arch/x86/mm/p2m.c         | 12 +++++++-----
 3 files changed, 46 insertions(+), 21 deletions(-)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index a522423..3939cd0 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -33,6 +33,7 @@
 #include
 #include
 #include
+#include <asm/altp2m.h>
 #include
 #include
 #include
@@ -828,20 +829,42 @@ int mem_sharing_nominate_page(struct domain *d,
                               unsigned long gfn,
                               int expected_refcnt,
                               shr_handle_t *phandle)
 {
+    struct p2m_domain *hp2m = p2m_get_hostp2m(d);
     p2m_type_t p2mt;
+    p2m_access_t p2ma;
     mfn_t mfn;
     struct page_info *page = NULL; /* gcc... */
     int ret;
 
     *phandle = 0UL;
 
-    mfn = get_gfn(d, gfn, &p2mt);
+    mfn = get_gfn_type_access(hp2m, gfn, &p2mt, &p2ma, 0, NULL);
 
     /* Check if mfn is valid */
     ret = -EINVAL;
     if ( !mfn_valid(mfn) )
         goto out;
 
+    /* Check if there are mem_access/remapped altp2m entries for this page */
+    if ( altp2m_active(d) )
+    {
+        unsigned int i;
+        struct p2m_domain *ap2m;
+        mfn_t amfn;
+        p2m_access_t ap2ma;
+
+        for ( i = 0; i < MAX_ALTP2M; i++ )
+        {
+            ap2m = d->arch.altp2m_p2m[i];
+            if ( !ap2m )
+                continue;
+
+            amfn = get_gfn_type_access(ap2m, gfn, NULL, &ap2ma, 0, NULL);
+            if ( mfn_valid(amfn) && (mfn_x(amfn) != mfn_x(mfn) || ap2ma != p2ma) )
+                goto out;
+        }
+    }
+
     /* Return the handle if the page is already shared */
     if ( p2m_is_shared(p2mt) )
     {
         struct page_info *pg = __grab_shared_page(mfn);
diff --git a/xen/arch/x86/mm/mm-locks.h b/xen/arch/x86/mm/mm-locks.h
index 086c8bb..74fdfc1 100644
--- a/xen/arch/x86/mm/mm-locks.h
+++ b/xen/arch/x86/mm/mm-locks.h
@@ -242,6 +242,21 @@ declare_mm_lock(nestedp2m)
 
 declare_mm_rwlock(p2m);
 
+/* Sharing per page lock
+ *
+ * This is an external lock, not represented by an mm_lock_t. The memory
+ * sharing lock uses it to protect addition and removal of (gfn,domain)
+ * tuples to a shared page. We enforce order here against the p2m lock,
+ * which is taken after the page_lock to change the gfn's p2m entry.
+ *
+ * The lock is recursive because during share we lock two pages. */
+
+declare_mm_order_constraint(per_page_sharing)
+#define page_sharing_mm_pre_lock()   mm_enforce_order_lock_pre_per_page_sharing()
+#define page_sharing_mm_post_lock(l, r) \
+        mm_enforce_order_lock_post_per_page_sharing((l), (r))
+#define page_sharing_mm_unlock(l, r) mm_enforce_order_unlock((l), (r))
+
 /* Alternate P2M list lock (per-domain)
  *
  * A per-domain lock that protects the list of alternate p2m's.
@@ -287,21 +302,6 @@ declare_mm_rwlock(altp2m);
 #define p2m_locked_by_me(p)   mm_write_locked_by_me(&(p)->lock)
 #define gfn_locked_by_me(p,g) p2m_locked_by_me(p)
 
-/* Sharing per page lock
- *
- * This is an external lock, not represented by an mm_lock_t. The memory
- * sharing lock uses it to protect addition and removal of (gfn,domain)
- * tuples to a shared page. We enforce order here against the p2m lock,
- * which is taken after the page_lock to change the gfn's p2m entry.
- *
- * The lock is recursive because during share we lock two pages. */
-
-declare_mm_order_constraint(per_page_sharing)
-#define page_sharing_mm_pre_lock()   mm_enforce_order_lock_pre_per_page_sharing()
-#define page_sharing_mm_post_lock(l, r) \
-        mm_enforce_order_lock_post_per_page_sharing((l), (r))
-#define page_sharing_mm_unlock(l, r) mm_enforce_order_unlock((l), (r))
-
 /* PoD lock (per-p2m-table)
  *
  * Protects private PoD data structs: entry and cache
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index ff0cce8..812dbf6 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1786,8 +1786,9 @@ int p2m_set_altp2m_mem_access(struct domain *d, struct p2m_domain *hp2m,
     /* Check host p2m if no valid entry in alternate */
     if ( !mfn_valid(mfn) )
     {
-        mfn = hp2m->get_entry(hp2m, gfn_l, &t, &old_a,
-                              P2M_ALLOC | P2M_UNSHARE, &page_order, NULL);
+
+        mfn = get_gfn_type_access(hp2m, gfn_l, &t, &old_a,
+                                  P2M_ALLOC | P2M_UNSHARE, &page_order);
 
         rc = -ESRCH;
         if ( !mfn_valid(mfn) || t != p2m_ram_rw )
@@ -2363,7 +2364,7 @@ bool_t p2m_altp2m_lazy_copy(struct vcpu *v, paddr_t gpa,
         return 0;
 
     mfn = get_gfn_type_access(hp2m, gfn_x(gfn), &p2mt, &p2ma,
-                              P2M_ALLOC | P2M_UNSHARE, &page_order);
+                              P2M_ALLOC, &page_order);
     __put_gfn(hp2m, gfn_x(gfn));
 
     if ( mfn_eq(mfn, INVALID_MFN) )
@@ -2562,8 +2563,8 @@ int p2m_change_altp2m_gfn(struct domain *d, unsigned int idx,
     /* Check host p2m if no valid entry in alternate */
     if ( !mfn_valid(mfn) )
     {
-        mfn = hp2m->get_entry(hp2m, gfn_x(old_gfn), &t, &a,
-                              P2M_ALLOC | P2M_UNSHARE, &page_order, NULL);
+        mfn = get_gfn_type_access(hp2m, gfn_x(old_gfn), &t, &a,
+                                  P2M_ALLOC | P2M_UNSHARE, &page_order);
 
         if ( !mfn_valid(mfn) || t != p2m_ram_rw )
             goto out;
@@ -2588,6 +2589,7 @@ int p2m_change_altp2m_gfn(struct domain *d, unsigned int idx,
     if ( !mfn_valid(mfn) )
         mfn = hp2m->get_entry(hp2m, gfn_x(new_gfn), &t, &a, 0, NULL, NULL);
 
+    /* Note: currently it is not safe to remap to a shared entry */
     if ( !mfn_valid(mfn) || (t != p2m_ram_rw) )
         goto out;
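
As background on the mm-locks.h half of the change: that header enforces
a global discipline where locks may only be acquired in their order of
declaration. The following stand-alone model (illustrative names only,
not the actual Xen macros) shows why declaring the per-page sharing
order constraint above altp2m matters:

    #include <assert.h>

    /* Each lock gets an ordering level; in mm-locks.h the declaration
     * order defines these implicitly. After this patch the sharing
     * order constraint sits above altp2m, i.e. at a lower level. */
    enum { LEVEL_SHARING = 1, LEVEL_ALTP2M = 2, LEVEL_P2M = 3 };

    static __thread int cur_level;  /* deepest level currently held */

    static void ordered_lock(int level)
    {
        /* Taking a lock at or below the current level is an ordering
         * violation; the real macros BUG() here, which is the crash
         * the commit message refers to. */
        assert(level > cur_level);
        cur_level = level;
        /* ... acquire the underlying lock ... */
    }

    static void ordered_unlock(int restore_level)
    {
        /* ... release the underlying lock ... */
        cur_level = restore_level;
    }

    /* Unsharing with altp2m active holds the per-page sharing lock and
     * then has to touch altp2m state: */
    static void unshare_path(void)
    {
        ordered_lock(LEVEL_SHARING);
        ordered_lock(LEVEL_ALTP2M);      /* legal only if SHARING < ALTP2M */
        ordered_unlock(LEVEL_SHARING);   /* drop the altp2m lock */
        ordered_unlock(0);               /* drop the sharing lock */
    }

With the old declaration order altp2m sat above sharing, so the second
acquisition in unshare_path() tripped the ordering check; moving the
sharing constraint above altp2m makes the sharing-then-altp2m nesting
legal.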