From patchwork Mon Jul 25 18:33:07 2016
X-Patchwork-Submitter: Tamas Lengyel
X-Patchwork-Id: 9246323
From: Tamas K Lengyel
To: xen-devel@lists.xenproject.org
Date: Mon, 25 Jul 2016 12:33:07 -0600
Message-Id: <1469471587-12921-1-git-send-email-tamas.lengyel@zentific.com>
X-Mailer: git-send-email 2.8.1
Cc: George Dunlap, Tamas K Lengyel, Jan Beulich, Andrew Cooper
Subject: [Xen-devel] [PATCH v6] altp2m: Allow shared entries to be copied to altp2m views during lazycopy

Move the sharing locks above the altp2m locks to avoid a locking-order
violation that crashes the hypervisor during unsharing operations while
altp2m is active. Applying mem_access settings or remapping gfns in
altp2m views automatically unshares the page if it was previously
shared; for this we use the get_gfn_type_access() wrapper (rather than
calling ->get_entry() directly) so that unsharing is properly
initiated. Also disallow nominating pages for which pre-existing altp2m
mem_access settings or remappings are present. However, allow altp2m
views to be populated with shared entries during lazycopy, as unsharing
automatically propagates the change to those entries in the altp2m
views as well.

Signed-off-by: Tamas K Lengyel
Reviewed-by: George Dunlap
---
Cc: George Dunlap
Cc: Jan Beulich
Cc: Andrew Cooper

v6: Lock the altp2m list when checking during nomination
    Update commit message
---
 xen/arch/x86/mm/mem_sharing.c | 32 +++++++++++++++++++++++++++++++-
 xen/arch/x86/mm/mm-locks.h    | 30 +++++++++++++++---------------
 xen/arch/x86/mm/p2m.c         | 12 +++++++-----
 3 files changed, 53 insertions(+), 21 deletions(-)
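For reference, the locking-order discipline the mm-locks.h reordering
below relies on can be shown with a small stand-alone sketch (plain C
with pthreads; ordered_lock, take and drop are illustrative names,
none of this is Xen code). In mm-locks.h a lock's position in the file
fixes its place in the global order; here a numeric level plays the
same role, and taking locks out of order trips an assertion, the
userspace analogue of the hypervisor crash described above.

#include <assert.h>
#include <pthread.h>
#include <stdio.h>

struct ordered_lock {
    pthread_mutex_t mu;
    int level;                  /* fixed position in the global order */
};

/* Deepest level held by this thread; simplified to strictly nested,
 * consecutively-leveled acquire/release pairs. */
static _Thread_local int cur_level;

static void take(struct ordered_lock *l)
{
    /* An order violation here corresponds to a BUG() in the hypervisor. */
    assert(l->level > cur_level);
    pthread_mutex_lock(&l->mu);
    cur_level = l->level;
}

static void drop(struct ordered_lock *l)
{
    cur_level = l->level - 1;   /* assumes LIFO release */
    pthread_mutex_unlock(&l->mu);
}

/* Sharing now sits above (i.e. is taken before) altp2m. */
static struct ordered_lock sharing = { .mu = PTHREAD_MUTEX_INITIALIZER,
                                       .level = 1 };
static struct ordered_lock altp2m  = { .mu = PTHREAD_MUTEX_INITIALIZER,
                                       .level = 2 };

int main(void)
{
    take(&sharing);             /* sharing first: allowed    */
    take(&altp2m);              /* then altp2m: still fine   */
    drop(&altp2m);
    drop(&sharing);
    /* Reversing the two take() calls would trip the assert. */
    puts("lock order respected");
    return 0;
}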
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index a522423..47e0820 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -33,6 +33,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -828,14 +829,16 @@ int mem_sharing_nominate_page(struct domain *d,
                               int expected_refcnt,
                               shr_handle_t *phandle)
 {
+    struct p2m_domain *hp2m = p2m_get_hostp2m(d);
     p2m_type_t p2mt;
+    p2m_access_t p2ma;
     mfn_t mfn;
     struct page_info *page = NULL; /* gcc... */
     int ret;

     *phandle = 0UL;

-    mfn = get_gfn(d, gfn, &p2mt);
+    mfn = get_gfn_type_access(hp2m, gfn, &p2mt, &p2ma, 0, NULL);

     /* Check if mfn is valid */
     ret = -EINVAL;
@@ -861,6 +864,33 @@ int mem_sharing_nominate_page(struct domain *d,
     if ( !p2m_is_sharable(p2mt) )
         goto out;

+    /* Check if there are mem_access/remapped altp2m entries for this page */
+    if ( altp2m_active(d) )
+    {
+        unsigned int i;
+        struct p2m_domain *ap2m;
+        mfn_t amfn;
+        p2m_access_t ap2ma;
+
+        altp2m_list_lock(d);
+
+        for ( i = 0; i < MAX_ALTP2M; i++ )
+        {
+            ap2m = d->arch.altp2m_p2m[i];
+            if ( !ap2m )
+                continue;
+
+            amfn = get_gfn_type_access(ap2m, gfn, NULL, &ap2ma, 0, NULL);
+            if ( mfn_valid(amfn) && (mfn_x(amfn) != mfn_x(mfn) || ap2ma != p2ma) )
+            {
+                altp2m_list_unlock(d);
+                goto out;
+            }
+        }
+
+        altp2m_list_unlock(d);
+    }
+
     /* Try to convert the mfn to the sharable type */
     page = mfn_to_page(mfn);
     ret = page_make_sharable(d, page, expected_refcnt);
diff --git a/xen/arch/x86/mm/mm-locks.h b/xen/arch/x86/mm/mm-locks.h
index 086c8bb..74fdfc1 100644
--- a/xen/arch/x86/mm/mm-locks.h
+++ b/xen/arch/x86/mm/mm-locks.h
@@ -242,6 +242,21 @@ declare_mm_lock(nestedp2m)

 declare_mm_rwlock(p2m);

+/* Sharing per page lock
+ *
+ * This is an external lock, not represented by an mm_lock_t. The memory
+ * sharing lock uses it to protect addition and removal of (gfn,domain)
+ * tuples to a shared page. We enforce order here against the p2m lock,
+ * which is taken after the page_lock to change the gfn's p2m entry.
+ *
+ * The lock is recursive because during share we lock two pages. */
+
+declare_mm_order_constraint(per_page_sharing)
+#define page_sharing_mm_pre_lock()   mm_enforce_order_lock_pre_per_page_sharing()
+#define page_sharing_mm_post_lock(l, r) \
+        mm_enforce_order_lock_post_per_page_sharing((l), (r))
+#define page_sharing_mm_unlock(l, r) mm_enforce_order_unlock((l), (r))
+
 /* Alternate P2M list lock (per-domain)
  *
  * A per-domain lock that protects the list of alternate p2m's.
@@ -287,21 +302,6 @@ declare_mm_rwlock(altp2m);
 #define p2m_locked_by_me(p)   mm_write_locked_by_me(&(p)->lock)
 #define gfn_locked_by_me(p,g) p2m_locked_by_me(p)

-/* Sharing per page lock
- *
- * This is an external lock, not represented by an mm_lock_t. The memory
- * sharing lock uses it to protect addition and removal of (gfn,domain)
- * tuples to a shared page. We enforce order here against the p2m lock,
- * which is taken after the page_lock to change the gfn's p2m entry.
- *
- * The lock is recursive because during share we lock two pages. */
-
-declare_mm_order_constraint(per_page_sharing)
-#define page_sharing_mm_pre_lock()   mm_enforce_order_lock_pre_per_page_sharing()
-#define page_sharing_mm_post_lock(l, r) \
-        mm_enforce_order_lock_post_per_page_sharing((l), (r))
-#define page_sharing_mm_unlock(l, r) mm_enforce_order_unlock((l), (r))
-
 /* PoD lock (per-p2m-table)
  *
  * Protects private PoD data structs: entry and cache
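The nomination check added to mem_sharing.c above reduces to a simple
rule: a gfn may only be nominated for sharing if no altp2m view holds
a remapped mfn or a divergent mem_access setting for it. A minimal
sketch of that rule follows, using simplified stand-in types rather
than Xen's (view_entry, nomination_allowed and MAX_VIEWS are
hypothetical names invented for illustration):

#include <stdbool.h>
#include <stddef.h>

#define MAX_VIEWS 10                /* stand-in for MAX_ALTP2M */

struct view_entry {
    unsigned long mfn;              /* 0 == no entry for this gfn yet */
    int access;                     /* stand-in for p2m_access_t      */
};

static bool nomination_allowed(const struct view_entry *views[MAX_VIEWS],
                               unsigned long host_mfn, int host_access)
{
    for ( size_t i = 0; i < MAX_VIEWS; i++ )
    {
        /* Skip unallocated views and views with no entry for the gfn. */
        if ( !views[i] || !views[i]->mfn )
            continue;

        /* A remapped mfn or divergent access setting blocks nomination. */
        if ( views[i]->mfn != host_mfn || views[i]->access != host_access )
            return false;
    }
    return true;
}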
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index ff0cce8..812dbf6 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1786,8 +1786,9 @@ int p2m_set_altp2m_mem_access(struct domain *d, struct p2m_domain *hp2m,
     /* Check host p2m if no valid entry in alternate */
     if ( !mfn_valid(mfn) )
     {
-        mfn = hp2m->get_entry(hp2m, gfn_l, &t, &old_a,
-                              P2M_ALLOC | P2M_UNSHARE, &page_order, NULL);
+
+        mfn = get_gfn_type_access(hp2m, gfn_l, &t, &old_a,
+                                  P2M_ALLOC | P2M_UNSHARE, &page_order);

         rc = -ESRCH;
         if ( !mfn_valid(mfn) || t != p2m_ram_rw )
@@ -2363,7 +2364,7 @@ bool_t p2m_altp2m_lazy_copy(struct vcpu *v, paddr_t gpa,
         return 0;

     mfn = get_gfn_type_access(hp2m, gfn_x(gfn), &p2mt, &p2ma,
-                              P2M_ALLOC | P2M_UNSHARE, &page_order);
+                              P2M_ALLOC, &page_order);
     __put_gfn(hp2m, gfn_x(gfn));

     if ( mfn_eq(mfn, INVALID_MFN) )
@@ -2562,8 +2563,8 @@ int p2m_change_altp2m_gfn(struct domain *d, unsigned int idx,
     /* Check host p2m if no valid entry in alternate */
     if ( !mfn_valid(mfn) )
     {
-        mfn = hp2m->get_entry(hp2m, gfn_x(old_gfn), &t, &a,
-                              P2M_ALLOC | P2M_UNSHARE, &page_order, NULL);
+        mfn = get_gfn_type_access(hp2m, gfn_x(old_gfn), &t, &a,
+                                  P2M_ALLOC | P2M_UNSHARE, &page_order);

         if ( !mfn_valid(mfn) || t != p2m_ram_rw )
             goto out;
@@ -2588,6 +2589,7 @@ int p2m_change_altp2m_gfn(struct domain *d, unsigned int idx,
     if ( !mfn_valid(mfn) )
         mfn = hp2m->get_entry(hp2m, gfn_x(new_gfn), &t, &a, 0, NULL, NULL);

+    /* Note: currently it is not safe to remap to a shared entry */
     if ( !mfn_valid(mfn) || (t != p2m_ram_rw) )
         goto out;
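To see why dropping P2M_UNSHARE on the lazycopy path is what allows
shared entries into altp2m views, here is a toy model of the lookup
flag's effect (toy_entry and toy_get_gfn are invented for illustration
and are not Xen APIs): a lookup that forces unsharing replaces the
shared frame with a private copy on the spot, while a plain lookup
returns the shared frame itself, which the view can then alias until a
later unshare propagates the change.

#include <stdbool.h>
#include <stdio.h>

struct toy_entry {
    unsigned long mfn;
    bool shared;                      /* backed by a shared frame?   */
};

static unsigned long next_free_mfn = 100;   /* pretend frame allocator */

static unsigned long toy_get_gfn(struct toy_entry *e, bool unshare)
{
    if ( e->shared && unshare )
    {
        e->mfn = next_free_mfn++;     /* break sharing: private copy */
        e->shared = false;
    }
    return e->mfn;                    /* shared frame returned as-is */
}

int main(void)
{
    struct toy_entry e = { .mfn = 42, .shared = true };

    /* Lazycopy-style lookup: the view gets the shared frame (42). */
    printf("P2M_ALLOC only:   mfn %lu\n", toy_get_gfn(&e, false));

    /* Unsharing lookup: sharing is broken, a private copy appears. */
    printf("with P2M_UNSHARE: mfn %lu\n", toy_get_gfn(&e, true));
    return 0;
}

The same asymmetry explains the note added in p2m_change_altp2m_gfn():
remapping targets are looked up without unsharing, so landing on a
still-shared entry there is currently not safe.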