From patchwork Fri Apr 12 19:47:12 2019
X-Patchwork-Submitter: Tamas K Lengyel
X-Patchwork-Id: 10899095
From: Tamas K Lengyel
To: xen-devel@lists.xenproject.org
Date: Fri, 12 Apr 2019 13:47:12 -0600
Message-Id: <20190412194712.1602-2-tamas@tklengyel.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190412194712.1602-1-tamas@tklengyel.com>
References: <20190412194712.1602-1-tamas@tklengyel.com>
Subject: [Xen-devel] [PATCH v2 2/2] x86/mem_sharing: replace using page_lock with our own lock
Cc: Tamas K Lengyel, Wei Liu, George Dunlap, Andrew Cooper, Jan Beulich, Roger Pau Monne

The page_lock system was not intended to be used outside the PV pagetable
code. Replace its use in mem_sharing with an internal rwlock.

Signed-off-by: Tamas K Lengyel
Cc: George Dunlap
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: Wei Liu
Cc: Roger Pau Monne
---
 xen/arch/x86/mm/mem_sharing.c     | 88 ++++++++++++-------------------
 xen/include/asm-x86/mem_sharing.h |  1 +
 2 files changed, 34 insertions(+), 55 deletions(-)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 708037d91e..0afbff1d89 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -50,7 +50,7 @@ typedef struct pg_lock_data {
 static DEFINE_PER_CPU(pg_lock_data_t, __pld);
 
 #define MEM_SHARING_DEBUG(_f, _a...)                                  \
-    debugtrace_printk("mem_sharing_debug: %s(): " _f, __func__, ##_a)
+    gdprintk(XENLOG_INFO, _f, ##_a)
 
 /* Reverse map defines */
 #define RMAP_HASHTAB_ORDER  0
@@ -112,30 +112,21 @@ static inline void page_sharing_dispose(struct page_info *page)
 
 #endif /* MEM_SHARING_AUDIT */
 
-static inline int mem_sharing_page_lock(struct page_info *pg)
+static inline void mem_sharing_page_lock(struct page_info *pg)
 {
-    int rc;
     pg_lock_data_t *pld = &(this_cpu(__pld));
 
-    page_sharing_mm_pre_lock();
-    rc = page_lock(pg);
-    if ( rc )
-    {
-        preempt_disable();
-        page_sharing_mm_post_lock(&pld->mm_unlock_level,
-                                  &pld->recurse_count);
-    }
-    return rc;
+    write_lock(&pg->sharing->lock);
+    page_sharing_mm_post_lock(&pld->mm_unlock_level, &pld->recurse_count);
 }
 
 static inline void mem_sharing_page_unlock(struct page_info *pg)
 {
     pg_lock_data_t *pld = &(this_cpu(__pld));
+    page_sharing_mm_unlock(pld->mm_unlock_level, &pld->recurse_count);
 
-    page_sharing_mm_unlock(pld->mm_unlock_level,
-                           &pld->recurse_count);
-    preempt_enable();
-    page_unlock(pg);
+    if ( pg->sharing )
+        write_unlock(&pg->sharing->lock);
 }
 
 static inline shr_handle_t get_next_handle(void)
@@ -398,12 +389,8 @@ static struct page_info* mem_sharing_lookup(unsigned long mfn)
     struct page_info* page = mfn_to_page(_mfn(mfn));
     if ( page_get_owner(page) == dom_cow )
     {
-        /* Count has to be at least two, because we're called
-         * with the mfn locked (1) and this is supposed to be
-         * a shared page (1). */
         unsigned long t = read_atomic(&page->u.inuse.type_info);
         ASSERT((t & PGT_type_mask) == PGT_shared_page);
-        ASSERT((t & PGT_count_mask) >= 2);
         ASSERT(SHARED_M2P(get_gpfn_from_mfn(mfn)));
         return page;
     }
@@ -437,14 +424,7 @@ static int audit(void)
         pg = pg_shared_info->pg;
         mfn = page_to_mfn(pg);
 
-        /* If we can't lock it, it's definitely not a shared page */
-        if ( !mem_sharing_page_lock(pg) )
-        {
-            MEM_SHARING_DEBUG("mfn %lx in audit list, but cannot be locked (%lx)!\n",
-                              mfn_x(mfn), pg->u.inuse.type_info);
-            errors++;
-            continue;
-        }
+        mem_sharing_page_lock(pg);
 
         /* Check if the MFN has correct type, owner and handle. */
         if ( (pg->u.inuse.type_info & PGT_type_mask) != PGT_shared_page )
@@ -517,12 +497,12 @@ static int audit(void)
             put_domain(d);
             nr_gfns++;
         }
-        /* The type count has an extra ref because we have locked the page */
-        if ( (nr_gfns + 1) != (pg->u.inuse.type_info & PGT_count_mask) )
+
+        if ( nr_gfns != (pg->u.inuse.type_info & PGT_count_mask) )
         {
             MEM_SHARING_DEBUG("Mismatched counts for MFN=%lx."
                               "nr_gfns in list %lu, in type_info %lx\n",
-                              mfn_x(mfn), nr_gfns,
+                              mfn_x(mfn), nr_gfns,
                               (pg->u.inuse.type_info & PGT_count_mask));
             errors++;
         }
@@ -626,7 +606,6 @@ static int page_make_sharable(struct domain *d,
         drop_dom_ref = !domain_adjust_tot_pages(d, -1);
     page_list_del(page, &d->page_list);
     spin_unlock(&d->page_alloc_lock);
-
     if ( drop_dom_ref )
         put_domain(d);
     return 0;
@@ -652,7 +631,7 @@ static int page_make_private(struct domain *d, struct page_info *page)
     /* Because we are locking pages individually, we need to drop
      * the lock here, while the page is typed. We cannot risk the
      * race of page_unlock and then put_page_type. */
-    expected_type = (PGT_shared_page | PGT_validated | PGT_locked | 2);
+    expected_type = (PGT_shared_page | PGT_validated | 2);
     if ( page->u.inuse.type_info != expected_type )
     {
         spin_unlock(&d->page_alloc_lock);
@@ -688,10 +667,7 @@ static inline struct page_info *__grab_shared_page(mfn_t mfn)
         return NULL;
     pg = mfn_to_page(mfn);
 
-    /* If the page is not validated we can't lock it, and if it's
-     * not validated it's obviously not shared. */
-    if ( !mem_sharing_page_lock(pg) )
-        return NULL;
+    mem_sharing_page_lock(pg);
 
     if ( mem_sharing_lookup(mfn_x(mfn)) == NULL )
     {
@@ -770,6 +746,7 @@ static int nominate_page(struct domain *d, gfn_t gfn,
     p2m_type_t p2mt;
     p2m_access_t p2ma;
     mfn_t mfn;
+    struct page_sharing_info *sharing = NULL;
     struct page_info *page = NULL; /* gcc... */
     int ret;
@@ -833,38 +810,39 @@ static int nominate_page(struct domain *d, gfn_t gfn,
     }
 #endif
 
-    /* Try to convert the mfn to the sharable type */
-    page = mfn_to_page(mfn);
-    ret = page_make_sharable(d, page, expected_refcnt);
-    if ( ret )
-        goto out;
+    /* We hold the gfn lock and this domain has the only reference to this page
+     * so we don't need any other locking, no other domain can use this page
+     * for sharing until we release the gfn lock */
 
-    /* Now that the page is validated, we can lock it. There is no
-     * race because we're holding the p2m entry, so no one else
-     * could be nominating this gfn */
-    ret = -ENOENT;
-    if ( !mem_sharing_page_lock(page) )
+    page = mfn_to_page(mfn);
+    /* Try to convert the mfn to the sharable type */
+    ret = page_make_sharable(d, page, expected_refcnt);
+    if ( ret )
         goto out;
 
     /* Initialize the shared state */
     ret = -ENOMEM;
-    if ( (page->sharing =
-            xmalloc(struct page_sharing_info)) == NULL )
+    sharing = xmalloc(struct page_sharing_info);
+    if ( !sharing )
     {
-        /* Making a page private atomically unlocks it */
-        BUG_ON(page_make_private(d, page) != 0);
+        BUG_ON(!page_make_private(d, page));
         goto out;
     }
-    page->sharing->pg = page;
+
+    rwlock_init(&sharing->lock);
+
+    sharing->pg = page;
+    page->sharing = sharing;
+
     rmap_init(page);
 
     /* Create the handle */
-    page->sharing->handle = get_next_handle();
+    page->sharing->handle = get_next_handle();
 
     /* Create the local gfn info */
     if ( mem_sharing_gfn_alloc(page, d, gfn_x(gfn)) == NULL )
     {
-        xfree(page->sharing);
+        xfree(sharing);
         page->sharing = NULL;
         BUG_ON(page_make_private(d, page) != 0);
         goto out;
@@ -881,7 +859,7 @@ static int nominate_page(struct domain *d, gfn_t gfn,
     *phandle = page->sharing->handle;
     audit_add_list(page);
-    mem_sharing_page_unlock(page);
+
     ret = 0;
 
 out:
diff --git a/xen/include/asm-x86/mem_sharing.h b/xen/include/asm-x86/mem_sharing.h
index 0e77b7d935..03cf651337 100644
--- a/xen/include/asm-x86/mem_sharing.h
+++ b/xen/include/asm-x86/mem_sharing.h
@@ -38,6 +38,7 @@ typedef struct rmap_hashtab {
 
 struct page_sharing_info
 {
+    rwlock_t lock;
     struct page_info *pg;           /* Back pointer to the page. */
     shr_handle_t handle;            /* Globally unique version / handle. */
 #if MEM_SHARING_AUDIT