From patchwork Thu Dec 17 16:31:58 2015
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 7874291
From: Julien Grall <julien.grall@citrix.com>
To: xen-devel@lists.xenproject.org
Date: Thu, 17 Dec 2015 16:31:58 +0000
Message-ID: <1450369919-22989-3-git-send-email-julien.grall@citrix.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1450369919-22989-1-git-send-email-julien.grall@citrix.com>
References: <1450369919-22989-1-git-send-email-julien.grall@citrix.com>
Cc: Keir Fraser, ian.campbell@citrix.com, stefano.stabellini@eu.citrix.com,
    Tim Deegan, Ian Jackson, Julien Grall, Jan Beulich
Subject: [Xen-devel] [RFC 2/3] xen/common: memory: Add support for direct
    mapped domain in XENMEM_exchange

A direct mapped domain needs to retrieve the exact same underlying physical
pages when a region is re-populated. Therefore, when memory is exchanged for
a direct mapped domain, we want neither to free the memory of the previous
region nor to allocate new memory.
Note that, because of this, the hypercall XENMEM_exchange can only work on
memory regions that were populated with real RAM when the domain was created.

Signed-off-by: Julien Grall <julien.grall@citrix.com>

---
Cc: Ian Campbell
Cc: Ian Jackson
Cc: Jan Beulich
Cc: Keir Fraser
Cc: Tim Deegan
---
 xen/common/memory.c | 133 +++++++++++++++++++++++++++++++++++----------------
 1 file changed, 90 insertions(+), 43 deletions(-)

diff --git a/xen/common/memory.c b/xen/common/memory.c
index ac707e9..94c9a78 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -517,10 +517,19 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
                 page = mfn_to_page(mfn);
 
-                if ( unlikely(steal_page(d, page, MEMF_no_refcount)) )
+                if ( is_domain_direct_mapped(d) )
                 {
-                    put_gfn(d, gmfn + k);
+                    if ( !get_page(page, d) )
+                        rc = -EINVAL;
+                    else
+                        put_page(page);
+                }
+                else if ( unlikely(steal_page(d, page, MEMF_no_refcount)) )
                     rc = -EINVAL;
+
+                if ( unlikely(rc) )
+                {
+                    put_gfn(d, gmfn + k);
                     goto fail;
                 }
 
                 page_list_add(page, &in_chunk_list);
@@ -530,17 +539,20 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
         }
 
         /* Allocate a chunk's worth of anonymous output pages. */
-        for ( j = 0; j < (1UL << out_chunk_order); j++ )
+        if ( !is_domain_direct_mapped(d) )
         {
-            page = alloc_domheap_pages(d, exch.out.extent_order,
-                                       MEMF_no_owner | memflags);
-            if ( unlikely(page == NULL) )
+            for ( j = 0; j < (1UL << out_chunk_order); j++ )
             {
-                rc = -ENOMEM;
-                goto fail;
-            }
+                page = alloc_domheap_pages(d, exch.out.extent_order,
+                                           MEMF_no_owner | memflags);
+                if ( unlikely(page == NULL) )
+                {
+                    rc = -ENOMEM;
+                    goto fail;
+                }
 
-            page_list_add(page, &out_chunk_list);
+                page_list_add(page, &out_chunk_list);
+            }
         }
 
         /*
@@ -552,47 +564,26 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
         {
             unsigned long gfn;
 
-            if ( !test_and_clear_bit(_PGC_allocated, &page->count_info) )
+            if ( !is_domain_direct_mapped(d) &&
+                 !test_and_clear_bit(_PGC_allocated, &page->count_info) )
                 BUG();
             mfn = page_to_mfn(page);
             gfn = mfn_to_gmfn(d, mfn);
             /* Pages were unshared above */
             BUG_ON(SHARED_M2P(gfn));
             guest_physmap_remove_page(d, gfn, mfn, 0);
-            put_page(page);
+
+            /*
+             * For domain direct mapped, we want to be able to get
+             * the same page later, so don't deallocate it
+             */
+            if ( !is_domain_direct_mapped(d) )
+                put_page(page);
         }
 
         /* Assign each output page to the domain. */
-        for ( j = 0; (page = page_list_remove_head(&out_chunk_list)); ++j )
+        for ( j = 0; j < (1UL << out_chunk_order); j++ )
         {
-            if ( assign_pages(d, page, exch.out.extent_order,
-                              MEMF_no_refcount) )
-            {
-                unsigned long dec_count;
-                bool_t drop_dom_ref;
-
-                /*
-                 * Pages in in_chunk_list is stolen without
-                 * decreasing the tot_pages. If the domain is dying when
-                 * assign pages, we need decrease the count. For those pages
-                 * that has been assigned, it should be covered by
-                 * domain_relinquish_resources().
-                 */
-                dec_count = (((1UL << exch.in.extent_order) *
-                              (1UL << in_chunk_order)) -
-                             (j * (1UL << exch.out.extent_order)));
-
-                spin_lock(&d->page_alloc_lock);
-                drop_dom_ref = (dec_count &&
-                                !domain_adjust_tot_pages(d, -dec_count));
-                spin_unlock(&d->page_alloc_lock);
-
-                if ( drop_dom_ref )
-                    put_domain(d);
-
-                free_domheap_pages(page, exch.out.extent_order);
-                goto dying;
-            }
 
             if ( __copy_from_guest_offset(&gpfn, exch.out.extent_start,
                                           (i << out_chunk_order) + j, 1) )
@@ -601,7 +592,61 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
             {
                 rc = -EFAULT;
                 continue;
             }
 
-            mfn = page_to_mfn(page);
+            if ( is_domain_direct_mapped(d) )
+            {
+                if ( unlikely(d->is_dying) )
+                {
+                    gdprintk(XENLOG_INFO,
+                             "Cannot assign page to domain %d -- dying.\n",
+                             d->domain_id);
+                    goto dying;
+                }
+
+                if ( !check_range_domain_direct_mapped(d, gpfn,
+                                                       exch.out.extent_order) )
+                    goto dying;
+
+                mfn = gpfn;
+            }
+            else
+            {
+                page = page_list_remove_head(&out_chunk_list);
+
+                /* The outchunk list should always contain enough page */
+                BUG_ON(!page);
+
+                if ( assign_pages(d, page, exch.out.extent_order,
+                                  MEMF_no_refcount) )
+                {
+                    unsigned long dec_count;
+                    bool_t drop_dom_ref;
+
+                    /*
+                     * Pages in in_chunk_list is stolen without
+                     * decreasing the tot_pages. If the domain is dying when
+                     * assign pages, we need decrease the count. For those pages
+                     * that has been assigned, it should be covered by
+                     * domain_relinquish_resources().
+                     */
+                    dec_count = (((1UL << exch.in.extent_order) *
+                                  (1UL << in_chunk_order)) -
+                                 (j * (1UL << exch.out.extent_order)));
+
+                    spin_lock(&d->page_alloc_lock);
+                    drop_dom_ref = (dec_count &&
+                                    !domain_adjust_tot_pages(d, -dec_count));
+                    spin_unlock(&d->page_alloc_lock);
+
+                    if ( drop_dom_ref )
+                        put_domain(d);
+
+                    free_domheap_pages(page, exch.out.extent_order);
+                    goto dying;
+                }
+
+                mfn = page_to_mfn(page);
+            }
+
             guest_physmap_add_page(d, gpfn, mfn, exch.out.extent_order);
 
             if ( !paging_mode_translate(d) )
@@ -630,7 +675,8 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
  fail:
     /* Reassign any input pages we managed to steal. */
     while ( (page = page_list_remove_head(&in_chunk_list)) )
-        if ( assign_pages(d, page, 0, MEMF_no_refcount) )
+        if ( is_domain_direct_mapped(d) &&
+             assign_pages(d, page, 0, MEMF_no_refcount) )
         {
             BUG_ON(!d->is_dying);
             if ( test_and_clear_bit(_PGC_allocated, &page->count_info) )
@@ -640,6 +686,7 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
  dying:
     rcu_unlock_domain(d);
     /* Free any output pages we managed to allocate. */
+    BUG_ON(is_domain_direct_mapped(d) && !page_list_empty(&out_chunk_list));
     while ( (page = page_list_remove_head(&out_chunk_list)) )
         free_domheap_pages(page, exch.out.extent_order);
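
For illustration only (not part of this patch), below is roughly how a guest
reaches the code path above. It is a minimal guest-side sketch assuming a
Linux-style HYPERVISOR_memory_op() wrapper, the structure layout of Xen's
public memory.h, and a hypothetical helper name exchange_for_contiguous();
error handling is trimmed. The point relevant to this series: for a direct
mapped domain the hypervisor now hands back the original machine frames
instead of freshly allocated ones, so the GFNs named in "out" must lie within
the RAM ranges the domain was created with, which is what
check_range_domain_direct_mapped() enforces.

/*
 * Hypothetical guest-side sketch: trade four scattered 4KiB frames for one
 * 16KiB (order-2) contiguous extent appearing at *out_gfn. Header paths and
 * the HYPERVISOR_memory_op() wrapper follow Linux conventions; adjust for
 * your environment.
 */
#include <xen/interface/xen.h>      /* DOMID_SELF, xen_pfn_t */
#include <xen/interface/memory.h>   /* struct xen_memory_exchange, XENMEM_exchange */
#include <asm/xen/hypercall.h>      /* HYPERVISOR_memory_op() */

static int exchange_for_contiguous(xen_pfn_t in_gfns[4], xen_pfn_t *out_gfn)
{
    struct xen_memory_exchange exchange = {
        .in = {
            .nr_extents   = 4,
            .extent_order = 0,          /* four single 4KiB frames in */
            .domid        = DOMID_SELF,
        },
        .out = {
            .nr_extents   = 1,
            .extent_order = 2,          /* one 16KiB contiguous extent out */
            .domid        = DOMID_SELF,
        },
    };
    int rc;

    /* "in": GFNs to give up; "out": where the exchanged extent should appear. */
    set_xen_guest_handle(exchange.in.extent_start, in_gfns);
    set_xen_guest_handle(exchange.out.extent_start, out_gfn);

    rc = HYPERVISOR_memory_op(XENMEM_exchange, &exchange);
    if ( rc )
        return rc;

    /* Paranoia: a successful return should have exchanged every input extent. */
    return (exchange.nr_exchanged == 4) ? 0 : -1;
}

On a direct mapped domain (e.g. ARM dom0) such a call effectively becomes a
"re-populate the same frames" operation: the input and output regions describe
RAM the domain already owned at creation time, and no new host memory changes
hands.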