From patchwork Tue Jun 7 07:30:30 2022
X-Patchwork-Submitter: Penny Zheng
X-Patchwork-Id: 12871469
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Penny Zheng, Andrew Cooper, George Dunlap,
    Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu, Penny Zheng
Subject: [PATCH v6 8/9] xen: introduce prepare_staticmem_pages
Date: Tue, 7 Jun 2022 15:30:30 +0800
Message-Id: <20220607073031.722174-9-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220607073031.722174-1-Penny.Zheng@arm.com>
References: <20220607073031.722174-1-Penny.Zheng@arm.com>

Later we want to use acquire_domstatic_pages() to populate memory for
static domains at runtime. However, that path currently does a lot of
pointless work (checking mfn_valid(), scrubbing the free part, cleaning
the cache, ...) even though we already know the pages are valid and
belong to the guest.

This commit splits acquire_staticmem_pages() into two parts and
introduces prepare_staticmem_pages() to bypass all the "pointless work".
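Illustration only, not part of this patch: the split is meant to let a
runtime path reuse just the page-state transition while skipping the
boot-only validity checks and cache maintenance. A hypothetical runtime
caller could then look roughly like the sketch below; the function name
and error code are made up here, and it further assumes the __init
attribute on prepare_staticmem_pages() is dropped once a runtime user
exists.

    /*
     * Hypothetical runtime user (sketch only): the MFN range is already
     * known to be valid and owned by the static domain, so no mfn_valid()
     * loop, no scrubbing and no flush_page_to_ram() is needed here.
     */
    static int runtime_populate_staticmem(mfn_t smfn, unsigned long nr_mfns,
                                          unsigned int memflags)
    {
        struct page_info *pg = mfn_to_page(smfn);

        /* Only the state transition under heap_lock is performed. */
        if ( !prepare_staticmem_pages(pg, nr_mfns, memflags) )
            return -EBUSY; /* pages were not free; helper rolled back */

        return 0;
    }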
Signed-off-by: Penny Zheng
Acked-by: Jan Beulich
---
v6 changes:
- adapt to PGC_static
---
v5 changes:
- new commit
---
 xen/common/page_alloc.c | 61 ++++++++++++++++++++++++-----------------
 1 file changed, 36 insertions(+), 25 deletions(-)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 886b5d82a2..9004dd41c1 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -2661,26 +2661,13 @@ void free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
     }
 }
 
-/*
- * Acquire nr_mfns contiguous reserved pages, starting at #smfn, of
- * static memory.
- * This function needs to be reworked if used outside of boot.
- */
-static struct page_info * __init acquire_staticmem_pages(mfn_t smfn,
-                                                         unsigned long nr_mfns,
-                                                         unsigned int memflags)
+static bool __init prepare_staticmem_pages(struct page_info *pg,
+                                           unsigned long nr_mfns,
+                                           unsigned int memflags)
 {
     bool need_tlbflush = false;
     uint32_t tlbflush_timestamp = 0;
     unsigned long i;
-    struct page_info *pg;
-
-    ASSERT(nr_mfns);
-    for ( i = 0; i < nr_mfns; i++ )
-        if ( !mfn_valid(mfn_add(smfn, i)) )
-            return NULL;
-
-    pg = mfn_to_page(smfn);
 
     spin_lock(&heap_lock);
 
@@ -2691,7 +2678,7 @@ static struct page_info * __init acquire_staticmem_pages(mfn_t smfn,
         {
             printk(XENLOG_ERR
                    "pg[%lu] Static MFN %"PRI_mfn" c=%#lx t=%#x\n",
-                   i, mfn_x(smfn) + i,
+                   i, mfn_x(page_to_mfn(pg)) + i,
                    pg[i].count_info, pg[i].tlbflush_timestamp);
             goto out_err;
         }
@@ -2715,6 +2702,38 @@ static struct page_info * __init acquire_staticmem_pages(mfn_t smfn,
     if ( need_tlbflush )
         filtered_flush_tlb_mask(tlbflush_timestamp);
 
+    return true;
+
+ out_err:
+    while ( i-- )
+        pg[i].count_info = PGC_static | PGC_state_free;
+
+    spin_unlock(&heap_lock);
+
+    return false;
+}
+
+/*
+ * Acquire nr_mfns contiguous reserved pages, starting at #smfn, of
+ * static memory.
+ * This function needs to be reworked if used outside of boot.
+ */
+static struct page_info * __init acquire_staticmem_pages(mfn_t smfn,
+                                                         unsigned long nr_mfns,
+                                                         unsigned int memflags)
+{
+    unsigned long i;
+    struct page_info *pg;
+
+    ASSERT(nr_mfns);
+    for ( i = 0; i < nr_mfns; i++ )
+        if ( !mfn_valid(mfn_add(smfn, i)) )
+            return NULL;
+
+    pg = mfn_to_page(smfn);
+    if ( !prepare_staticmem_pages(pg, nr_mfns, memflags) )
+        return NULL;
+
     /*
      * Ensure cache and RAM are consistent for platforms where the guest
      * can control its own visibility of/through the cache.
      */
@@ -2723,14 +2742,6 @@ static struct page_info * __init acquire_staticmem_pages(mfn_t smfn,
         flush_page_to_ram(mfn_x(smfn) + i, !(memflags & MEMF_no_icache_flush));
 
     return pg;
-
- out_err:
-    while ( i-- )
-        pg[i].count_info = PGC_static | PGC_state_free;
-
-    spin_unlock(&heap_lock);
-
-    return NULL;
 }
 
 /*
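
Illustration only, not part of this patch: as implemented above, the
helper's contract for callers is that heap_lock is always dropped again
before returning, and on failure every page touched so far has been
reset to PGC_static | PGC_state_free. A caller therefore has nothing to
roll back and can follow the same minimal pattern as
acquire_staticmem_pages():

    pg = mfn_to_page(smfn);
    if ( !prepare_staticmem_pages(pg, nr_mfns, memflags) )
        return NULL; /* helper already restored the pages to free state */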