From patchwork Fri Feb 7 15:57:00 2020
X-Patchwork-Submitter: David Woodhouse
X-Patchwork-Id: 11370721
From: David Woodhouse
To: Jan Beulich
Cc: Stefano Stabellini, Julien Grall, Wei Liu, Konrad Rzeszutek Wilk,
    George Dunlap, Andrew Cooper, Ian Jackson, George Dunlap,
    Jeff Kubascik, Stewart Hildebrand, xen-devel@lists.xenproject.org
Date: Fri, 7 Feb 2020 15:57:00 +0000
Message-Id: <20200207155701.2781820-1-dwmw2@infradead.org>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <56f7fe21daff2dc4bf8db7ee356666233bdb0f7a.camel@infradead.org>
References: <56f7fe21daff2dc4bf8db7ee356666233bdb0f7a.camel@infradead.org>
Subject: [Xen-devel] [PATCH 1/2] xen/mm: fold PGC_broken into PGC_state bits

From: David Woodhouse

Only PGC_state_offlining and PGC_state_offlined are valid in conjunction
with PGC_broken. The other two states (free and inuse) were never valid
for a broken page.
By folding PGC_broken in, we can have three bits for PGC_state, which
allows up to 8 states, of which 6 are currently used and 2 are available
for new use cases.

Signed-off-by: David Woodhouse
---
 xen/arch/x86/domctl.c    |  2 +-
 xen/common/page_alloc.c  | 67 ++++++++++++++++++++++++----------------
 xen/include/asm-arm/mm.h | 26 +++++++++++-----
 xen/include/asm-x86/mm.h | 33 +++++++++++++-------
 4 files changed, 82 insertions(+), 46 deletions(-)

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 4fa9c91140..17a318e16d 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -415,7 +415,7 @@ long arch_do_domctl(
             if ( page->u.inuse.type_info & PGT_pinned )
                 type |= XEN_DOMCTL_PFINFO_LPINTAB;

-            if ( page->count_info & PGC_broken )
+            if ( page_is_broken(page) )
                 type = XEN_DOMCTL_PFINFO_BROKEN;
         }

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 97902d42c1..4084503554 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -1093,7 +1093,7 @@ static int reserve_offlined_page(struct page_info *head)
         struct page_info *pg;
         int next_order;

-        if ( page_state_is(cur_head, offlined) )
+        if ( page_is_offlined(cur_head) )
         {
             cur_head++;
             if ( first_dirty != INVALID_DIRTY_IDX && first_dirty )
@@ -1113,7 +1113,7 @@ static int reserve_offlined_page(struct page_info *head)
             for ( i = (1 << cur_order), pg = cur_head + (1 << cur_order );
                   i < (1 << next_order);
                   i++, pg++ )
-                if ( page_state_is(pg, offlined) )
+                if ( page_is_offlined(pg) )
                     break;
             if ( i == ( 1 << next_order) )
             {
@@ -1145,16 +1145,19 @@ static int reserve_offlined_page(struct page_info *head)

     for ( cur_head = head; cur_head < head + ( 1UL << head_order); cur_head++ )
     {
-        if ( !page_state_is(cur_head, offlined) )
+        struct page_list_head *list;
+        if ( page_state_is(cur_head, offlined) )
+            list = &page_offlined_list;
+        else if ( page_state_is(cur_head, broken) )
+            list = &page_broken_list;
+        else
             continue;

         avail[node][zone]--;
         total_avail_pages--;
         ASSERT(total_avail_pages >= 0);

-        page_list_add_tail(cur_head,
-                           test_bit(_PGC_broken, &cur_head->count_info) ?
-                           &page_broken_list : &page_offlined_list);
+        page_list_add_tail(cur_head, list);

         count++;
     }
@@ -1404,13 +1407,16 @@ static void free_heap_pages(
         switch ( pg[i].count_info & PGC_state )
         {
         case PGC_state_inuse:
-            BUG_ON(pg[i].count_info & PGC_broken);
             pg[i].count_info = PGC_state_free;
             break;

         case PGC_state_offlining:
-            pg[i].count_info = (pg[i].count_info & PGC_broken) |
-                               PGC_state_offlined;
+            pg[i].count_info = PGC_state_offlined;
+            tainted = 1;
+            break;
+
+        case PGC_state_broken_offlining:
+            pg[i].count_info = PGC_state_broken;
             tainted = 1;
             break;

@@ -1527,16 +1533,28 @@ static unsigned long mark_page_offline(struct page_info *pg, int broken)
     do {
         nx = x = y;

-        if ( ((x & PGC_state) != PGC_state_offlined) &&
-             ((x & PGC_state) != PGC_state_offlining) )
+        nx &= ~PGC_state;
+
+        switch ( x & PGC_state )
         {
-            nx &= ~PGC_state;
-            nx |= (((x & PGC_state) == PGC_state_free)
-                   ? PGC_state_offlined : PGC_state_offlining);
-        }
+        case PGC_state_inuse:
+        case PGC_state_offlining:
+            nx |= broken ? PGC_state_broken_offlining : PGC_state_offlining;
+            break;
+
+        case PGC_state_free:
+            nx |= broken ? PGC_state_broken : PGC_state_offlined;
+            break;

-        if ( broken )
-            nx |= PGC_broken;
+        case PGC_state_broken_offlining:
+            nx |= PGC_state_broken_offlining;
+            break;

+        case PGC_state_offlined:
+        case PGC_state_broken:
+            nx |= PGC_state_broken;
+            break;
+        }

         if ( x == nx )
             break;
@@ -1609,7 +1624,7 @@ int offline_page(mfn_t mfn, int broken, uint32_t *status)
      * need to prevent malicious guest access the broken page again.
      * Under such case, hypervisor shutdown guest, preventing recursive mce.
      */
-    if ( (pg->count_info & PGC_broken) && (owner = page_get_owner(pg)) )
+    if ( page_is_broken(pg) && (owner = page_get_owner(pg)) )
     {
         *status = PG_OFFLINE_AGAIN;
         domain_crash(owner);
@@ -1620,7 +1635,7 @@ int offline_page(mfn_t mfn, int broken, uint32_t *status)

     old_info = mark_page_offline(pg, broken);

-    if ( page_state_is(pg, offlined) )
+    if ( page_is_offlined(pg) )
     {
         reserve_heap_page(pg);

@@ -1699,14 +1714,14 @@ unsigned int online_page(mfn_t mfn, uint32_t *status)
     do {
         ret = *status = 0;

-        if ( y & PGC_broken )
+        if ( (y & PGC_state) == PGC_state_broken ||
+             (y & PGC_state) == PGC_state_broken_offlining )
         {
             ret = -EINVAL;
             *status = PG_ONLINE_FAILED | PG_ONLINE_BROKEN;
             break;
         }
-
-        if ( (y & PGC_state) == PGC_state_offlined )
+        else if ( (y & PGC_state) == PGC_state_offlined )
         {
             page_list_del(pg, &page_offlined_list);
             *status = PG_ONLINE_ONLINED;
@@ -1747,11 +1762,11 @@ int query_page_offline(mfn_t mfn, uint32_t *status)

     pg = mfn_to_page(mfn);
-    if ( page_state_is(pg, offlining) )
+    if ( page_is_offlining(pg) )
         *status |= PG_OFFLINE_STATUS_OFFLINE_PENDING;
-    if ( pg->count_info & PGC_broken )
+    if ( page_is_broken(pg) )
         *status |= PG_OFFLINE_STATUS_BROKEN;
-    if ( page_state_is(pg, offlined) )
+    if ( page_is_offlined(pg) )
         *status |= PG_OFFLINE_STATUS_OFFLINED;

     spin_unlock(&heap_lock);
@@ -2483,7 +2498,7 @@ __initcall(pagealloc_keyhandler_init);

 void scrub_one_page(struct page_info *pg)
 {
-    if ( unlikely(pg->count_info & PGC_broken) )
+    if ( unlikely(page_is_broken(pg)) )
         return;

 #ifndef NDEBUG
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index 333efd3a60..c9466c8ca0 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -112,13 +112,25 @@ struct page_info
-  /* Page is broken? */
-#define _PGC_broken       PG_shift(7)
-#define PGC_broken        PG_mask(1, 7)
-  /* Mutually-exclusive page states: { inuse, offlining, offlined, free }. */
-#define PGC_state         PG_mask(3, 9)
-#define PGC_state_inuse   PG_mask(0, 9)
-#define PGC_state_offlining PG_mask(1, 9)
-#define PGC_state_offlined PG_mask(2, 9)
-#define PGC_state_free    PG_mask(3, 9)
-#define page_state_is(pg, st) (((pg)->count_info&PGC_state) == PGC_state_##st)
+  /*
+   * Mutually-exclusive page states:
+   * { inuse, offlining, offlined, free, broken_offlining, broken }
+   */
+#define PGC_state                  PG_mask(7, 9)
+#define PGC_state_inuse            PG_mask(0, 9)
+#define PGC_state_offlining        PG_mask(1, 9)
+#define PGC_state_offlined         PG_mask(2, 9)
+#define PGC_state_free             PG_mask(3, 9)
+#define PGC_state_broken_offlining PG_mask(4, 9)
+#define PGC_state_broken           PG_mask(5, 9)
+
+#define page_state_is(pg, st) (((pg)->count_info&PGC_state) == PGC_state_##st)
+#define page_is_broken(pg)    (page_state_is((pg), broken_offlining) || \
+                               page_state_is((pg), broken))
+#define page_is_offlined(pg)  (page_state_is((pg), broken) || \
+                               page_state_is((pg), offlined))
+#define page_is_offlining(pg) (page_state_is((pg), broken_offlining) || \
+                               page_state_is((pg), offlining))

   /* Count of references to this frame. */
 #define PGC_count_width   PG_shift(9)
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 2ca8882ad0..3edadf7a7c 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -67,18 +67,27 @@
  /* 3-bit PAT/PCD/PWT cache-attribute hint. */
 #define PGC_cacheattr_base PG_shift(6)
 #define PGC_cacheattr_mask PG_mask(7, 6)
- /* Page is broken? */
-#define _PGC_broken       PG_shift(7)
-#define PGC_broken        PG_mask(1, 7)
- /* Mutually-exclusive page states: { inuse, offlining, offlined, free }. */
-#define PGC_state         PG_mask(3, 9)
-#define PGC_state_inuse   PG_mask(0, 9)
-#define PGC_state_offlining PG_mask(1, 9)
-#define PGC_state_offlined PG_mask(2, 9)
-#define PGC_state_free    PG_mask(3, 9)
-#define page_state_is(pg, st) (((pg)->count_info&PGC_state) == PGC_state_##st)
-
- /* Count of references to this frame. */
+ /*
+  * Mutually-exclusive page states:
+  * { inuse, offlining, offlined, free, broken_offlining, broken }
+  */
+#define PGC_state                  PG_mask(7, 9)
+#define PGC_state_inuse            PG_mask(0, 9)
+#define PGC_state_offlining        PG_mask(1, 9)
+#define PGC_state_offlined         PG_mask(2, 9)
+#define PGC_state_free             PG_mask(3, 9)
+#define PGC_state_broken_offlining PG_mask(4, 9)
+#define PGC_state_broken           PG_mask(5, 9)
+
+#define page_state_is(pg, st) (((pg)->count_info&PGC_state) == PGC_state_##st)
+#define page_is_broken(pg)    (page_state_is((pg), broken_offlining) || \
+                               page_state_is((pg), broken))
+#define page_is_offlined(pg)  (page_state_is((pg), broken) || \
+                               page_state_is((pg), offlined))
+#define page_is_offlining(pg) (page_state_is((pg), broken_offlining) || \
+                               page_state_is((pg), offlining))
+
+/* Count of references to this frame. */
 #define PGC_count_width   PG_shift(9)
 #define PGC_count_mask    ((1UL<<PGC_count_width)-1)

X-Patchwork-Submitter: David Woodhouse
X-Patchwork-Id: 11370723
From: David Woodhouse
To: Jan Beulich
Date: Fri, 7 Feb 2020 15:57:01 +0000
Message-Id: <20200207155701.2781820-2-dwmw2@infradead.org>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <56f7fe21daff2dc4bf8db7ee356666233bdb0f7a.camel@infradead.org>
References: <56f7fe21daff2dc4bf8db7ee356666233bdb0f7a.camel@infradead.org>
Subject: [Xen-devel] [PATCH 2/2] xen/mm: Introduce PG_state_uninitialised
Cc: Stefano Stabellini, Julien Grall, Wei Liu, Konrad Rzeszutek Wilk,
    George Dunlap, Andrew Cooper, Ian Jackson, George Dunlap,
    Jeff Kubascik, Stewart Hildebrand, xen-devel@lists.xenproject.org

From: David Woodhouse

It is possible for pages to enter general circulation without ever being
processed by init_heap_pages(). For example, pages of the multiboot module
containing the initramfs may be assigned via assign_pages() to dom0 as it
is created. And some code, including map_pages_to_xen(), checks
'system_state' to determine whether to use the boot or the heap allocator,
but it seems impossible to prove that pages allocated by the boot
allocator are not subsequently freed with free_heap_pages().

This actually works fine in the majority of cases; there are only a few
esoteric corner cases which init_heap_pages() handles before handing the
page range off to free_heap_pages():

 • Excluding MFN #0 to avoid inappropriate cross-zone merging.
 • Ensuring that the node information structures exist, when the first
   page(s) of a given node are handled.
 • High-order allocations crossing from one node to another.

To allow such pages to be detected, shift PGC_state_inuse from its
current value of zero to another value. Use zero, which is the initial
state of the entire frame table, as PGC_state_uninitialised. Fix a couple
of assertions which were assuming that PGC_state_inuse is zero, and make
them cope with the PGC_state_uninitialised case too where appropriate.
Finally, make free_xenheap_pages() and free_domheap_pages() call through
to init_heap_pages() instead of directly to free_heap_pages() in the case
where pages are being freed which have never passed through
init_heap_pages().

Signed-off-by: David Woodhouse
---
 xen/arch/x86/mm.c        |  3 ++-
 xen/common/page_alloc.c  | 41 ++++++++++++++++++++++++++--------------
 xen/include/asm-arm/mm.h |  3 ++-
 xen/include/asm-x86/mm.h |  3 ++-
 4 files changed, 33 insertions(+), 17 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 9b33829084..bf660ee8eb 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -488,7 +488,8 @@ void share_xen_page_with_guest(struct page_info *page, struct domain *d,
     page_set_owner(page, d);
     smp_wmb(); /* install valid domain ptr before updating refcnt. */
-    ASSERT((page->count_info & ~PGC_xen_heap) == 0);
+    ASSERT((page->count_info & ~PGC_xen_heap) == PGC_state_inuse ||
+           (page->count_info & ~PGC_xen_heap) == PGC_state_uninitialised);

     /* Only add to the allocation list if the domain isn't dying. */
     if ( !d->is_dying )
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 4084503554..9703a2c664 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -1407,6 +1407,7 @@ static void free_heap_pages(
         switch ( pg[i].count_info & PGC_state )
         {
         case PGC_state_inuse:
+        case PGC_state_uninitialised:
             pg[i].count_info = PGC_state_free;
             break;

@@ -1780,11 +1781,10 @@ int query_page_offline(mfn_t mfn, uint32_t *status)
  * latter is not on a MAX_ORDER boundary, then we reserve the page by
  * not freeing it to the buddy allocator.
  */
-static void init_heap_pages(
-    struct page_info *pg, unsigned long nr_pages)
+static void init_heap_pages(struct page_info *pg, unsigned long nr_pages,
+                            bool scrub)
 {
     unsigned long i;
-    bool idle_scrub = false;

     /*
      * Keep MFN 0 away from the buddy allocator to avoid crossing zone
@@ -1809,7 +1809,7 @@ static void init_heap_pages(
     spin_unlock(&heap_lock);

     if ( system_state < SYS_STATE_active && opt_bootscrub == BOOTSCRUB_IDLE )
-        idle_scrub = true;
+        scrub = true;

     for ( i = 0; i < nr_pages; i++ )
     {
@@ -1837,7 +1837,7 @@ static void init_heap_pages(
             nr_pages -= n;
         }

-        free_heap_pages(pg + i, 0, scrub_debug || idle_scrub);
+        free_heap_pages(pg + i, 0, scrub_debug || scrub);
     }
 }

@@ -1873,7 +1873,7 @@ void __init end_boot_allocator(void)
         if ( (r->s < r->e) &&
              (phys_to_nid(pfn_to_paddr(r->s)) == cpu_to_node(0)) )
         {
-            init_heap_pages(mfn_to_page(_mfn(r->s)), r->e - r->s);
+            init_heap_pages(mfn_to_page(_mfn(r->s)), r->e - r->s, false);
             r->e = r->s;
             break;
         }
@@ -1882,7 +1882,7 @@ void __init end_boot_allocator(void)
     {
         struct bootmem_region *r = &bootmem_region_list[i];
         if ( r->s < r->e )
-            init_heap_pages(mfn_to_page(_mfn(r->s)), r->e - r->s);
+            init_heap_pages(mfn_to_page(_mfn(r->s)), r->e - r->s, false);
     }

     nr_bootmem_regions = 0;
@@ -2151,7 +2151,7 @@ void init_xenheap_pages(paddr_t ps, paddr_t pe)

     memguard_guard_range(maddr_to_virt(ps), pe - ps);

-    init_heap_pages(maddr_to_page(ps), (pe - ps) >> PAGE_SHIFT);
+    init_heap_pages(maddr_to_page(ps), (pe - ps) >> PAGE_SHIFT, false);
 }


@@ -2174,14 +2174,20 @@ void *alloc_xenheap_pages(unsigned int order, unsigned int memflags)

 void free_xenheap_pages(void *v, unsigned int order)
 {
+    struct page_info *pg;
     ASSERT(!in_irq());

     if ( v == NULL )
         return;

+    pg = virt_to_page(v);
+
     memguard_guard_range(v, 1 << (order + PAGE_SHIFT));

-    free_heap_pages(virt_to_page(v), order, false);
+    if ( unlikely(page_state_is(pg, uninitialised)) )
+        init_heap_pages(pg, 1 << order, true);
+    else
+        free_heap_pages(pg, order, false);
 }

 #else  /* !CONFIG_SEPARATE_XENHEAP */
@@ -2237,7 +2243,10 @@ void free_xenheap_pages(void *v, unsigned int order)
     for ( i = 0; i < (1u << order); i++ )
         pg[i].count_info &= ~PGC_xen_heap;

-    free_heap_pages(pg, order, true);
+    if ( unlikely(page_state_is(pg, uninitialised)) )
+        init_heap_pages(pg, 1 << order, true);
+    else
+        free_heap_pages(pg, order, true);
 }

 #endif  /* CONFIG_SEPARATE_XENHEAP */
@@ -2260,7 +2269,7 @@ void init_domheap_pages(paddr_t ps, paddr_t pe)
     if ( mfn_x(emfn) <= mfn_x(smfn) )
         return;

-    init_heap_pages(mfn_to_page(smfn), mfn_x(emfn) - mfn_x(smfn));
+    init_heap_pages(mfn_to_page(smfn), mfn_x(emfn) - mfn_x(smfn), false);
 }


@@ -2301,10 +2310,11 @@ int assign_pages(
     for ( i = 0; i < (1 << order); i++ )
     {
         ASSERT(page_get_owner(&pg[i]) == NULL);
-        ASSERT(!pg[i].count_info);
+        ASSERT(pg[i].count_info == PGC_state_inuse ||
+               pg[i].count_info == PGC_state_uninitialised);
         page_set_owner(&pg[i], d);
         smp_wmb(); /* Domain pointer must be visible before updating refcnt. */
-        pg[i].count_info = PGC_allocated | 1;
+        pg[i].count_info |= PGC_allocated | 1;

         page_list_add_tail(&pg[i], &d->page_list);
     }
@@ -2427,7 +2437,10 @@ void free_domheap_pages(struct page_info *pg, unsigned int order)
             scrub = 1;
         }

-        free_heap_pages(pg, order, scrub);
+        if ( unlikely(page_state_is(pg, uninitialised)) )
+            init_heap_pages(pg, 1 << order, scrub);
+        else
+            free_heap_pages(pg, order, scrub);
     }

     if ( drop_dom_ref )
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index c9466c8ca0..c696941600 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -117,12 +117,13 @@ struct page_info
  * { inuse, offlining, offlined, free, broken_offlining, broken }
  */
 #define PGC_state                  PG_mask(7, 9)
-#define PGC_state_inuse            PG_mask(0, 9)
+#define PGC_state_uninitialised    PG_mask(0, 9)
 #define PGC_state_offlining        PG_mask(1, 9)
 #define PGC_state_offlined         PG_mask(2, 9)
 #define PGC_state_free             PG_mask(3, 9)
 #define PGC_state_broken_offlining PG_mask(4, 9)
 #define PGC_state_broken           PG_mask(5, 9)
+#define PGC_state_inuse            PG_mask(6, 9)

 #define page_state_is(pg, st) (((pg)->count_info&PGC_state) == PGC_state_##st)
 #define page_is_broken(pg)    (page_state_is((pg), broken_offlining) || \
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 3edadf7a7c..645368e6a9 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -72,12 +72,13 @@
  * { inuse, offlining, offlined, free, broken_offlining, broken }
  */
 #define PGC_state                  PG_mask(7, 9)
-#define PGC_state_inuse            PG_mask(0, 9)
+#define PGC_state_uninitialised    PG_mask(0, 9)
 #define PGC_state_offlining        PG_mask(1, 9)
 #define PGC_state_offlined         PG_mask(2, 9)
 #define PGC_state_free             PG_mask(3, 9)
 #define PGC_state_broken_offlining PG_mask(4, 9)
 #define PGC_state_broken           PG_mask(5, 9)
+#define PGC_state_inuse            PG_mask(6, 9)

 #define page_state_is(pg, st) (((pg)->count_info&PGC_state) == PGC_state_##st)
 #define page_is_broken(pg)    (page_state_is((pg), broken_offlining) || \