From patchwork Thu Mar 19 21:21:49 2020
X-Patchwork-Submitter: David Woodhouse
X-Patchwork-Id: 11448105
From: David Woodhouse <dwmw2@infradead.org>
To: xen-devel@lists.xenproject.org
Date: Thu, 19 Mar 2020 21:21:49 +0000
Message-Id: <20200319212150.2651419-1-dwmw2@infradead.org>
In-Reply-To: <759b48cc361af1136e3cf1658f3dcb1d2937db9c.camel@infradead.org>
References: <759b48cc361af1136e3cf1658f3dcb1d2937db9c.camel@infradead.org>
Subject: [Xen-devel] [PATCH 1/2] xen/mm: fold PGC_broken into PGC_state bits
Cc: Stefano Stabellini, Julien Grall, Wei Liu, Andrew Cooper, Ian Jackson,
    George Dunlap, hongyxia@amazon.com, Jan Beulich, Volodymyr Babchuk,
    Roger Pau Monné

From: David Woodhouse <dwmw2@infradead.org>

Only PGC_state_offlining and PGC_state_offlined are valid in conjunction
with PGC_broken. The other two states (free and inuse) were never valid
for a broken page.
By folding PGC_broken in, we can have three bits for PGC_state, which
allows up to 8 states. Of these, 6 are currently used and 2 are
available for new use cases.

Signed-off-by: David Woodhouse <dwmw2@infradead.org>
---
 xen/arch/x86/domctl.c    |  2 +-
 xen/common/page_alloc.c  | 66 ++++++++++++++++++++++------------------
 xen/include/asm-arm/mm.h | 38 +++++++++++++++--------
 xen/include/asm-x86/mm.h | 36 ++++++++++++++++------
 4 files changed, 89 insertions(+), 53 deletions(-)

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index ed86762fa6..a411f64afa 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -422,7 +422,7 @@ long arch_do_domctl(
             if ( page->u.inuse.type_info & PGT_pinned )
                 type |= XEN_DOMCTL_PFINFO_LPINTAB;
 
-            if ( page->count_info & PGC_broken )
+            if ( page_is_broken(page) )
                 type = XEN_DOMCTL_PFINFO_BROKEN;
         }
 
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 76d37226df..8d72a64f4e 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -1093,7 +1093,7 @@ static int reserve_offlined_page(struct page_info *head)
         struct page_info *pg;
         int next_order;
 
-        if ( page_state_is(cur_head, offlined) )
+        if ( page_is_offlined(cur_head) )
         {
             cur_head++;
             if ( first_dirty != INVALID_DIRTY_IDX && first_dirty )
@@ -1113,7 +1113,7 @@ static int reserve_offlined_page(struct page_info *head)
             for ( i = (1 << cur_order), pg = cur_head + (1 << cur_order );
                   i < (1 << next_order);
                   i++, pg++ )
-                if ( page_state_is(pg, offlined) )
+                if ( page_is_offlined(pg) )
                     break;
             if ( i == ( 1 << next_order) )
             {
@@ -1145,16 +1145,20 @@ static int reserve_offlined_page(struct page_info *head)
 
     for ( cur_head = head; cur_head < head + ( 1UL << head_order); cur_head++ )
     {
-        if ( !page_state_is(cur_head, offlined) )
+        struct page_list_head *list;
+
+        if ( page_state_is(cur_head, offlined) )
+            list = &page_offlined_list;
+        else if ( page_state_is(cur_head, broken) )
+            list = &page_broken_list;
+        else
             continue;
 
         avail[node][zone]--;
         total_avail_pages--;
         ASSERT(total_avail_pages >= 0);
 
-        page_list_add_tail(cur_head,
-                           test_bit(_PGC_broken, &cur_head->count_info) ?
-                           &page_broken_list : &page_offlined_list);
+        page_list_add_tail(cur_head, list);
 
         count++;
     }
 
@@ -1404,13 +1408,16 @@ static void free_heap_pages(
         switch ( pg[i].count_info & PGC_state )
         {
         case PGC_state_inuse:
-            BUG_ON(pg[i].count_info & PGC_broken);
            pg[i].count_info = PGC_state_free;
            break;
 
         case PGC_state_offlining:
-            pg[i].count_info = (pg[i].count_info & PGC_broken) |
-                               PGC_state_offlined;
+            pg[i].count_info = PGC_state_offlined;
+            tainted = 1;
+            break;
+
+        case PGC_state_broken_offlining:
+            pg[i].count_info = PGC_state_broken;
             tainted = 1;
             break;
 
@@ -1527,16 +1534,16 @@ static unsigned long mark_page_offline(struct page_info *pg, int broken)
     do {
         nx = x = y;
 
-        if ( ((x & PGC_state) != PGC_state_offlined) &&
-             ((x & PGC_state) != PGC_state_offlining) )
-        {
-            nx &= ~PGC_state;
-            nx |= (((x & PGC_state) == PGC_state_free)
-                   ? PGC_state_offlined : PGC_state_offlining);
-        }
+        nx &= ~PGC_state;
 
-        if ( broken )
-            nx |= PGC_broken;
+        /* If it was already broken, it stays broken */
+        if ( pgc_is_broken(x) )
+            broken = 1;
+
+        if ( pgc_is_offlined(x) || pgc_is(x, free) )
+            nx |= broken ? PGC_state_broken : PGC_state_offlined;
+        else
+            nx |= broken ? PGC_state_broken_offlining : PGC_state_offlining;
 
         if ( x == nx )
             break;
@@ -1609,7 +1616,7 @@ int offline_page(mfn_t mfn, int broken, uint32_t *status)
      * need to prevent malicious guest access the broken page again.
      * Under such case, hypervisor shutdown guest, preventing recursive mce.
      */
-    if ( (pg->count_info & PGC_broken) && (owner = page_get_owner(pg)) )
+    if ( page_is_broken(pg) && (owner = page_get_owner(pg)) )
     {
         *status = PG_OFFLINE_AGAIN;
         domain_crash(owner);
@@ -1620,7 +1627,7 @@ int offline_page(mfn_t mfn, int broken, uint32_t *status)
 
     old_info = mark_page_offline(pg, broken);
 
-    if ( page_state_is(pg, offlined) )
+    if ( page_is_offlined(pg) )
     {
         reserve_heap_page(pg);
 
@@ -1699,19 +1706,18 @@ unsigned int online_page(mfn_t mfn, uint32_t *status)
     do {
         ret = *status = 0;
 
-        if ( y & PGC_broken )
+        if ( pgc_is_broken(y) )
         {
             ret = -EINVAL;
-            *status = PG_ONLINE_FAILED |PG_ONLINE_BROKEN;
+            *status = PG_ONLINE_FAILED | PG_ONLINE_BROKEN;
             break;
         }
-
-        if ( (y & PGC_state) == PGC_state_offlined )
+        else if ( pgc_is(y, offlined) )
         {
             page_list_del(pg, &page_offlined_list);
             *status = PG_ONLINE_ONLINED;
         }
-        else if ( (y & PGC_state) == PGC_state_offlining )
+        else if ( pgc_is(y, offlining) )
         {
             *status = PG_ONLINE_ONLINED;
         }
@@ -1726,7 +1732,7 @@ unsigned int online_page(mfn_t mfn, uint32_t *status)
 
     spin_unlock(&heap_lock);
 
-    if ( (y & PGC_state) == PGC_state_offlined )
+    if ( pgc_is(y, offlined) )
         free_heap_pages(pg, 0, false);
 
     return ret;
@@ -1747,11 +1753,11 @@ int query_page_offline(mfn_t mfn, uint32_t *status)
 
     pg = mfn_to_page(mfn);
 
-    if ( page_state_is(pg, offlining) )
+    if ( page_is_offlining(pg) )
         *status |= PG_OFFLINE_STATUS_OFFLINE_PENDING;
-    if ( pg->count_info & PGC_broken )
+    if ( page_is_broken(pg) )
         *status |= PG_OFFLINE_STATUS_BROKEN;
-    if ( page_state_is(pg, offlined) )
+    if ( page_is_offlined(pg) )
         *status |= PG_OFFLINE_STATUS_OFFLINED;
 
     spin_unlock(&heap_lock);
@@ -2519,7 +2525,7 @@ __initcall(pagealloc_keyhandler_init);
 
 void scrub_one_page(struct page_info *pg)
 {
-    if ( unlikely(pg->count_info & PGC_broken) )
+    if ( unlikely(page_is_broken(pg)) )
         return;
 
 #ifndef NDEBUG
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index 7df91280bc..a877791d1c 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -108,21 +108,35 @@ struct page_info
 /* Page is Xen heap? */
 #define _PGC_xen_heap     PG_shift(2)
 #define PGC_xen_heap      PG_mask(1, 2)
-/* ... */
-/* Page is broken? */
-#define _PGC_broken       PG_shift(7)
-#define PGC_broken        PG_mask(1, 7)
-
-/* Mutually-exclusive page states: { inuse, offlining, offlined, free }. */
-#define PGC_state         PG_mask(3, 9)
-#define PGC_state_inuse   PG_mask(0, 9)
-#define PGC_state_offlining PG_mask(1, 9)
-#define PGC_state_offlined PG_mask(2, 9)
-#define PGC_state_free    PG_mask(3, 9)
-#define page_state_is(pg, st) (((pg)->count_info&PGC_state) == PGC_state_##st)
+
+/*
+ * Mutually-exclusive page states:
+ * { inuse, offlining, offlined, free, broken_offlining, broken }
+ */
+#define PGC_state                  PG_mask(7, 9)
+#define PGC_state_inuse            PG_mask(0, 9)
+#define PGC_state_offlining        PG_mask(1, 9)
+#define PGC_state_offlined         PG_mask(2, 9)
+#define PGC_state_free             PG_mask(3, 9)
+#define PGC_state_broken_offlining PG_mask(4, 9) /* Broken and offlining */
+#define PGC_state_broken           PG_mask(5, 9) /* Broken and offlined */
+
+#define pgc_is(pgc, st)       (((pgc) & PGC_state) == PGC_state_##st)
+#define page_state_is(pg, st) pgc_is((pg)->count_info, st)
+
+#define pgc_is_broken(pgc)    (pgc_is(pgc, broken) || \
+                               pgc_is(pgc, broken_offlining))
+#define pgc_is_offlined(pgc)  (pgc_is(pgc, offlined) || \
+                               pgc_is(pgc, broken))
+#define pgc_is_offlining(pgc) (pgc_is(pgc, offlining) || \
+                               pgc_is(pgc, broken_offlining))
+
+#define page_is_broken(pg)    (pgc_is_broken((pg)->count_info))
+#define page_is_offlined(pg)  (pgc_is_offlined((pg)->count_info))
+#define page_is_offlining(pg) (pgc_is_offlining((pg)->count_info))
 
 /* Page is not reference counted */
 #define _PGC_extra        PG_shift(10)
 #define PGC_extra         PG_mask(1, 10)
 
 /* Count of references to this frame. */
 #define PGC_count_width   PG_shift(10)
 #define PGC_count_mask    ((1UL<<PGC_count_width)-1)
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ ... @@
-/* Page is broken? */
-#define _PGC_broken       PG_shift(7)
-#define PGC_broken        PG_mask(1, 7)
-
-/* Mutually-exclusive page states: { inuse, offlining, offlined, free }. */
-#define PGC_state         PG_mask(3, 9)
-#define PGC_state_inuse   PG_mask(0, 9)
-#define PGC_state_offlining PG_mask(1, 9)
-#define PGC_state_offlined PG_mask(2, 9)
-#define PGC_state_free    PG_mask(3, 9)
-#define page_state_is(pg, st) (((pg)->count_info&PGC_state) == PGC_state_##st)
+/*
+ * Mutually-exclusive page states:
+ * { inuse, offlining, offlined, free, broken_offlining, broken }
+ */
+#define PGC_state                  PG_mask(7, 9)
+#define PGC_state_inuse            PG_mask(0, 9)
+#define PGC_state_offlining        PG_mask(1, 9)
+#define PGC_state_offlined         PG_mask(2, 9)
+#define PGC_state_free             PG_mask(3, 9)
+#define PGC_state_broken_offlining PG_mask(4, 9) /* Broken and offlining */
+#define PGC_state_broken           PG_mask(5, 9) /* Broken and offlined */
+
+#define pgc_is(pgc, st)       (((pgc) & PGC_state) == PGC_state_##st)
+#define page_state_is(pg, st) pgc_is((pg)->count_info, st)
+
+#define pgc_is_broken(pgc)    (pgc_is(pgc, broken) || \
+                               pgc_is(pgc, broken_offlining))
+#define pgc_is_offlined(pgc)  (pgc_is(pgc, offlined) || \
+                               pgc_is(pgc, broken))
+#define pgc_is_offlining(pgc) (pgc_is(pgc, offlining) || \
+                               pgc_is(pgc, broken_offlining))
+
+#define page_is_broken(pg)    (pgc_is_broken((pg)->count_info))
+#define page_is_offlined(pg)  (pgc_is_offlined((pg)->count_info))
+#define page_is_offlining(pg) (pgc_is_offlining((pg)->count_info))
 
 /* Page is not reference counted */
 #define _PGC_extra        PG_shift(10)
 #define PGC_extra         PG_mask(1, 10)