From patchwork Wed Jan 22 08:53:55 2020
From: David Woodhouse <dwmw2@infradead.org>
To: Xen-devel <xen-devel@lists.xenproject.org>
Date: Wed, 22 Jan 2020 08:53:55 +0000
Message-Id: <20200122085357.2092778-12-dwmw2@infradead.org>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20200122085357.2092778-1-dwmw2@infradead.org>
References: <6cbe16ae42ab806df513d359220212d4f01ce43d.camel@infradead.org>
 <20200122085357.2092778-1-dwmw2@infradead.org>
Subject: [Xen-devel] [RFC PATCH v2 12/14] Don't add bad pages above
 HYPERVISOR_VIRT_END to the domheap
Cc: Stefano Stabellini, Julien Grall, Wei Liu, Konrad Rzeszutek Wilk,
 George Dunlap, Andrew Cooper, Varad Gautam, paul@xen.org, Ian Jackson,
 Hongyan Xia, Amit Shah, Roger Pau Monné

From: David Woodhouse <dwmw2@infradead.org>

Signed-off-by: David Woodhouse <dwmw2@infradead.org>
---
 xen/common/page_alloc.c | 83 +++++++++++++++++++++++++++++++++++++++--
 1 file changed, 80 insertions(+), 3 deletions(-)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 20ef25d45a..2a20c12abb 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -1758,6 +1758,18 @@ int query_page_offline(mfn_t mfn, uint32_t *status)
     return 0;
 }
 
+static unsigned long contig_avail_pages(struct page_info *pg, unsigned long max_pages)
+{
+    unsigned long i;
+
+    for ( i = 0 ; i < max_pages; i++)
+    {
+        if ( pg[i].count_info & (PGC_broken | PGC_allocated) )
+            break;
+    }
+    return i;
+}
+
 /*
  * Hand the specified arbitrary page range to the specified heap zone
  * checking the node_id of the previous page. If they differ and the
@@ -1799,18 +1811,24 @@ static void init_heap_pages(
     {
         unsigned int nid = phys_to_nid(page_to_maddr(pg+i));
 
+        /* If the (first) page is already marked bad, or allocated in advance
+         * due to live update, don't add it to the heap. */
+        if (pg[i].count_info & (PGC_broken | PGC_allocated))
+            continue;
+
         if ( unlikely(!avail[nid]) )
         {
+            unsigned long contig_nr_pages = contig_avail_pages(pg + i, nr_pages);
             unsigned long s = mfn_x(page_to_mfn(pg + i));
-            unsigned long e = mfn_x(mfn_add(page_to_mfn(pg + nr_pages - 1), 1));
+            unsigned long e = mfn_x(mfn_add(page_to_mfn(pg + contig_nr_pages - 1), 1));
             bool use_tail = (nid == phys_to_nid(pfn_to_paddr(e - 1))) &&
                             !(s & ((1UL << MAX_ORDER) - 1)) &&
                             (find_first_set_bit(e) <= find_first_set_bit(s));
             unsigned long n;
 
-            n = init_node_heap(nid, mfn_x(page_to_mfn(pg + i)), nr_pages - i,
+            n = init_node_heap(nid, mfn_x(page_to_mfn(pg + i)), contig_nr_pages - i,
                                &use_tail);
-            BUG_ON(i + n > nr_pages);
+            BUG_ON(i + n > contig_nr_pages);
             if ( n && !use_tail )
             {
                 i += n - 1;
@@ -1846,6 +1864,63 @@ static unsigned long avail_heap_pages(
     return free_pages;
 }
 
+static void mark_bad_pages(void)
+{
+    unsigned long bad_spfn, bad_epfn;
+    const char *p;
+    struct page_info *pg;
+#ifdef CONFIG_X86
+    const struct platform_bad_page *badpage;
+    unsigned int i, j, array_size;
+
+    badpage = get_platform_badpages(&array_size);
+    if ( badpage )
+    {
+        for ( i = 0; i < array_size; i++ )
+        {
+            for ( j = 0; j < 1UL << badpage->order; j++ )
+            {
+                if ( mfn_valid(badpage->mfn + j) )
+                {
+                    pg = mfn_to_page(badpage->mfn + j);
+                    pg->count_info |= PGC_broken;
+                    page_list_add_tail(pg, &page_broken_list);
+                }
+            }
+        }
+    }
+#endif
+
+    /* Check new pages against the bad-page list. */
+    p = opt_badpage;
+    while ( *p != '\0' )
+    {
+        bad_spfn = simple_strtoul(p, &p, 0);
+        bad_epfn = bad_spfn;
+
+        if ( *p == '-' )
+        {
+            p++;
+            bad_epfn = simple_strtoul(p, &p, 0);
+            if ( bad_epfn < bad_spfn )
+                bad_epfn = bad_spfn;
+        }
+
+        if ( *p == ',' )
+            p++;
+        else if ( *p != '\0' )
+            break;
+
+        while ( mfn_valid(_mfn(bad_spfn)) && bad_spfn < bad_epfn )
+        {
+            pg = mfn_to_page(_mfn(bad_spfn));
+            pg->count_info |= PGC_broken;
+            page_list_add_tail(pg, &page_broken_list);
+            bad_spfn++;
+        }
+    }
+}
+
 void __init end_boot_allocator(void)
 {
     unsigned int i;
@@ -1870,6 +1945,8 @@ void __init end_boot_allocator(void)
     }
     nr_bootmem_regions = 0;
 
+    mark_bad_pages();
+
     if ( !dma_bitsize && (num_online_nodes() > 1) )
         dma_bitsize = arch_get_dma_bitsize();
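
A note on the init_heap_pages() change above: contig_avail_pages() bounds the
node-heap setup to the run of pages carrying neither PGC_broken nor
PGC_allocated, so the bookkeeping for a newly seen node never spans a page
that was marked bad or pre-allocated across a live update. A minimal
standalone sketch of that leading-run scan, using stand-in flag bits and
names rather than Xen's real struct page_info:

#include <stdio.h>

/* Stand-in flag bits; the real PGC_broken/PGC_allocated are Xen's. */
#define PGC_BROKEN    (1u << 0)
#define PGC_ALLOCATED (1u << 1)

/* Length of the leading run of pages with neither flag set. */
static unsigned long contig_avail(const unsigned int *flags,
                                  unsigned long max_pages)
{
    unsigned long i;

    for ( i = 0; i < max_pages; i++ )
        if ( flags[i] & (PGC_BROKEN | PGC_ALLOCATED) )
            break;

    return i;
}

int main(void)
{
    /* Pages 0-2 usable, page 3 broken, page 4 pre-allocated. */
    unsigned int flags[] = { 0, 0, 0, PGC_BROKEN, PGC_ALLOCATED };

    printf("%lu contiguous available pages\n",
           contig_avail(flags, sizeof(flags) / sizeof(flags[0])));
    return 0;
}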
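
Similarly, mark_bad_pages() reuses the existing "badpage=" command-line
syntax: a comma-separated list of page frame numbers, each optionally
extended to a range with "-". The sketch below mirrors that parsing loop in
userspace, assuming standard strtoul() in place of Xen's simple_strtoul();
parse_badpage() and mark_broken() are hypothetical names, the mfn_valid()
bound is omitted, and the range is treated as inclusive for illustration:

#include <stdio.h>
#include <stdlib.h>

/* Stand-in for marking one page frame number as broken. */
static void mark_broken(unsigned long pfn)
{
    printf("marking pfn %#lx as broken\n", pfn);
}

/* Parse "<pfn>[-<pfn>][,<pfn>[-<pfn>]]...", stopping at the first
 * separator that is neither ',' nor end-of-string. */
static void parse_badpage(const char *p)
{
    while ( *p != '\0' )
    {
        char *end;
        unsigned long spfn = strtoul(p, &end, 0);
        unsigned long epfn = spfn;

        p = end;
        if ( *p == '-' )
        {
            epfn = strtoul(++p, &end, 0);
            p = end;
            if ( epfn < spfn )   /* clamp inverted ranges, as above */
                epfn = spfn;
        }

        if ( *p == ',' )
            p++;
        else if ( *p != '\0' )
            break;               /* malformed list: stop parsing */

        for ( unsigned long pfn = spfn; pfn <= epfn; pfn++ )
            mark_broken(pfn);
    }
}

int main(void)
{
    /* e.g. booting with "badpage=0x3e8,0x1000-0x1003" */
    parse_badpage("0x3e8,0x1000-0x1003");
    return 0;
}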