From patchwork Sat Mar 11 00:38:53 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Doug Berger <opendmb@gmail.com>
X-Patchwork-Id: 13170544
From: Doug Berger <opendmb@gmail.com>
To: Andrew Morton
Cc: Jonathan Corbet, Mike Rapoport, Borislav Petkov, "Paul E. McKenney",
    Randy Dunlap, Neeraj Upadhyay, Damien Le Moal, Kim Phillips,
    "Steven Rostedt (Google)", Michal Hocko, Johannes Weiner,
    Vlastimil Babka, KOSAKI Motohiro, Mel Gorman, Muchun Song,
    Mike Kravetz, Florian Fainelli, David Hildenbrand, Oscar Salvador,
    Joonsoo Kim, Sukadev Bhattiprolu, Rik van Riel, Roman Gushchin,
    Minchan Kim, Chris Goldsworthy, Georgi Djakov,
    linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, Doug Berger <opendmb@gmail.com>
Subject: [PATCH v4 7/9] mm/dmb: Introduce Designated Movable Blocks
Date: Fri, 10 Mar 2023 16:38:53 -0800
Message-Id: <20230311003855.645684-8-opendmb@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230311003855.645684-1-opendmb@gmail.com>
References: <20230311003855.645684-1-opendmb@gmail.com>
MIME-Version: 1.0
Designated Movable Blocks are blocks of memory that are composed of
one or more adjacent memblocks that have the MEMBLOCK_MOVABLE
designation. These blocks must be reserved before receiving that
designation and will be located in ZONE_MOVABLE rather than in any
other zone that may span them.

Signed-off-by: Doug Berger <opendmb@gmail.com>
---
 include/linux/dmb.h | 29 ++++++++++++++
 mm/Kconfig          | 12 ++++++
 mm/Makefile         |  1 +
 mm/dmb.c            | 91 +++++++++++++++++++++++++++++++++++++++++++
 mm/memblock.c       |  6 ++-
 mm/page_alloc.c     | 95 ++++++++++++++++++++++++++++++++++++++-------
 6 files changed, 220 insertions(+), 14 deletions(-)
 create mode 100644 include/linux/dmb.h
 create mode 100644 mm/dmb.c

diff --git a/include/linux/dmb.h b/include/linux/dmb.h
new file mode 100644
index 000000000000..fa2976c0fa21
--- /dev/null
+++ b/include/linux/dmb.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __DMB_H__
+#define __DMB_H__
+
+#include <linux/memblock.h>
+
+/*
+ * Since the buddy -- especially pageblock merging and alloc_contig_range()
+ * -- can deal with only some pageblocks of a higher-order page being
+ * MIGRATE_MOVABLE, we can use pageblock_nr_pages as the minimum alignment.
+ */
+#define DMB_MIN_ALIGNMENT_PAGES pageblock_nr_pages
+#define DMB_MIN_ALIGNMENT_BYTES (PAGE_SIZE * DMB_MIN_ALIGNMENT_PAGES)
+
+enum {
+	DMB_DISJOINT = 0,
+	DMB_INTERSECTS,
+	DMB_MIXED,
+};
+
+struct dmb;
+
+extern int dmb_intersects(unsigned long spfn, unsigned long epfn);
+
+extern int dmb_reserve(phys_addr_t base, phys_addr_t size,
+		       struct dmb **res_dmb);
+extern void dmb_init_region(struct memblock_region *region);
+
+#endif

diff --git a/mm/Kconfig b/mm/Kconfig
index 4751031f3f05..85ac5f136487 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -913,6 +913,18 @@ config CMA_AREAS
 
 	  If unsure, leave the default value "7" in UMA and "19" in NUMA.
 
+config DMB_COUNT
+	int "Maximum count of Designated Movable Blocks"
+	default 19 if NUMA
+	default 7
+	help
+	  Designated Movable Blocks are blocks of memory that can be used
+	  by the page allocator exclusively for movable pages. They are
+	  managed in ZONE_MOVABLE but may overlap with other zones. This
+	  parameter sets the maximum number of DMBs in the system.
+
+	  If unsure, leave the default value "7" in UMA and "19" in NUMA.
+
 config MEM_SOFT_DIRTY
 	bool "Track memory changes"
 	depends on CHECKPOINT_RESTORE && HAVE_ARCH_SOFT_DIRTY && PROC_FS
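For reference, dmb_reserve() is meant to be called from early platform
setup code, after the block has already been set aside with the
memblock API. Below is a minimal sketch of such a caller; the function
name, base address, and size are hypothetical and only illustrate the
contract that the range must already be memblock-reserved and aligned
to DMB_MIN_ALIGNMENT_BYTES:

/*
 * Hypothetical early-boot caller of dmb_reserve(); not part of this
 * patch.  The base and size values are placeholders.
 */
#include <linux/memblock.h>
#include <linux/sizes.h>
#include <linux/dmb.h>

static struct dmb *example_dmb;

static int __init example_dmb_setup(void)
{
	phys_addr_t base = 0x40000000;	/* hypothetical placement */
	phys_addr_t size = SZ_64M;	/* assumed multiple of DMB_MIN_ALIGNMENT_BYTES */

	/* dmb_reserve() rejects ranges that are not already reserved */
	if (memblock_reserve(base, size))
		return -ENOMEM;

	return dmb_reserve(base, size, &example_dmb);
}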
diff --git a/mm/Makefile b/mm/Makefile
index 8e105e5b3e29..824be8fb11cd 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -67,6 +67,7 @@ obj-y += page-alloc.o
 obj-y += init-mm.o
 obj-y += memblock.o
 obj-y += $(memory-hotplug-y)
+obj-y += dmb.o
 
 ifdef CONFIG_MMU
 obj-$(CONFIG_ADVISE_SYSCALLS) += madvise.o

diff --git a/mm/dmb.c b/mm/dmb.c
new file mode 100644
index 000000000000..f6c4e2662e0f
--- /dev/null
+++ b/mm/dmb.c
@@ -0,0 +1,91 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Designated Movable Block
+ */
+
+#define pr_fmt(fmt) "dmb: " fmt
+
+#include <linux/dmb.h>
+
+struct dmb {
+	unsigned long start_pfn;
+	unsigned long end_pfn;
+};
+
+static struct dmb dmb_areas[CONFIG_DMB_COUNT];
+static unsigned int dmb_area_count;
+
+int dmb_intersects(unsigned long spfn, unsigned long epfn)
+{
+	int i;
+	struct dmb *dmb;
+
+	if (spfn >= epfn)
+		return DMB_DISJOINT;
+
+	for (i = 0; i < dmb_area_count; i++) {
+		dmb = &dmb_areas[i];
+		if (spfn >= dmb->end_pfn)
+			continue;
+		if (epfn <= dmb->start_pfn)
+			return DMB_DISJOINT;
+		if (spfn >= dmb->start_pfn && epfn <= dmb->end_pfn)
+			return DMB_INTERSECTS;
+		else
+			return DMB_MIXED;
+	}
+
+	return DMB_DISJOINT;
+}
+EXPORT_SYMBOL(dmb_intersects);
+
+int __init dmb_reserve(phys_addr_t base, phys_addr_t size,
+		       struct dmb **res_dmb)
+{
+	struct dmb *dmb;
+
+	/* Sanity checks */
+	if (!size || !memblock_is_region_reserved(base, size))
+		return -EINVAL;
+
+	/* ensure minimal alignment required by mm core */
+	if (!IS_ALIGNED(base | size, DMB_MIN_ALIGNMENT_BYTES))
+		return -EINVAL;
+
+	if (dmb_area_count == ARRAY_SIZE(dmb_areas)) {
+		pr_warn("Not enough slots for DMB reserved regions!\n");
+		return -ENOSPC;
+	}
+
+	/*
+	 * Each reserved area must be initialised later, when more kernel
+	 * subsystems (like slab allocator) are available.
+	 */
+	dmb = &dmb_areas[dmb_area_count++];
+
+	dmb->start_pfn = PFN_DOWN(base);
+	dmb->end_pfn = PFN_DOWN(base + size);
+	if (res_dmb)
+		*res_dmb = dmb;
+
+	memblock_mark_movable(base, size);
+	return 0;
+}
+
+void __init dmb_init_region(struct memblock_region *region)
+{
+	unsigned long pfn;
+	int i;
+
+	for (pfn = memblock_region_memory_base_pfn(region);
+	     pfn < memblock_region_memory_end_pfn(region);
+	     pfn += pageblock_nr_pages) {
+		struct page *page = pfn_to_page(pfn);
+
+		for (i = 0; i < pageblock_nr_pages; i++)
+			set_page_zone(page + i, ZONE_MOVABLE);
+
+		/* free reserved pageblocks to page allocator */
+		init_reserved_pageblock(page);
+	}
+}
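The three dmb_intersects() return values distinguish ranges that lie
entirely outside, entirely inside, or partially inside a registered
block. For a single block spanning PFNs [0x1000, 0x2000), the walk
above classifies as follows (values illustrative); note that the early
DMB_DISJOINT return when a block starts at or above epfn assumes the
blocks are registered in ascending address order:

	dmb_intersects(0x0000, 0x1000);	/* DMB_DISJOINT: ends where the block begins */
	dmb_intersects(0x2000, 0x3000);	/* DMB_DISJOINT: starts where the block ends */
	dmb_intersects(0x1400, 0x1800);	/* DMB_INTERSECTS: fully inside the block */
	dmb_intersects(0x0800, 0x1800);	/* DMB_MIXED: straddles the block start */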
diff --git a/mm/memblock.c b/mm/memblock.c
index 794a099ec3e2..3db06288a5c0 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -16,6 +16,7 @@
 #include <linux/kmemleak.h>
 #include <linux/seq_file.h>
 #include <linux/memblock.h>
+#include <linux/dmb.h>
 
 #include <asm/sections.h>
 #include <linux/io.h>
@@ -2103,13 +2104,16 @@ static void __init memmap_init_reserved_pages(void)
 	for_each_reserved_mem_range(i, &start, &end)
 		reserve_bootmem_region(start, end);
 
-	/* and also treat struct pages for the NOMAP regions as PageReserved */
 	for_each_mem_region(region) {
+		/* treat struct pages for the NOMAP regions as PageReserved */
 		if (memblock_is_nomap(region)) {
 			start = region->base;
 			end = start + region->size;
 			reserve_bootmem_region(start, end);
 		}
+		/* move Designated Movable Block pages to ZONE_MOVABLE */
+		if (memblock_is_movable(region))
+			dmb_init_region(region);
 	}
 }

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index da1af678995b..26846a9a9fc4 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -76,6 +76,7 @@
 #include <linux/khugepaged.h>
 #include <linux/buffer_head.h>
 #include <linux/delayacct.h>
+#include <linux/dmb.h>
 #include <asm/sections.h>
 #include <asm/tlbflush.h>
 #include <asm/div64.h>
@@ -414,6 +415,8 @@ static unsigned long required_kernelcore __initdata;
 static unsigned long required_kernelcore_percent __initdata;
 static unsigned long required_movablecore __initdata;
 static unsigned long required_movablecore_percent __initdata;
+static unsigned long min_dmb_pfn[MAX_NUMNODES] __initdata;
+static unsigned long max_dmb_pfn[MAX_NUMNODES] __initdata;
 static unsigned long zone_movable_pfn[MAX_NUMNODES] __initdata;
 bool mirrored_kernelcore __initdata_memblock;
@@ -2171,7 +2174,7 @@ static int __init deferred_init_memmap(void *data)
 	}
 zone_empty:
 	/* Sanity check that the next zone really is unpopulated */
-	WARN_ON(++zid < MAX_NR_ZONES && populated_zone(++zone));
+	WARN_ON(++zid < ZONE_MOVABLE && populated_zone(++zone));
 
 	pr_info("node %d deferred pages initialised in %ums\n",
 		pgdat->node_id, jiffies_to_msecs(jiffies - start));
@@ -7022,6 +7025,10 @@ static void __init memmap_init_zone_range(struct zone *zone,
 	unsigned long zone_end_pfn = zone_start_pfn + zone->spanned_pages;
 	int nid = zone_to_nid(zone), zone_id = zone_idx(zone);
 
+	/* Skip overlap of ZONE_MOVABLE */
+	if (zone_id == ZONE_MOVABLE && zone_start_pfn < *hole_pfn)
+		zone_start_pfn = *hole_pfn;
+
 	start_pfn = clamp(start_pfn, zone_start_pfn, zone_end_pfn);
 	end_pfn = clamp(end_pfn, zone_start_pfn, zone_end_pfn);
@@ -7482,6 +7489,12 @@ static unsigned long __init zone_spanned_pages_in_node(int nid,
 					node_start_pfn, node_end_pfn,
 					zone_start_pfn, zone_end_pfn);
 
+	if (zone_type == ZONE_MOVABLE && max_dmb_pfn[nid]) {
+		if (*zone_start_pfn == *zone_end_pfn)
+			*zone_end_pfn = max_dmb_pfn[nid];
+		*zone_start_pfn = min(*zone_start_pfn, min_dmb_pfn[nid]);
+	}
+
 	/* Check that this node has pages within the zone's required range */
 	if (*zone_end_pfn < node_start_pfn || *zone_start_pfn > node_end_pfn)
 		return 0;
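The effect of the span adjustment is easiest to see with concrete
numbers. Assume (all PFN values hypothetical) a node spanning
[0x00000, 0x80000) with zone_movable_pfn[nid] = 0x60000 and a single
DMB at [0x10000, 0x10400):

/*
 * Hypothetical layout:
 *
 *   node span:        [0x00000, 0x80000)
 *   zone_movable_pfn: 0x60000
 *   DMB:              [0x10000, 0x10400)
 *
 * zone_spanned_pages_in_node(ZONE_MOVABLE) first computes the span
 * [0x60000, 0x80000), then pulls the start down to
 * min(0x60000, min_dmb_pfn[nid]) so that ZONE_MOVABLE spans
 * [0x10000, 0x80000) and covers the block.  The zone_absent hunks
 * below then charge the non-DMB pages in [0x10400, 0x60000) as absent
 * in ZONE_MOVABLE, and the DMB pages in [0x10000, 0x10400) as absent
 * in the kernel zone that spans them, so no page is counted twice.
 */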
@@ -7550,12 +7563,21 @@ static unsigned long __init zone_absent_pages_in_node(int nid,
 					   &zone_start_pfn, &zone_end_pfn);
 	nr_absent = __absent_pages_in_range(nid, zone_start_pfn, zone_end_pfn);
 
+	if (zone_type == ZONE_MOVABLE && max_dmb_pfn[nid]) {
+		if (zone_start_pfn == zone_end_pfn)
+			zone_end_pfn = max_dmb_pfn[nid];
+		else
+			zone_end_pfn = zone_movable_pfn[nid];
+		zone_start_pfn = min(zone_start_pfn, min_dmb_pfn[nid]);
+		nr_absent += zone_end_pfn - zone_start_pfn;
+	}
+
 	/*
 	 * ZONE_MOVABLE handling.
-	 * Treat pages to be ZONE_MOVABLE in ZONE_NORMAL as absent pages
+	 * Treat pages to be ZONE_MOVABLE in other zones as absent pages
 	 * and vice versa.
 	 */
-	if (mirrored_kernelcore && zone_movable_pfn[nid]) {
+	if (zone_movable_pfn[nid]) {
 		unsigned long start_pfn, end_pfn;
 		struct memblock_region *r;
@@ -7565,6 +7587,19 @@ static unsigned long __init zone_absent_pages_in_node(int nid,
 			end_pfn = clamp(memblock_region_memory_end_pfn(r),
 					zone_start_pfn, zone_end_pfn);
 
+			if (memblock_is_movable(r)) {
+				if (zone_type != ZONE_MOVABLE) {
+					nr_absent += end_pfn - start_pfn;
+					continue;
+				}
+
+				nr_absent -= end_pfn - start_pfn;
+				continue;
+			}
+
+			if (!mirrored_kernelcore)
+				continue;
+
 			if (zone_type == ZONE_MOVABLE &&
 			    memblock_is_mirror(r))
 				nr_absent += end_pfn - start_pfn;
@@ -7584,18 +7619,27 @@ static void __init calculate_node_totalpages(struct pglist_data *pgdat,
 {
 	unsigned long totalpages = 0;
 	enum zone_type i;
+	int nid = pgdat->node_id;
+
+	/*
+	 * If Designated Movable Blocks are defined on this node, ensure that
+	 * zone_movable_pfn is also defined for this node.
+	 */
+	if (max_dmb_pfn[nid] && !zone_movable_pfn[nid])
+		zone_movable_pfn[nid] = min(node_end_pfn,
+			arch_zone_highest_possible_pfn[movable_zone]);
 
 	for (i = 0; i < MAX_NR_ZONES; i++) {
 		struct zone *zone = pgdat->node_zones + i;
 		unsigned long zone_start_pfn, zone_end_pfn;
 		unsigned long spanned, absent, size;
 
-		spanned = zone_spanned_pages_in_node(pgdat->node_id, i,
+		spanned = zone_spanned_pages_in_node(nid, i,
 						     node_start_pfn,
 						     node_end_pfn,
 						     &zone_start_pfn,
 						     &zone_end_pfn);
-		absent = zone_absent_pages_in_node(pgdat->node_id, i,
+		absent = zone_absent_pages_in_node(nid, i,
 						   node_start_pfn,
 						   node_end_pfn);
@@ -8047,15 +8091,27 @@ unsigned long __init node_map_pfn_alignment(void)
 static unsigned long __init early_calculate_totalpages(void)
 {
 	unsigned long totalpages = 0;
-	unsigned long start_pfn, end_pfn;
-	int i, nid;
+	struct memblock_region *r;
 
-	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, &nid) {
-		unsigned long pages = end_pfn - start_pfn;
+	for_each_mem_region(r) {
+		unsigned long start_pfn, end_pfn, pages;
+		int nid;
+
+		nid = memblock_get_region_node(r);
+		start_pfn = memblock_region_memory_base_pfn(r);
+		end_pfn = memblock_region_memory_end_pfn(r);
 
-		totalpages += pages;
-		if (pages)
+		pages = end_pfn - start_pfn;
+		if (pages) {
+			totalpages += pages;
 			node_set_state(nid, N_MEMORY);
+			if (memblock_is_movable(r)) {
+				if (start_pfn < min_dmb_pfn[nid])
+					min_dmb_pfn[nid] = start_pfn;
+				if (end_pfn > max_dmb_pfn[nid])
+					max_dmb_pfn[nid] = end_pfn;
+			}
+		}
 	}
 	return totalpages;
 }
@@ -8068,7 +8124,7 @@ static unsigned long __init early_calculate_totalpages(void)
  */
 static void __init find_zone_movable_pfns_for_nodes(void)
 {
-	int i, nid;
+	int nid;
 	unsigned long usable_startpfn;
 	unsigned long kernelcore_node, kernelcore_remaining;
 	/* save the state before borrow the nodemask */
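Because min_dmb_pfn[] is seeded with 0xff bytes (i.e. ULONG_MAX) and
max_dmb_pfn[] with zeroes -- see the free_area_init() hunk below --
the min/max updates above degenerate safely on nodes without any DMB.
A worked example with hypothetical regions:

/*
 * Hypothetical node 0 carrying two DMBs, [0x10000, 0x10400) and
 * [0x30000, 0x30400).  After early_calculate_totalpages():
 *
 *   min_dmb_pfn[0] == 0x10000	(seeded with ULONG_MAX)
 *   max_dmb_pfn[0] == 0x30400	(seeded with 0)
 *
 * A node with no DMB keeps max_dmb_pfn[] == 0, which disables all of
 * the ZONE_MOVABLE span and absent adjustments for that node.
 */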
@@ -8196,13 +8252,24 @@ static void __init find_zone_movable_pfns_for_nodes(void)
 		kernelcore_remaining = kernelcore_node;
 
 		/* Go through each range of PFNs within this node */
-		for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL) {
+		for_each_mem_region(r) {
 			unsigned long size_pages;
 
+			if (memblock_get_region_node(r) != nid)
+				continue;
+
+			start_pfn = memblock_region_memory_base_pfn(r);
+			end_pfn = memblock_region_memory_end_pfn(r);
 			start_pfn = max(start_pfn, zone_movable_pfn[nid]);
 			if (start_pfn >= end_pfn)
 				continue;
 
+			/* Skip over Designated Movable Blocks */
+			if (memblock_is_movable(r)) {
+				zone_movable_pfn[nid] = end_pfn;
+				continue;
+			}
+
 			/* Account for what is only usable for kernelcore */
 			if (start_pfn < usable_startpfn) {
 				unsigned long kernel_pages;
@@ -8351,6 +8418,8 @@ void __init free_area_init(unsigned long *max_zone_pfn)
 	}
 
 	/* Find the PFNs that ZONE_MOVABLE begins at in each node */
+	memset(min_dmb_pfn, 0xff, sizeof(min_dmb_pfn));
+	memset(max_dmb_pfn, 0, sizeof(max_dmb_pfn));
 	memset(zone_movable_pfn, 0, sizeof(zone_movable_pfn));
 	find_zone_movable_pfns_for_nodes();
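Once dmb_init_region() has released a block's pageblocks to the buddy
as ZONE_MOVABLE pages, a consumer can in principle reclaim the whole
block with the existing contiguous-allocation API, in the same way CMA
does. A minimal sketch, assuming CONFIG_CONTIG_ALLOC is enabled; the
function name is hypothetical and the PFN bounds would describe the
range previously passed to dmb_reserve():

/* Sketch only; not part of this patch. */
#include <linux/gfp.h>
#include <linux/mmzone.h>

static int example_claim_dmb(unsigned long start_pfn, unsigned long end_pfn)
{
	int ret;

	/* migrate any movable allocations out of the block first */
	ret = alloc_contig_range(start_pfn, end_pfn, MIGRATE_MOVABLE,
				 GFP_KERNEL);
	if (ret)
		return ret;

	/* ... pages [start_pfn, end_pfn) now belong to the caller ... */

	free_contig_range(start_pfn, end_pfn - start_pfn);
	return 0;
}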