From patchwork Tue Feb 18 18:16:38 2025
X-Patchwork-Submitter: Frank van der Linden <fvdl@google.com>
X-Patchwork-Id: 13980409
Message-ID: <20250218181656.207178-11-fvdl@google.com>
In-Reply-To: <20250218181656.207178-1-fvdl@google.com>
References: <20250218181656.207178-1-fvdl@google.com>
Date: Tue, 18 Feb 2025 18:16:38 +0000
From: Frank van der Linden <fvdl@google.com>
To: akpm@linux-foundation.org, muchun.song@linux.dev, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Cc: yuzhao@google.com, usamaarif642@gmail.com, joao.m.martins@oracle.com,
	roman.gushchin@linux.dev, Frank van der Linden <fvdl@google.com>
Subject: [PATCH v4 10/27] mm/sparse: allow for alternate vmemmap section init at boot

Add functions that are called just before the per-section memmap is
initialized and just before the memmap page structures are initialized.
They are called sparse_vmemmap_init_nid_early and
sparse_vmemmap_init_nid_late, respectively.
This allows mm subsystems to add calls that initialize the memmap and
page structures in a specific way, if using SPARSEMEM_VMEMMAP.
Specifically, hugetlb can pre-HVO bootmem-allocated pages that way, so
that no time and resources are wasted on allocating vmemmap pages, only
to free them later (and possibly unnecessarily running the system out of
memory in the process).

Refactor some code and export a few convenience functions for external
use.

In sparse_init_nid, skip any sections that are already initialized,
e.g. because they have already been initialized by
sparse_vmemmap_init_nid_early.

The hugetlb code to use these functions will be added in a later commit.

Export section_map_size, as any alternate memmap init code will want to
use it.

The config option to enable this is SPARSEMEM_VMEMMAP_PREINIT, which
depends on an architecture-specific option,
ARCH_WANT_SPARSEMEM_VMEMMAP_PREINIT. This is done because a section flag
is used, and the number of flags available is architecture-dependent
(see mmzone.h). Architectures can decide if there is room for the flag
and enable the option. Fortunately, as of right now, all architectures
that use sparse vmemmap do have room.
Signed-off-by: Frank van der Linden <fvdl@google.com>
---
 include/linux/mm.h     |  1 +
 include/linux/mmzone.h | 35 +++++++++++++++++
 mm/Kconfig             |  8 ++++
 mm/bootmem_info.c      |  4 +-
 mm/mm_init.c           |  3 ++
 mm/sparse-vmemmap.c    | 23 +++++++++++
 mm/sparse.c            | 87 ++++++++++++++++++++++++++++++++----------
 7 files changed, 139 insertions(+), 22 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 6dfc41b461af..df83653ed6e3 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3828,6 +3828,7 @@ static inline void print_vma_addr(char *prefix, unsigned long rip)
 #endif
 
 void *sparse_buffer_alloc(unsigned long size);
+unsigned long section_map_size(void);
 struct page * __populate_section_memmap(unsigned long pfn,
 		unsigned long nr_pages, int nid, struct vmem_altmap *altmap,
 		struct dev_pagemap *pgmap);
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 9540b41894da..44ecb2f90db4 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1933,6 +1933,9 @@ enum {
 	SECTION_IS_EARLY_BIT,
 #ifdef CONFIG_ZONE_DEVICE
 	SECTION_TAINT_ZONE_DEVICE_BIT,
+#endif
+#ifdef CONFIG_SPARSEMEM_VMEMMAP_PREINIT
+	SECTION_IS_VMEMMAP_PREINIT_BIT,
 #endif
 	SECTION_MAP_LAST_BIT,
 };
@@ -1944,6 +1947,9 @@ enum {
 #ifdef CONFIG_ZONE_DEVICE
 #define SECTION_TAINT_ZONE_DEVICE	BIT(SECTION_TAINT_ZONE_DEVICE_BIT)
 #endif
+#ifdef CONFIG_SPARSEMEM_VMEMMAP_PREINIT
+#define SECTION_IS_VMEMMAP_PREINIT	BIT(SECTION_IS_VMEMMAP_PREINIT_BIT)
+#endif
 #define SECTION_MAP_MASK	(~(BIT(SECTION_MAP_LAST_BIT) - 1))
 #define SECTION_NID_SHIFT	SECTION_MAP_LAST_BIT
 
@@ -1998,6 +2004,30 @@ static inline int online_device_section(struct mem_section *section)
 }
 #endif
 
+#ifdef CONFIG_SPARSEMEM_VMEMMAP_PREINIT
+static inline int preinited_vmemmap_section(struct mem_section *section)
+{
+	return (section &&
+		(section->section_mem_map & SECTION_IS_VMEMMAP_PREINIT));
+}
+
+void sparse_vmemmap_init_nid_early(int nid);
+void sparse_vmemmap_init_nid_late(int nid);
+
+#else
+static inline int preinited_vmemmap_section(struct mem_section *section)
+{
+	return 0;
+}
+static inline void sparse_vmemmap_init_nid_early(int nid)
+{
+}
+
+static inline void sparse_vmemmap_init_nid_late(int nid)
+{
+}
+#endif
+
 static inline int online_section_nr(unsigned long nr)
 {
 	return online_section(__nr_to_section(nr));
@@ -2035,6 +2065,9 @@ static inline int pfn_section_valid(struct mem_section *ms, unsigned long pfn)
 }
 #endif
 
+void sparse_init_early_section(int nid, struct page *map, unsigned long pnum,
+		unsigned long flags);
+
 #ifndef CONFIG_HAVE_ARCH_PFN_VALID
 /**
  * pfn_valid - check if there is a valid memory map entry for a PFN
@@ -2116,6 +2149,8 @@ void sparse_init(void);
 #else
 #define sparse_init()	do {} while (0)
 #define sparse_index_init(_sec, _nid)  do {} while (0)
+#define sparse_vmemmap_init_nid_early(_nid, _use)  do {} while (0)
+#define sparse_vmemmap_init_nid_late(_nid) do {} while (0)
 #define pfn_in_present_section pfn_valid
 #define subsection_map_init(_pfn, _nr_pages) do {} while (0)
 #endif /* CONFIG_SPARSEMEM */
diff --git a/mm/Kconfig b/mm/Kconfig
index 1b501db06417..f984dd928ce7 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -489,6 +489,14 @@ config SPARSEMEM_VMEMMAP
 	  SPARSEMEM_VMEMMAP uses a virtually mapped memmap to optimise
 	  pfn_to_page and page_to_pfn operations.  This is the most
 	  efficient option when sufficient kernel resources are available.
+
+config ARCH_WANT_SPARSEMEM_VMEMMAP_PREINIT
+	bool
+
+config SPARSEMEM_VMEMMAP_PREINIT
+	bool "Early init of sparse memory virtual memmap"
+	depends on SPARSEMEM_VMEMMAP && ARCH_WANT_SPARSEMEM_VMEMMAP_PREINIT
+	default y
 #
 # Select this config option from the architecture Kconfig, if it is
 # preferred to enable the feature of HugeTLB/dev_dax vmemmap optimization.
diff --git a/mm/bootmem_info.c b/mm/bootmem_info.c
index 95f288169a38..b0e2a9fa641f 100644
--- a/mm/bootmem_info.c
+++ b/mm/bootmem_info.c
@@ -88,7 +88,9 @@ static void __init register_page_bootmem_info_section(unsigned long start_pfn)
 
 	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
 
-	register_page_bootmem_memmap(section_nr, memmap, PAGES_PER_SECTION);
+	if (!preinited_vmemmap_section(ms))
+		register_page_bootmem_memmap(section_nr, memmap,
+					     PAGES_PER_SECTION);
 
 	usage = ms->usage;
 	page = virt_to_page(usage);
diff --git a/mm/mm_init.c b/mm/mm_init.c
index d2dee53e95dd..9f1e41c3dde6 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -1862,6 +1862,9 @@ void __init free_area_init(unsigned long *max_zone_pfn)
 		}
 	}
 
+	for_each_node_state(nid, N_MEMORY)
+		sparse_vmemmap_init_nid_late(nid);
+
 	calc_nr_kernel_pages();
 
 	memmap_init();
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 3287ebadd167..8751c46c35e4 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -470,3 +470,26 @@ struct page * __meminit __populate_section_memmap(unsigned long pfn,
 
 	return pfn_to_page(pfn);
 }
+
+#ifdef CONFIG_SPARSEMEM_VMEMMAP_PREINIT
+/*
+ * This is called just before initializing sections for a NUMA node.
+ * Any special initialization that needs to be done before the
+ * generic initialization can be done from here. Sections that
+ * are initialized in hooks called from here will be skipped by
+ * the generic initialization.
+ */
+void __init sparse_vmemmap_init_nid_early(int nid)
+{
+}
+
+/*
+ * This is called just before the initialization of page structures
+ * through memmap_init. Zones are now initialized, so any work that
+ * needs to be done that needs zone information can be done from
+ * here.
+ */
+void __init sparse_vmemmap_init_nid_late(int nid)
+{
+}
+#endif
diff --git a/mm/sparse.c b/mm/sparse.c
index 133b033d0cba..ee0234a77c7f 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -408,13 +408,13 @@ static void __init check_usemap_section_nr(int nid,
 #endif /* CONFIG_MEMORY_HOTREMOVE */
 
 #ifdef CONFIG_SPARSEMEM_VMEMMAP
-static unsigned long __init section_map_size(void)
+unsigned long __init section_map_size(void)
 {
 	return ALIGN(sizeof(struct page) * PAGES_PER_SECTION, PMD_SIZE);
 }
 
 #else
-static unsigned long __init section_map_size(void)
+unsigned long __init section_map_size(void)
 {
 	return PAGE_ALIGN(sizeof(struct page) * PAGES_PER_SECTION);
 }
@@ -495,6 +495,44 @@ void __weak __meminit vmemmap_populate_print_last(void)
 {
 }
 
+static void *sparse_usagebuf __meminitdata;
+static void *sparse_usagebuf_end __meminitdata;
+
+/*
+ * Helper function that is used for generic section initialization, and
+ * can also be used by any hooks added above.
+ */
+void __init sparse_init_early_section(int nid, struct page *map,
+		unsigned long pnum, unsigned long flags)
+{
+	BUG_ON(!sparse_usagebuf || sparse_usagebuf >= sparse_usagebuf_end);
+	check_usemap_section_nr(nid, sparse_usagebuf);
+	sparse_init_one_section(__nr_to_section(pnum), pnum, map,
+			sparse_usagebuf, SECTION_IS_EARLY | flags);
+	sparse_usagebuf = (void *)sparse_usagebuf + mem_section_usage_size();
+}
+
+static int __init sparse_usage_init(int nid, unsigned long map_count)
+{
+	unsigned long size;
+
+	size = mem_section_usage_size() * map_count;
+	sparse_usagebuf = sparse_early_usemaps_alloc_pgdat_section(
+			NODE_DATA(nid), size);
+	if (!sparse_usagebuf) {
+		sparse_usagebuf_end = NULL;
+		return -ENOMEM;
+	}
+
+	sparse_usagebuf_end = sparse_usagebuf + size;
+	return 0;
+}
+
+static void __init sparse_usage_fini(void)
+{
+	sparse_usagebuf = sparse_usagebuf_end = NULL;
+}
+
 /*
  * Initialize sparse on a specific node. The node spans [pnum_begin, pnum_end)
  * And number of present sections in this node is map_count.
@@ -503,47 +541,54 @@ static void __init sparse_init_nid(int nid, unsigned long pnum_begin,
 				   unsigned long pnum_end,
 				   unsigned long map_count)
 {
-	struct mem_section_usage *usage;
 	unsigned long pnum;
 	struct page *map;
+	struct mem_section *ms;
 
-	usage = sparse_early_usemaps_alloc_pgdat_section(NODE_DATA(nid),
-			mem_section_usage_size() * map_count);
-	if (!usage) {
+	if (sparse_usage_init(nid, map_count)) {
 		pr_err("%s: node[%d] usemap allocation failed", __func__, nid);
 		goto failed;
 	}
+
 	sparse_buffer_init(map_count * section_map_size(), nid);
+
+	sparse_vmemmap_init_nid_early(nid);
+
 	for_each_present_section_nr(pnum_begin, pnum) {
 		unsigned long pfn = section_nr_to_pfn(pnum);
 
 		if (pnum >= pnum_end)
 			break;
 
-		map = __populate_section_memmap(pfn, PAGES_PER_SECTION,
-				nid, NULL, NULL);
-		if (!map) {
-			pr_err("%s: node[%d] memory map backing failed. Some memory will not be available.",
-			       __func__, nid);
-			pnum_begin = pnum;
-			sparse_buffer_fini();
-			goto failed;
+		ms = __nr_to_section(pnum);
+		if (!preinited_vmemmap_section(ms)) {
+			map = __populate_section_memmap(pfn, PAGES_PER_SECTION,
+					nid, NULL, NULL);
+			if (!map) {
+				pr_err("%s: node[%d] memory map backing failed. Some memory will not be available.",
+				       __func__, nid);
+				pnum_begin = pnum;
+				sparse_usage_fini();
+				sparse_buffer_fini();
+				goto failed;
+			}
+			sparse_init_early_section(nid, map, pnum, 0);
 		}
-		check_usemap_section_nr(nid, usage);
-		sparse_init_one_section(__nr_to_section(pnum), pnum, map, usage,
-				SECTION_IS_EARLY);
-		usage = (void *) usage + mem_section_usage_size();
 	}
+
+	sparse_usage_fini();
 	sparse_buffer_fini();
 	return;
 failed:
-	/* We failed to allocate, mark all the following pnums as not present */
+	/*
+	 * We failed to allocate, mark all the following pnums as not present,
+	 * except the ones already initialized earlier.
+	 */
 	for_each_present_section_nr(pnum_begin, pnum) {
-		struct mem_section *ms;
-
 		if (pnum >= pnum_end)
 			break;
 		ms = __nr_to_section(pnum);
-		ms->section_mem_map = 0;
+		if (!preinited_vmemmap_section(ms))
+			ms->section_mem_map = 0;
 	}
 }