From patchwork Fri Nov 30 17:59:20 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: David Hildenbrand <david@redhat.com>
X-Patchwork-Id: 10706961
From: David Hildenbrand <david@redhat.com>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, linux-ia64@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, linux-s390@vger.kernel.org,
    linux-sh@vger.kernel.org, linux-acpi@vger.kernel.org,
    devel@linuxdriverproject.org, xen-devel@lists.xenproject.org,
    x86@kernel.org
Shutemov" , Oscar Salvador , Nicholas Piggin , Stephen Rothwell , Christophe Leroy , =?utf-8?q?Jonathan_Neusch=C3=A4?= =?utf-8?q?fer?= , Mauricio Faria de Oliveira , Vasily Gorbik , Arun KS , Rob Herring , Pavel Tatashin , "mike.travis@hpe.com" , Joonsoo Kim , Wei Yang , Logan Gunthorpe , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , =?utf-8?q?Jan_H=2E_S?= =?utf-8?q?ch=C3=B6nherr?= , Dave Jiang , Matthew Wilcox , Mathieu Malaterre Subject: [PATCH RFCv2 2/4] mm/memory_hotplug: Replace "bool want_memblock" by "int type" Date: Fri, 30 Nov 2018 18:59:20 +0100 Message-Id: <20181130175922.10425-3-david@redhat.com> In-Reply-To: <20181130175922.10425-1-david@redhat.com> References: <20181130175922.10425-1-david@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.16 (mx1.redhat.com [10.5.110.25]); Fri, 30 Nov 2018 18:00:14 +0000 (UTC) X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: X-Virus-Scanned: ClamAV using ClamSMTP Let's pass a memory block type instead. Pass "MEMORY_BLOCK_NONE" for device memory and for now "MEMORY_BLOCK_UNSPECIFIED" for anything else. No functional change. Cc: Tony Luck Cc: Fenghua Yu Cc: Benjamin Herrenschmidt Cc: Paul Mackerras Cc: Michael Ellerman Cc: Martin Schwidefsky Cc: Heiko Carstens Cc: Yoshinori Sato Cc: Rich Felker Cc: Dave Hansen Cc: Andy Lutomirski Cc: Peter Zijlstra Cc: Thomas Gleixner Cc: Ingo Molnar Cc: Borislav Petkov Cc: "H. Peter Anvin" Cc: x86@kernel.org Cc: Greg Kroah-Hartman Cc: "Rafael J. Wysocki" Cc: Andrew Morton Cc: Mike Rapoport Cc: Michal Hocko Cc: Dan Williams Cc: "Kirill A. Shutemov" Cc: Oscar Salvador Cc: Nicholas Piggin Cc: Stephen Rothwell Cc: Christophe Leroy Cc: "Jonathan Neuschäfer" Cc: Mauricio Faria de Oliveira Cc: Vasily Gorbik Cc: Arun KS Cc: Rob Herring Cc: Pavel Tatashin Cc: "mike.travis@hpe.com" Cc: Joonsoo Kim Cc: Wei Yang Cc: Logan Gunthorpe Cc: "Jérôme Glisse" Cc: "Jan H. 
Schönherr" Cc: Dave Jiang Cc: Matthew Wilcox Cc: Mathieu Malaterre Signed-off-by: David Hildenbrand --- arch/ia64/mm/init.c | 4 ++-- arch/powerpc/mm/mem.c | 4 ++-- arch/s390/mm/init.c | 4 ++-- arch/sh/mm/init.c | 4 ++-- arch/x86/mm/init_32.c | 4 ++-- arch/x86/mm/init_64.c | 8 ++++---- drivers/base/memory.c | 11 +++++++---- include/linux/memory.h | 2 +- include/linux/memory_hotplug.h | 12 ++++++------ kernel/memremap.c | 6 ++++-- mm/memory_hotplug.c | 16 ++++++++-------- 11 files changed, 40 insertions(+), 35 deletions(-) diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c index 904fe55e10fc..408635d2902f 100644 --- a/arch/ia64/mm/init.c +++ b/arch/ia64/mm/init.c @@ -646,13 +646,13 @@ mem_init (void) #ifdef CONFIG_MEMORY_HOTPLUG int arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap, - bool want_memblock) + int type) { unsigned long start_pfn = start >> PAGE_SHIFT; unsigned long nr_pages = size >> PAGE_SHIFT; int ret; - ret = __add_pages(nid, start_pfn, nr_pages, altmap, want_memblock); + ret = __add_pages(nid, start_pfn, nr_pages, altmap, type); if (ret) printk("%s: Problem encountered in __add_pages() as ret=%d\n", __func__, ret); diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c index b3c9ee5c4f78..e394637da270 100644 --- a/arch/powerpc/mm/mem.c +++ b/arch/powerpc/mm/mem.c @@ -118,7 +118,7 @@ int __weak remove_section_mapping(unsigned long start, unsigned long end) } int __meminit arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap, - bool want_memblock) + int type) { unsigned long start_pfn = start >> PAGE_SHIFT; unsigned long nr_pages = size >> PAGE_SHIFT; @@ -135,7 +135,7 @@ int __meminit arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap * } flush_inval_dcache_range(start, start + size); - return __add_pages(nid, start_pfn, nr_pages, altmap, want_memblock); + return __add_pages(nid, start_pfn, nr_pages, altmap, type); } #ifdef CONFIG_MEMORY_HOTREMOVE diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c index 3e82f66d5c61..ba2c56328e6d 100644 --- a/arch/s390/mm/init.c +++ b/arch/s390/mm/init.c @@ -225,7 +225,7 @@ device_initcall(s390_cma_mem_init); #endif /* CONFIG_CMA */ int arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap, - bool want_memblock) + int type) { unsigned long start_pfn = PFN_DOWN(start); unsigned long size_pages = PFN_DOWN(size); @@ -235,7 +235,7 @@ int arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap, if (rc) return rc; - rc = __add_pages(nid, start_pfn, size_pages, altmap, want_memblock); + rc = __add_pages(nid, start_pfn, size_pages, altmap, type); if (rc) vmem_remove_mapping(start, size); return rc; diff --git a/arch/sh/mm/init.c b/arch/sh/mm/init.c index 1a483a008872..5fbb8724e0f2 100644 --- a/arch/sh/mm/init.c +++ b/arch/sh/mm/init.c @@ -419,14 +419,14 @@ void free_initrd_mem(unsigned long start, unsigned long end) #ifdef CONFIG_MEMORY_HOTPLUG int arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap, - bool want_memblock) + int type) { unsigned long start_pfn = PFN_DOWN(start); unsigned long nr_pages = size >> PAGE_SHIFT; int ret; /* We only have ZONE_NORMAL, so this is easy.. 
-	ret = __add_pages(nid, start_pfn, nr_pages, altmap, want_memblock);
+	ret = __add_pages(nid, start_pfn, nr_pages, altmap, type);
 	if (unlikely(ret))
 		printk("%s: Failed, __add_pages() == %d\n", __func__, ret);
 
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index 0b8c7b0033d2..41e409b29d2b 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -851,12 +851,12 @@ void __init mem_init(void)
 
 #ifdef CONFIG_MEMORY_HOTPLUG
 int arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap,
-		bool want_memblock)
+		int type)
 {
 	unsigned long start_pfn = start >> PAGE_SHIFT;
 	unsigned long nr_pages = size >> PAGE_SHIFT;
 
-	return __add_pages(nid, start_pfn, nr_pages, altmap, want_memblock);
+	return __add_pages(nid, start_pfn, nr_pages, altmap, type);
 }
 
 #ifdef CONFIG_MEMORY_HOTREMOVE
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index f80d98381a97..5b4f3dcd44cf 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -783,11 +783,11 @@ static void update_end_of_memory_vars(u64 start, u64 size)
 }
 
 int add_pages(int nid, unsigned long start_pfn, unsigned long nr_pages,
-	      struct vmem_altmap *altmap, bool want_memblock)
+	      struct vmem_altmap *altmap, int type)
 {
 	int ret;
 
-	ret = __add_pages(nid, start_pfn, nr_pages, altmap, want_memblock);
+	ret = __add_pages(nid, start_pfn, nr_pages, altmap, type);
 	WARN_ON_ONCE(ret);
 
 	/* update max_pfn, max_low_pfn and high_memory */
@@ -798,14 +798,14 @@ int add_pages(int nid, unsigned long start_pfn, unsigned long nr_pages,
 }
 
 int arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap,
-		bool want_memblock)
+		int type)
 {
 	unsigned long start_pfn = start >> PAGE_SHIFT;
 	unsigned long nr_pages = size >> PAGE_SHIFT;
 
 	init_memory_mapping(start, start + size);
 
-	return add_pages(nid, start_pfn, nr_pages, altmap, want_memblock);
+	return add_pages(nid, start_pfn, nr_pages, altmap, type);
 }
 
 #define PAGE_INUSE 0xFD
diff --git a/drivers/base/memory.c b/drivers/base/memory.c
index 17f2985c07c5..c42300082c88 100644
--- a/drivers/base/memory.c
+++ b/drivers/base/memory.c
@@ -741,7 +741,7 @@ static int add_memory_block(int base_section_nr)
  * need an interface for the VM to add new memory regions,
  * but without onlining it.
  */
-int hotplug_memory_register(int nid, struct mem_section *section)
+int hotplug_memory_register(int nid, struct mem_section *section, int type)
 {
 	int ret = 0;
 	struct memory_block *mem;
@@ -750,11 +750,14 @@ int hotplug_memory_register(int nid, struct mem_section *section)
 
 	mem = find_memory_block(section);
 	if (mem) {
-		mem->section_count++;
+		/* make sure the type matches */
+		if (mem->type == type)
+			mem->section_count++;
+		else
+			ret = -EINVAL;
 		put_device(&mem->dev);
 	} else {
-		ret = init_memory_block(&mem, section, MEM_OFFLINE,
-					MEMORY_BLOCK_UNSPECIFIED);
+		ret = init_memory_block(&mem, section, MEM_OFFLINE, type);
 		if (ret)
 			goto out;
 		mem->section_count++;
diff --git a/include/linux/memory.h b/include/linux/memory.h
index 06268e96e0da..9f39ef41e6d2 100644
--- a/include/linux/memory.h
+++ b/include/linux/memory.h
@@ -138,7 +138,7 @@ extern int register_memory_notifier(struct notifier_block *nb);
 extern void unregister_memory_notifier(struct notifier_block *nb);
 extern int register_memory_isolate_notifier(struct notifier_block *nb);
 extern void unregister_memory_isolate_notifier(struct notifier_block *nb);
-int hotplug_memory_register(int nid, struct mem_section *section);
+int hotplug_memory_register(int nid, struct mem_section *section, int type);
 #ifdef CONFIG_MEMORY_HOTREMOVE
 extern int unregister_memory_section(int nid, struct mem_section *);
 #endif
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 5493d3fa0c7f..667a37aa9a3c 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -117,18 +117,18 @@ extern void shrink_zone(struct zone *zone, unsigned long start_pfn,
 
 /* reasonably generic interface to expand the physical pages */
 extern int __add_pages(int nid, unsigned long start_pfn, unsigned long nr_pages,
-		struct vmem_altmap *altmap, bool want_memblock);
+		struct vmem_altmap *altmap, int type);
 
 #ifndef CONFIG_ARCH_HAS_ADD_PAGES
 static inline int add_pages(int nid, unsigned long start_pfn,
-		unsigned long nr_pages, struct vmem_altmap *altmap,
-		bool want_memblock)
+		unsigned long nr_pages, struct vmem_altmap *altmap,
+		int type)
 {
-	return __add_pages(nid, start_pfn, nr_pages, altmap, want_memblock);
+	return __add_pages(nid, start_pfn, nr_pages, altmap, type);
 }
 #else /* ARCH_HAS_ADD_PAGES */
 int add_pages(int nid, unsigned long start_pfn, unsigned long nr_pages,
-	      struct vmem_altmap *altmap, bool want_memblock);
+	      struct vmem_altmap *altmap, int type);
 #endif /* ARCH_HAS_ADD_PAGES */
 
 #ifdef CONFIG_NUMA
@@ -330,7 +330,7 @@ extern int __add_memory(int nid, u64 start, u64 size);
 extern int add_memory(int nid, u64 start, u64 size);
 extern int add_memory_resource(int nid, struct resource *resource);
 extern int arch_add_memory(int nid, u64 start, u64 size,
-		struct vmem_altmap *altmap, bool want_memblock);
+		struct vmem_altmap *altmap, int type);
 extern void move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn,
 		unsigned long nr_pages, struct vmem_altmap *altmap);
 extern int offline_pages(unsigned long start_pfn, unsigned long nr_pages);
diff --git a/kernel/memremap.c b/kernel/memremap.c
index 66cbf334203b..422e4e779208 100644
--- a/kernel/memremap.c
+++ b/kernel/memremap.c
@@ -4,6 +4,7 @@
 #include <linux/device.h>
 #include <linux/types.h>
 #include <linux/pfn_t.h>
+#include <linux/memory.h>
 #include <linux/io.h>
 #include <linux/kasan.h>
 #include <linux/mm.h>
@@ -215,7 +216,8 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
 	 */
 	if (pgmap->type == MEMORY_DEVICE_PRIVATE) {
 		error = add_pages(nid, align_start >> PAGE_SHIFT,
-				align_size >> PAGE_SHIFT, NULL, false);
+				align_size >> PAGE_SHIFT, NULL,
+				MEMORY_BLOCK_NONE);
 	} else {
 		error = kasan_add_zero_shadow(__va(align_start), align_size);
 		if (error) {
@@ -224,7 +226,7 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
 		}
 
 		error = arch_add_memory(nid, align_start, align_size, altmap,
-				false);
+				MEMORY_BLOCK_NONE);
 	}
 
 	if (!error) {
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 16c600771298..7246faa44488 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -246,7 +246,7 @@ void __init register_page_bootmem_info_node(struct pglist_data *pgdat)
 #endif /* CONFIG_HAVE_BOOTMEM_INFO_NODE */
 
 static int __meminit __add_section(int nid, unsigned long phys_start_pfn,
-		struct vmem_altmap *altmap, bool want_memblock)
+		struct vmem_altmap *altmap, int type)
 {
 	int ret;
 
@@ -257,10 +257,11 @@ static int __meminit __add_section(int nid, unsigned long phys_start_pfn,
 	if (ret < 0)
 		return ret;
 
-	if (!want_memblock)
+	if (type == MEMORY_BLOCK_NONE)
 		return 0;
 
-	return hotplug_memory_register(nid, __pfn_to_section(phys_start_pfn));
+	return hotplug_memory_register(nid, __pfn_to_section(phys_start_pfn),
+				       type);
 }
 
 /*
@@ -270,8 +271,8 @@ static int __meminit __add_section(int nid, unsigned long phys_start_pfn,
  * add the new pages.
  */
 int __ref __add_pages(int nid, unsigned long phys_start_pfn,
-		unsigned long nr_pages, struct vmem_altmap *altmap,
-		bool want_memblock)
+		unsigned long nr_pages, struct vmem_altmap *altmap,
+		int type)
 {
 	unsigned long i;
 	int err = 0;
@@ -295,8 +296,7 @@ int __ref __add_pages(int nid, unsigned long phys_start_pfn,
 	}
 
 	for (i = start_sec; i <= end_sec; i++) {
-		err = __add_section(nid, section_nr_to_pfn(i), altmap,
-				want_memblock);
+		err = __add_section(nid, section_nr_to_pfn(i), altmap, type);
 
 		/*
 		 * EEXIST is finally dealt with by ioresource collision
@@ -1100,7 +1100,7 @@ int __ref add_memory_resource(int nid, struct resource *res)
 	new_node = ret;
 
 	/* call arch's memory hotadd */
-	ret = arch_add_memory(nid, start, size, NULL, true);
+	ret = arch_add_memory(nid, start, size, NULL, MEMORY_BLOCK_UNSPECIFIED);
 	if (ret < 0)
 		goto error;
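
For readers skimming the diff, the net effect on __add_section() condenses
to the control flow below. This is a simplified sketch of the hunks above
(sparse section setup and error paths elided), not extra code introduced by
this patch:

	static int __meminit __add_section(int nid, unsigned long phys_start_pfn,
					   struct vmem_altmap *altmap, int type)
	{
		int ret;

		ret = sparse_add_one_section(...);	/* unchanged, args elided */
		if (ret < 0)
			return ret;

		/* MEMORY_BLOCK_NONE (device memory): no memory block device */
		if (type == MEMORY_BLOCK_NONE)
			return 0;

		/*
		 * Anything else registers a memory block of the given type.
		 * Note that re-registering a section into an existing block
		 * now fails with -EINVAL if the types disagree.
		 */
		return hotplug_memory_register(nid, __pfn_to_section(phys_start_pfn),
					       type);
	}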