From patchwork Mon Oct 15 15:30:33 2018
X-Patchwork-Submitter: Oscar Salvador
X-Patchwork-Id: 10642059
From: Oscar Salvador
To: akpm@linux-foundation.org
Cc: mhocko@suse.com, dan.j.williams@intel.com, yasu.isimatu@gmail.com,
    rppt@linux.vnet.ibm.com, malat@debian.org, linux-kernel@vger.kernel.org,
    pavel.tatashin@microsoft.com, jglisse@redhat.com,
    Jonathan.Cameron@huawei.com, rafael@kernel.org, david@redhat.com,
    dave.jiang@intel.com, linux-mm@kvack.org,
    alexander.h.duyck@linux.intel.com, Oscar Salvador
Subject: [PATCH 4/5] mm/memory_hotplug: Move zone/pages handling to offline stage
Date: Mon, 15 Oct 2018 17:30:33 +0200
Message-Id: <20181015153034.32203-5-osalvador@techadventures.net>
X-Mailer: git-send-email 2.13.6
In-Reply-To: <20181015153034.32203-1-osalvador@techadventures.net>
References: <20181015153034.32203-1-osalvador@techadventures.net>

From: Oscar Salvador

Currently, we decrement zone/node spanned_pages during the hot-remove
path.
The picture we have now is:

- hot-add memory:
  a) Allocate a new resource based on the hot-added memory
  b) Add memory sections for the hot-added memory

- online memory:
  c) Re-adjust zone/pgdat nr of pages (managed, spanned, present)
  d) Initialize the pages from the new memory range
  e) Online memory sections

- offline memory:
  f) Offline memory sections
  g) Re-adjust zone/pgdat nr of managed/present pages

- hot-remove memory:
  i) Re-adjust zone/pgdat nr of spanned pages
  j) Remove memory sections
  k) Release resource

Besides being inconsistent with the current code, this implies that we
can access stale pages if we never get to online that memory.
So we should move i) to the offline stage.
The hot-remove path should only care about memory sections and memory
blocks.

There is one particularity, though, and that is HMM/devm.
When memory is handled by HMM/devm, it is moved to ZONE_DEVICE by means
of move_pfn_range_to_zone(), but since this memory does not get onlined,
its sections do not get onlined either.
This is a problem because shrink_zone_pgdat_pages() will now look for
online sections, so we need to explicitly offline the sections before
calling it.
add_device_memory() takes care of onlining them, and del_device_memory()
offlines them.

Finally, shrink_zone_pgdat_pages() is moved to offline_pages(), so now
all zone/page handling is taken care of in the online/offline stages.

So now we will have:

- hot-add memory:
  a) Allocate a new resource based on the hot-added memory
  b) Add memory sections for the hot-added memory

- online memory:
  c) Re-adjust zone/pgdat nr of pages (managed, spanned, present)
  d) Initialize the pages from the new memory range
  e) Online memory sections

- offline memory:
  f) Offline memory sections
  g) Re-adjust zone/pgdat nr of managed/present/spanned pages

- hot-remove memory:
  i) Remove memory sections
  j) Release resource

Signed-off-by: Oscar Salvador
---
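A condensed sketch of the resulting split, for reviewers. This is
illustrative only and not part of the patch: the wrapper names
offline_stage() and hot_remove_stage() are invented for the summary,
while the real logic lives in __offline_pages() and __remove_pages()
as changed by the hunks below.

	/* Illustrative sketch only, not part of the patch. */

	/* Offline stage: all zone/pgdat accounting is done here now. */
	static void offline_stage(struct zone *zone, unsigned long valid_start,
				  unsigned long valid_end,
				  unsigned long offlined_pages)
	{
		/* Shrinks managed/present/spanned counters of zone and pgdat. */
		adjust_managed_page_count(pfn_to_page(valid_start), -offlined_pages);
		shrink_zone_pgdat_pages(zone, valid_start, valid_end, offlined_pages);
	}

	/* Hot-remove stage: only memory sections and the resource are left. */
	static int hot_remove_stage(int nid, unsigned long start_pfn,
				    unsigned long nr_pages,
				    struct vmem_altmap *altmap)
	{
		/* No zone lookup needed anymore; sections are removed by node id. */
		return __remove_pages(nid, start_pfn, nr_pages, altmap);
	}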
 arch/ia64/mm/init.c            |  4 +-
 arch/powerpc/mm/mem.c          | 11 +----
 arch/sh/mm/init.c              |  4 +-
 arch/x86/mm/init_32.c          |  4 +-
 arch/x86/mm/init_64.c          |  8 +---
 include/linux/memory_hotplug.h |  8 ++--
 mm/memory_hotplug.c            | 99 ++++++++++++++++++++----------------------
 mm/sparse.c                    |  6 +--
 8 files changed, 58 insertions(+), 86 deletions(-)

diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c
index 904fe55e10fc..a6b5f351620c 100644
--- a/arch/ia64/mm/init.c
+++ b/arch/ia64/mm/init.c
@@ -665,11 +665,9 @@ int arch_remove_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap)
 {
 	unsigned long start_pfn = start >> PAGE_SHIFT;
 	unsigned long nr_pages = size >> PAGE_SHIFT;
-	struct zone *zone;
 	int ret;
 
-	zone = page_zone(pfn_to_page(start_pfn));
-	ret = __remove_pages(zone, start_pfn, nr_pages, altmap);
+	ret = __remove_pages(nid, start_pfn, nr_pages, altmap);
 	if (ret)
 		pr_warn("%s: Problem encountered in __remove_pages() as"
 			" ret=%d\n", __func__, ret);
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 6db8f527babb..14b6c859bc04 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -144,18 +144,9 @@ int __meminit arch_remove_memory(int nid, u64 start, u64 size,
 {
 	unsigned long start_pfn = start >> PAGE_SHIFT;
 	unsigned long nr_pages = size >> PAGE_SHIFT;
-	struct page *page;
 	int ret;
 
-	/*
-	 * If we have an altmap then we need to skip over any reserved PFNs
-	 * when querying the zone.
-	 */
-	page = pfn_to_page(start_pfn);
-	if (altmap)
-		page += vmem_altmap_offset(altmap);
-
-	ret = __remove_pages(page_zone(page), start_pfn, nr_pages, altmap);
+	ret = __remove_pages(nid, start_pfn, nr_pages, altmap);
 	if (ret)
 		return ret;
diff --git a/arch/sh/mm/init.c b/arch/sh/mm/init.c
index a8e5c0e00fca..6e80a7a50f8b 100644
--- a/arch/sh/mm/init.c
+++ b/arch/sh/mm/init.c
@@ -447,11 +447,9 @@ int arch_remove_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap)
 {
 	unsigned long start_pfn = PFN_DOWN(start);
 	unsigned long nr_pages = size >> PAGE_SHIFT;
-	struct zone *zone;
 	int ret;
 
-	zone = page_zone(pfn_to_page(start_pfn));
-	ret = __remove_pages(zone, start_pfn, nr_pages, altmap);
+	ret = __remove_pages(nid, start_pfn, nr_pages, altmap);
 	if (unlikely(ret))
 		pr_warn("%s: Failed, __remove_pages() == %d\n", __func__,
 			ret);
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index 85c94f9a87f8..4719af142c10 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -864,10 +864,8 @@ int arch_remove_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap)
 {
 	unsigned long start_pfn = start >> PAGE_SHIFT;
 	unsigned long nr_pages = size >> PAGE_SHIFT;
-	struct zone *zone;
 
-	zone = page_zone(pfn_to_page(start_pfn));
-	return __remove_pages(zone, start_pfn, nr_pages, altmap);
+	return __remove_pages(nid, start_pfn, nr_pages, altmap);
 }
 #endif
 #endif
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 449958da97a4..5f73b6deb794 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1152,15 +1152,9 @@ int __ref arch_remove_memory(int nid, u64 start, u64 size,
 {
 	unsigned long start_pfn = start >> PAGE_SHIFT;
 	unsigned long nr_pages = size >> PAGE_SHIFT;
-	struct page *page = pfn_to_page(start_pfn);
-	struct zone *zone;
 	int ret;
 
-	/* With altmap the first mapped page is offset from @start */
-	if (altmap)
-		page += vmem_altmap_offset(altmap);
-	zone = page_zone(page);
-	ret = __remove_pages(zone, start_pfn, nr_pages, altmap);
+	ret = __remove_pages(nid, start_pfn, nr_pages, altmap);
 	WARN_ON_ONCE(ret);
 
 	kernel_physical_mapping_remove(start, start + size);
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index cf014d5edbb2..a1be5e76be14 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -109,8 +109,8 @@ static inline bool movable_node_is_enabled(void)
 #ifdef CONFIG_MEMORY_HOTREMOVE
 extern int arch_remove_memory(int nid, u64 start, u64 size,
 				struct vmem_altmap *altmap);
-extern int __remove_pages(struct zone *zone, unsigned long start_pfn,
-		unsigned long nr_pages, struct vmem_altmap *altmap);
+extern int __remove_pages(int nid, unsigned long start_pfn,
+		unsigned long nr_pages, struct vmem_altmap *altmap);
 
 #ifdef CONFIG_ZONE_DEVICE
 extern int del_device_memory(int nid, unsigned long start, unsigned long size,
@@ -346,8 +346,8 @@ extern int offline_pages(unsigned long start_pfn, unsigned long nr_pages);
 extern bool is_memblock_offlined(struct memory_block *mem);
 extern int sparse_add_one_section(struct pglist_data *pgdat,
 		unsigned long start_pfn, struct vmem_altmap *altmap);
-extern void sparse_remove_one_section(struct zone *zone, struct mem_section *ms,
-		unsigned long map_offset, struct vmem_altmap *altmap);
+extern void sparse_remove_one_section(int nid, struct mem_section *ms,
+		unsigned long map_offset, struct vmem_altmap *altmap);
 extern struct page *sparse_decode_mem_map(unsigned long coded_mem_map,
 					  unsigned long pnum);
 extern bool allow_online_pfn_range(int nid, unsigned long pfn,
 		unsigned long nr_pages,
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 5874aceb81ac..6b98321aa52f 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -319,12 +319,10 @@ static unsigned long find_smallest_section_pfn(int nid, struct zone *zone,
 				     unsigned long start_pfn,
 				     unsigned long end_pfn)
 {
-	struct mem_section *ms;
-
 	for (; start_pfn < end_pfn; start_pfn += PAGES_PER_SECTION) {
-		ms = __pfn_to_section(start_pfn);
+		struct mem_section *ms = __pfn_to_section(start_pfn);
 
-		if (unlikely(!valid_section(ms)))
+		if (unlikely(!online_section(ms)))
 			continue;
 
 		if (unlikely(pfn_to_nid(start_pfn) != nid))
@@ -344,15 +342,14 @@ static unsigned long find_biggest_section_pfn(int nid, struct zone *zone,
 				    unsigned long start_pfn,
 				    unsigned long end_pfn)
 {
-	struct mem_section *ms;
 	unsigned long pfn;
 
 	/* pfn is the end pfn of a memory section. */
 	pfn = end_pfn - 1;
 	for (; pfn >= start_pfn; pfn -= PAGES_PER_SECTION) {
-		ms = __pfn_to_section(pfn);
+		struct mem_section *ms = __pfn_to_section(pfn);
 
-		if (unlikely(!valid_section(ms)))
+		if (unlikely(!online_section(ms)))
 			continue;
 
 		if (unlikely(pfn_to_nid(pfn) != nid))
@@ -414,7 +411,7 @@ static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
 	for (; pfn < zone_end_pfn; pfn += PAGES_PER_SECTION) {
 		ms = __pfn_to_section(pfn);
 
-		if (unlikely(!valid_section(ms)))
+		if (unlikely(!online_section(ms)))
 			continue;
 
 		if (page_zone(pfn_to_page(pfn)) != zone)
@@ -482,7 +479,7 @@ static void shrink_pgdat_span(struct pglist_data *pgdat,
 	for (; pfn < pgdat_end_pfn; pfn += PAGES_PER_SECTION) {
 		ms = __pfn_to_section(pfn);
 
-		if (unlikely(!valid_section(ms)))
+		if (unlikely(!online_section(ms)))
 			continue;
 
 		if (pfn_to_nid(pfn) != nid)
@@ -501,23 +498,31 @@ static void shrink_pgdat_span(struct pglist_data *pgdat,
 	pgdat->node_spanned_pages = 0;
 }
 
-static void __remove_zone(struct zone *zone, unsigned long start_pfn)
+static void shrink_zone_pgdat_pages(struct zone *zone, unsigned long start_pfn,
+				    unsigned long end_pfn,
+				    unsigned long offlined_pages)
 {
 	struct pglist_data *pgdat = zone->zone_pgdat;
 	int nr_pages = PAGES_PER_SECTION;
 	unsigned long flags;
+	unsigned long pfn;
 
-	pgdat_resize_lock(zone->zone_pgdat, &flags);
-	shrink_zone_span(zone, start_pfn, start_pfn + nr_pages);
-	shrink_pgdat_span(pgdat, start_pfn, start_pfn + nr_pages);
-	pgdat_resize_unlock(zone->zone_pgdat, &flags);
+	zone->present_pages -= offlined_pages;
+	clear_zone_contiguous(zone);
+
+	pgdat_resize_lock(pgdat, &flags);
+	pgdat->node_present_pages -= offlined_pages;
+	for (pfn = start_pfn; pfn < end_pfn; pfn += nr_pages) {
+		shrink_zone_span(zone, pfn, pfn + nr_pages);
+		shrink_pgdat_span(pgdat, pfn, pfn + nr_pages);
+	}
+	pgdat_resize_unlock(pgdat, &flags);
+
+	set_zone_contiguous(zone);
 }
 
-static int __remove_section(struct zone *zone, struct mem_section *ms,
-		unsigned long map_offset, struct vmem_altmap *altmap)
+static int __remove_section(int nid, struct mem_section *ms,
+		unsigned long map_offset, struct vmem_altmap *altmap)
 {
-	unsigned long start_pfn;
-	int scn_nr;
 	int ret = -EINVAL;
 
 	if (!valid_section(ms))
@@ -527,17 +532,13 @@ static int __remove_section(struct zone *zone, struct mem_section *ms,
 	if (ret)
 		return ret;
 
-	scn_nr = __section_nr(ms);
-	start_pfn = section_nr_to_pfn((unsigned long)scn_nr);
-	__remove_zone(zone, start_pfn);
-
-	sparse_remove_one_section(zone, ms, map_offset, altmap);
+	sparse_remove_one_section(nid, ms, map_offset, altmap);
 
 	return 0;
 }
 
 /**
  * __remove_pages() - remove sections of pages from a zone
- * @zone: zone from which pages need to be removed
+ * @nid: node id the pages belong to
  * @phys_start_pfn: starting pageframe (must be aligned to start of a section)
  * @nr_pages: number of pages to remove (must be multiple of section size)
  * @altmap: alternative device page map or %NULL if default memmap is used
@@ -547,35 +548,28 @@ static int __remove_section(struct zone *zone, struct mem_section *ms,
  * sure that pages are marked reserved and zones are adjust properly by
  * calling offline_pages().
  */
-int __remove_pages(struct zone *zone, unsigned long phys_start_pfn,
-		unsigned long nr_pages, struct vmem_altmap *altmap)
+int __remove_pages(int nid, unsigned long phys_start_pfn,
+		unsigned long nr_pages, struct vmem_altmap *altmap)
 {
 	unsigned long i;
 	unsigned long map_offset = 0;
 	int sections_to_remove, ret = 0;
+	resource_size_t start, size;
 
-	/* In the ZONE_DEVICE case device driver owns the memory region */
-	if (is_dev_zone(zone)) {
-		if (altmap)
-			map_offset = vmem_altmap_offset(altmap);
-	} else {
-		resource_size_t start, size;
+	start = phys_start_pfn << PAGE_SHIFT;
+	size = nr_pages * PAGE_SIZE;
 
-		start = phys_start_pfn << PAGE_SHIFT;
-		size = nr_pages * PAGE_SIZE;
+	if (altmap)
+		map_offset = vmem_altmap_offset(altmap);
 
-		ret = release_mem_region_adjustable(&iomem_resource, start,
-					size);
-		if (ret) {
-			resource_size_t endres = start + size - 1;
+	ret = release_mem_region_adjustable(&iomem_resource, start, size);
+	if (ret) {
+		resource_size_t endres = start + size - 1;
 
-			pr_warn("Unable to release resource <%pa-%pa> (%d)\n",
-					&start, &endres, ret);
-		}
+		pr_warn("Unable to release resource <%pa-%pa> (%d)\n",
+				&start, &endres, ret);
 	}
 
-	clear_zone_contiguous(zone);
-
 	/*
 	 * We can only remove entire sections
 	 */
@@ -586,15 +580,13 @@ int __remove_pages(struct zone *zone, unsigned long phys_start_pfn,
 	for (i = 0; i < sections_to_remove; i++) {
 		unsigned long pfn = phys_start_pfn + i*PAGES_PER_SECTION;
 
-		ret = __remove_section(zone, __pfn_to_section(pfn), map_offset,
-				altmap);
+		ret = __remove_section(nid, __pfn_to_section(pfn), map_offset,
+				altmap);
 		map_offset = 0;
 		if (ret)
 			break;
 	}
 
-	set_zone_contiguous(zone);
-
 	return ret;
 }
 #endif /* CONFIG_MEMORY_HOTREMOVE */
@@ -1566,7 +1558,6 @@ static int __ref __offline_pages(unsigned long start_pfn,
 	unsigned long pfn, nr_pages;
 	long offlined_pages;
 	int ret, node;
-	unsigned long flags;
 	unsigned long valid_start, valid_end;
 	struct zone *zone;
 	struct memory_notify arg;
@@ -1644,11 +1635,9 @@ static int __ref __offline_pages(unsigned long start_pfn,
 	undo_isolate_page_range(start_pfn, end_pfn, MIGRATE_MOVABLE);
 	/* removal success */
 	adjust_managed_page_count(pfn_to_page(start_pfn), -offlined_pages);
-	zone->present_pages -= offlined_pages;
 
-	pgdat_resize_lock(zone->zone_pgdat, &flags);
-	zone->zone_pgdat->node_present_pages -= offlined_pages;
-	pgdat_resize_unlock(zone->zone_pgdat, &flags);
+	/* Here we will shrink zone/node's spanned/present_pages */
+	shrink_zone_pgdat_pages(zone, valid_start, valid_end, offlined_pages);
 
 	init_per_zone_wmark_min();
 
@@ -1899,10 +1888,13 @@ int del_device_memory(int nid, unsigned long start, unsigned long size,
 	unsigned long nr_pages = size >> PAGE_SHIFT;
 	struct zone *zone = page_zone(pfn_to_page(pfn));
 
+	offline_mem_sections(start_pfn, start_pfn + nr_pages);
+	shrink_zone_pgdat_pages(zone, start_pfn, start_pfn + nr_pages, 0);
+
 	if (mapping)
 		ret = arch_remove_memory(nid, start, size, altmap);
 	else
-		ret = __remove_pages(zone, start_pfn, nr_pages, altmap);
+		ret = __remove_pages(nid, start_pfn, nr_pages, altmap);
 
 	return ret;
 }
@@ -1925,6 +1917,7 @@ int add_device_memory(int nid, unsigned long start, unsigned long size,
 	if (!ret) {
 		struct zone *zone = &NODE_DATA(nid)->node_zones[ZONE_DEVICE];
 
+		online_mem_sections(start_pfn, start_pfn + nr_pages);
 		move_pfn_range_to_zone(zone, start_pfn, nr_pages, altmap);
 	}
 
diff --git a/mm/sparse.c b/mm/sparse.c
index 33307fc05c4d..76974479a14b 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -765,12 +765,12 @@ static void free_section_usemap(struct page *memmap, unsigned long *usemap,
 	free_map_bootmem(memmap);
 }
 
-void sparse_remove_one_section(struct zone *zone, struct mem_section *ms,
-		unsigned long map_offset, struct vmem_altmap *altmap)
+void sparse_remove_one_section(int nid, struct mem_section *ms,
+		unsigned long map_offset, struct vmem_altmap *altmap)
 {
 	struct page *memmap = NULL;
 	unsigned long *usemap = NULL, flags;
-	struct pglist_data *pgdat = zone->zone_pgdat;
+	struct pglist_data *pgdat = NODE_DATA(nid);
 
 	pgdat_resize_lock(pgdat, &flags);
 	if (ms->section_mem_map) {
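For completeness, the two sides of the HMM/devm particularity described
in the changelog, shown next to each other (condensed from the
add_device_memory()/del_device_memory() hunks above; no new code):

	/* add_device_memory(): the range never goes through online_pages(),
	 * so its sections are onlined explicitly before moving the range
	 * to ZONE_DEVICE.
	 */
	online_mem_sections(start_pfn, start_pfn + nr_pages);
	move_pfn_range_to_zone(zone, start_pfn, nr_pages, altmap);

	/* del_device_memory(): the sections are offlined first, so that the
	 * shrink helpers (which now walk only online sections) stop
	 * accounting this range as part of the zone/node span.
	 */
	offline_mem_sections(start_pfn, start_pfn + nr_pages);
	shrink_zone_pgdat_pages(zone, start_pfn, start_pfn + nr_pages, 0);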