From patchwork Mon Aug 26 10:10:07 2019
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 11114317
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, David Hildenbrand, Andrew Morton, Oscar Salvador,
    Michal Hocko, Pavel Tatashin, Dan Williams, Wei Yang
Subject: [PATCH v2 1/6] mm/memory_hotplug: Exit early in __remove_pages() on BUGs
Date: Mon, 26 Aug 2019 12:10:07 +0200
Message-Id: <20190826101012.10575-2-david@redhat.com>
In-Reply-To: <20190826101012.10575-1-david@redhat.com>
References: <20190826101012.10575-1-david@redhat.com>
The error path should never happen in practice (unless bringing up a new
device driver, or on BUGs). However, it's clearer to not touch anything in
case we are going to return right away. Move the check/return.

Cc: Andrew Morton
Cc: Oscar Salvador
Cc: Michal Hocko
Cc: David Hildenbrand
Cc: Pavel Tatashin
Cc: Dan Williams
Cc: Wei Yang
Signed-off-by: David Hildenbrand
---
 mm/memory_hotplug.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 32a5386758ce..71779b7b14df 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -542,13 +542,13 @@ void __remove_pages(struct zone *zone, unsigned long pfn,
 	unsigned long map_offset = 0;
 	unsigned long nr, start_sec, end_sec;
 
+	if (check_pfn_span(pfn, nr_pages, "remove"))
+		return;
+
 	map_offset = vmem_altmap_offset(altmap);
 
 	clear_zone_contiguous(zone);
 
-	if (check_pfn_span(pfn, nr_pages, "remove"))
-		return;
-
 	start_sec = pfn_to_section_nr(pfn);
 	end_sec = pfn_to_section_nr(pfn + nr_pages - 1);
 	for (nr = start_sec; nr <= end_sec; nr++) {
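The moved guard is just a sanity check on the pfn range, and the point of
the patch is purely ordering: bail out before vmem_altmap_offset() and
clear_zone_contiguous() mutate anything. A minimal userspace sketch of
such a guard clause follows; note that min_align is an assumption here
(the real check_pfn_span() derives its alignment requirement from the
sparsemem configuration):

#include <stdio.h>

/* Simplified sketch of the alignment guard that __remove_pages() now
 * runs before touching any state: pfn and nr_pages must be aligned to
 * the minimum hot(un)plug granularity. min_align is a stand-in value. */
static int check_pfn_span(unsigned long pfn, unsigned long nr_pages,
                          unsigned long min_align)
{
        if (pfn % min_align || nr_pages % min_align) {
                fprintf(stderr, "removal range misaligned, bailing out early\n");
                return -1;      /* caller returns before altering zones */
        }
        return 0;
}

int main(void)
{
        printf("%d\n", check_pfn_span(0x8000, 0x8000, 0x8000)); /* 0: aligned */
        printf("%d\n", check_pfn_span(0x8001, 0x8000, 0x8000)); /* -1: bails */
        return 0;
}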
From patchwork Mon Aug 26 10:10:08 2019
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 11114319
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, David Hildenbrand, Andrew Morton, Michal Hocko,
    Vlastimil Babka, Oscar Salvador, Pavel Tatashin, Mel Gorman,
    Mike Rapoport, Dan Williams, Alexander Duyck
Subject: [PATCH v2 2/6] mm: Exit early in set_zone_contiguous() if already contiguous
Date: Mon, 26 Aug 2019 12:10:08 +0200
Message-Id: <20190826101012.10575-3-david@redhat.com>
In-Reply-To: <20190826101012.10575-1-david@redhat.com>
References: <20190826101012.10575-1-david@redhat.com>

No need to recompute in case the zone is already marked contiguous. We
will soon exploit this on the memory removal path, where we will only
clear zone->contiguous on zones that intersect with the memory to be
removed.
Cc: Andrew Morton
Cc: Michal Hocko
Cc: Vlastimil Babka
Cc: Oscar Salvador
Cc: Pavel Tatashin
Cc: Mel Gorman
Cc: Mike Rapoport
Cc: Dan Williams
Cc: Alexander Duyck
Signed-off-by: David Hildenbrand
---
 mm/page_alloc.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5b799e11fba3..995708e05cde 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1546,6 +1546,9 @@ void set_zone_contiguous(struct zone *zone)
 	unsigned long block_start_pfn = zone->zone_start_pfn;
 	unsigned long block_end_pfn;
 
+	if (zone->contiguous)
+		return;
+
 	block_end_pfn = ALIGN(block_start_pfn + 1, pageblock_nr_pages);
 	for (; block_start_pfn < zone_end_pfn(zone);
 			block_start_pfn = block_end_pfn,
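To see what the early return buys, here is a toy userspace model of the
memoization: the scan over the zone's span only runs when the cached flag
is not already set. The zone layout, the pageblock_valid() predicate and
the step size are all fabricated for illustration:

#include <stdbool.h>
#include <stdio.h>

struct zone {
        unsigned long start_pfn, end_pfn;
        bool contiguous;
};

static unsigned long scans;     /* count how often we actually rescan */

static bool pageblock_valid(unsigned long pfn)
{
        scans++;
        return true;            /* pretend the span has no holes */
}

static void set_zone_contiguous(struct zone *zone)
{
        if (zone->contiguous)
                return;         /* the early exit added by this patch */
        for (unsigned long pfn = zone->start_pfn; pfn < zone->end_pfn; pfn += 512)
                if (!pageblock_valid(pfn))
                        return; /* leave ->contiguous false */
        zone->contiguous = true;
}

int main(void)
{
        struct zone z = { .start_pfn = 0, .end_pfn = 1UL << 20, .contiguous = false };

        set_zone_contiguous(&z);        /* pays for a full scan */
        set_zone_contiguous(&z);        /* returns immediately */
        printf("contiguous=%d after %lu pageblock checks\n", z.contiguous, scans);
        return 0;
}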
From patchwork Mon Aug 26 10:10:09 2019
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 11114321
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, David Hildenbrand, Andrew Morton, Oscar Salvador,
    Michal Hocko, Pavel Tatashin, Dan Williams, Wei Yang
Subject: [PATCH v2 3/6] mm/memory_hotplug: Process all zones when removing memory
Date: Mon, 26 Aug 2019 12:10:09 +0200
Message-Id: <20190826101012.10575-4-david@redhat.com>
In-Reply-To: <20190826101012.10575-1-david@redhat.com>
References: <20190826101012.10575-1-david@redhat.com>

It is easier than I thought to trigger a kernel bug by removing memory that
was never onlined. With CONFIG_DEBUG_VM the memmap is initialized with
garbage, resulting in the detection of a broken zone when removing memory.
Without CONFIG_DEBUG_VM it is less likely - but we could still have garbage
in the memmap. :/

[   23.912993] BUG: unable to handle page fault for address: 000000000000353d
[   23.914219] #PF: supervisor write access in kernel mode
[   23.915199] #PF: error_code(0x0002) - not-present page
[   23.916160] PGD 0 P4D 0
[   23.916627] Oops: 0002 [#1] SMP PTI
[   23.917256] CPU: 1 PID: 7 Comm: kworker/u8:0 Not tainted 5.3.0-rc5-next-20190820+ #317
[   23.918900] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.12.1-0-ga5cab58e9a3f-prebuilt.qemu.4
[   23.921194] Workqueue: kacpi_hotplug acpi_hotplug_work_fn
[   23.922249] RIP: 0010:clear_zone_contiguous+0x5/0x10
[   23.923173] Code: 48 89 c6 48 89 c3 e8 2a fe ff ff 48 85 c0 75 cf 5b 5d c3 c6 85 fd 05 00 00 01 5b 5d c3 0f 1f 840
[   23.926876] RSP: 0018:ffffad2400043c98 EFLAGS: 00010246
[   23.927928] RAX: 0000000000000000 RBX: 0000000200000000 RCX: 0000000000000000
[   23.929458] RDX: 0000000000200000 RSI: 0000000000140000 RDI: 0000000000002f40
[   23.930899] RBP: 0000000140000000 R08: 0000000000000000 R09: 0000000000000001
[   23.932362] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000140000
[   23.933603] R13: 0000000000140000 R14: 0000000000002f40 R15: ffff9e3e7aff3680
[   23.934913] FS:  0000000000000000(0000) GS:ffff9e3e7bb00000(0000) knlGS:0000000000000000
[   23.936294] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[   23.937481] CR2: 000000000000353d CR3: 0000000058610000 CR4: 00000000000006e0
[   23.938687] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[   23.939889] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[   23.941168] Call Trace:
[   23.941580]  __remove_pages+0x4b/0x640
[   23.942303]  ? mark_held_locks+0x49/0x70
[   23.943149]  arch_remove_memory+0x63/0x8d
[   23.943921]  try_remove_memory+0xdb/0x130
[   23.944766]  ? walk_memory_blocks+0x7f/0x9e
[   23.945616]  __remove_memory+0xa/0x11
[   23.946274]  acpi_memory_device_remove+0x70/0x100
[   23.947308]  acpi_bus_trim+0x55/0x90
[   23.947914]  acpi_device_hotplug+0x227/0x3a0
[   23.948714]  acpi_hotplug_work_fn+0x1a/0x30
[   23.949433]  process_one_work+0x221/0x550
[   23.950190]  worker_thread+0x50/0x3b0
[   23.950993]  kthread+0x105/0x140
[   23.951644]  ? process_one_work+0x550/0x550
[   23.952508]  ? kthread_park+0x80/0x80
[   23.953367]  ret_from_fork+0x3a/0x50
[   23.954025] Modules linked in:
[   23.954613] CR2: 000000000000353d
[   23.955248] ---[ end trace 93d982b1fb3e1a69 ]---

But the problem is more extreme: When removing memory we could have
- Single memory blocks that fall into no zone (never onlined)
- Single memory blocks that fall into multiple zones (offlined+re-onlined)
- Multiple memory blocks that fall into different zones

Right now, the zones don't get updated properly in these cases. So let's
simply process all zones for now, until we can properly handle this via the
reverse of move_pfn_range_to_zone() (which would then be called something
like remove_pfn_range_from_zone()), for example, when offlining memory or
before removing ZONE_DEVICE memory.

To speed things up, only mark applicable zones non-contiguous (and
therefore reduce the zones to recompute) and skip non-intersecting zones
when trying to resize. shrink_zone_span() and shrink_pgdat_span() seem to
be able to cope just fine with pfn ranges they don't actually contain (but
still intersect with).

Don't check for zone_intersects() when triggering set_zone_contiguous() -
we might have resized the zone and the check might no longer hold. For
now, we have to try to recompute any zone (which will be skipped in case
the zone is already contiguous).

Note1: Detecting which memory is still part of a zone is not easy before
removing memory, as the detection relies almost completely on pfn_valid()
right now. pfn_online() cannot be used as ZONE_DEVICE memory is never
online. pfn_present() cannot be used as all memory is present once it was
added (but not onlined). We need to rethink/refactor this properly.

Note2: We are safe to call zone_intersects() without locking (as already
done by onlining code in default_zone_for_pfn()), as we are protected by
the memory hotplug lock - just like zone->contiguous.
Cc: Andrew Morton
Cc: Oscar Salvador
Cc: Michal Hocko
Cc: Pavel Tatashin
Cc: Dan Williams
Cc: Wei Yang
Signed-off-by: David Hildenbrand
---
 mm/memory_hotplug.c | 25 ++++++++++++++++++-------
 1 file changed, 18 insertions(+), 7 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 71779b7b14df..27f0457b7512 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -505,22 +505,28 @@ static void __remove_zone(struct zone *zone, unsigned long start_pfn,
 	struct pglist_data *pgdat = zone->zone_pgdat;
 	unsigned long flags;
 
+	if (!zone_intersects(zone, start_pfn, nr_pages))
+		return;
+
 	pgdat_resize_lock(zone->zone_pgdat, &flags);
 	shrink_zone_span(zone, start_pfn, start_pfn + nr_pages);
 	shrink_pgdat_span(pgdat, start_pfn, start_pfn + nr_pages);
 	pgdat_resize_unlock(zone->zone_pgdat, &flags);
 }
 
-static void __remove_section(struct zone *zone, unsigned long pfn,
-		unsigned long nr_pages, unsigned long map_offset,
-		struct vmem_altmap *altmap)
+static void __remove_section(unsigned long pfn, unsigned long nr_pages,
+			     unsigned long map_offset,
+			     struct vmem_altmap *altmap)
 {
 	struct mem_section *ms = __nr_to_section(pfn_to_section_nr(pfn));
+	struct zone *zone;
 
 	if (WARN_ON_ONCE(!valid_section(ms)))
 		return;
 
-	__remove_zone(zone, pfn, nr_pages);
+	/* TODO: move zone handling out of memory removal path */
+	for_each_zone(zone)
+		__remove_zone(zone, pfn, nr_pages);
 	sparse_remove_section(ms, pfn, nr_pages, map_offset, altmap);
 }
 
@@ -547,7 +553,10 @@ void __remove_pages(struct zone *zone, unsigned long pfn,
 
 	map_offset = vmem_altmap_offset(altmap);
 
-	clear_zone_contiguous(zone);
+	/* TODO: move zone handling out of memory removal path */
+	for_each_zone(zone)
+		if (zone_intersects(zone, pfn, nr_pages))
+			clear_zone_contiguous(zone);
 
 	start_sec = pfn_to_section_nr(pfn);
 	end_sec = pfn_to_section_nr(pfn + nr_pages - 1);
@@ -557,13 +566,15 @@ void __remove_pages(struct zone *zone, unsigned long pfn,
 		cond_resched();
 		pfns = min(nr_pages, PAGES_PER_SECTION -
 				(pfn & ~PAGE_SECTION_MASK));
-		__remove_section(zone, pfn, pfns, map_offset, altmap);
+		__remove_section(pfn, pfns, map_offset, altmap);
 		pfn += pfns;
 		nr_pages -= pfns;
 		map_offset = 0;
 	}
 
-	set_zone_contiguous(zone);
+	/* TODO: move zone handling out of memory removal path */
+	for_each_zone(zone)
+		set_zone_contiguous(zone);
 }
 
 int set_online_page_callback(online_page_callback_t callback)
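The new guard is plain interval overlap. A self-contained restatement of
the zone_intersects() test (toy zone data and simplified types, not the
kernel's structures) shows the two ways a pfn range can be disjoint from a
zone's span:

#include <stdbool.h>
#include <stdio.h>

struct zone { unsigned long zone_start_pfn, spanned_pages; };

static unsigned long zone_end_pfn(const struct zone *zone)
{
        return zone->zone_start_pfn + zone->spanned_pages;
}

/* Does [start_pfn, start_pfn + nr_pages) overlap the zone's span? */
static bool zone_intersects(const struct zone *zone,
                            unsigned long start_pfn, unsigned long nr_pages)
{
        if (zone->spanned_pages == 0)
                return false;           /* empty zone */
        if (start_pfn >= zone_end_pfn(zone) ||
            start_pfn + nr_pages <= zone->zone_start_pfn)
                return false;           /* range entirely above or below */
        return true;
}

int main(void)
{
        struct zone normal = { .zone_start_pfn = 0x100000, .spanned_pages = 0x80000 };

        printf("%d\n", zone_intersects(&normal, 0x140000, 0x8000)); /* 1: inside */
        printf("%d\n", zone_intersects(&normal, 0x200000, 0x8000)); /* 0: beyond */
        return 0;
}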
From patchwork Mon Aug 26 10:10:10 2019
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 11114323
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, David Hildenbrand, Andrew Morton, Oscar Salvador,
    Michal Hocko, Pavel Tatashin, Dan Williams, Wei Yang
Subject: [PATCH v2 4/6] mm/memory_hotplug: Cleanup __remove_pages()
Date: Mon, 26 Aug 2019 12:10:10 +0200
Message-Id: <20190826101012.10575-5-david@redhat.com>
In-Reply-To: <20190826101012.10575-1-david@redhat.com>
References: <20190826101012.10575-1-david@redhat.com>

Let's drop the basically unused section stuff and simplify. Also, let's
use a shorter variant to calculate the number of pages to the next section
boundary.
Cc: Andrew Morton
Cc: Oscar Salvador
Cc: Michal Hocko
Cc: Pavel Tatashin
Cc: Dan Williams
Cc: Wei Yang
Signed-off-by: David Hildenbrand
---
 mm/memory_hotplug.c | 17 ++++++-----------
 1 file changed, 6 insertions(+), 11 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 27f0457b7512..e88c96cf9d77 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -545,8 +545,9 @@ static void __remove_section(unsigned long pfn, unsigned long nr_pages,
 void __remove_pages(struct zone *zone, unsigned long pfn,
 		    unsigned long nr_pages, struct vmem_altmap *altmap)
 {
+	const unsigned long end_pfn = pfn + nr_pages;
+	unsigned long cur_nr_pages;
 	unsigned long map_offset = 0;
-	unsigned long nr, start_sec, end_sec;
 
 	if (check_pfn_span(pfn, nr_pages, "remove"))
 		return;
@@ -558,17 +559,11 @@ void __remove_pages(struct zone *zone, unsigned long pfn,
 		if (zone_intersects(zone, pfn, nr_pages))
 			clear_zone_contiguous(zone);
 
-	start_sec = pfn_to_section_nr(pfn);
-	end_sec = pfn_to_section_nr(pfn + nr_pages - 1);
-	for (nr = start_sec; nr <= end_sec; nr++) {
-		unsigned long pfns;
-
+	for (; pfn < end_pfn; pfn += cur_nr_pages) {
 		cond_resched();
-		pfns = min(nr_pages, PAGES_PER_SECTION -
-				(pfn & ~PAGE_SECTION_MASK));
-		__remove_section(pfn, pfns, map_offset, altmap);
-		pfn += pfns;
-		nr_pages -= pfns;
+		/* Select all remaining pages up to the next section boundary */
+		cur_nr_pages = min(end_pfn - pfn, -(pfn | PAGE_SECTION_MASK));
+		__remove_section(pfn, cur_nr_pages, map_offset, altmap);
 		map_offset = 0;
 	}
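The shorter variant works because PAGES_PER_SECTION is a power of two:
OR-ing PAGE_SECTION_MASK into pfn keeps only the offset into the current
section, and negating in unsigned arithmetic yields the distance to the
next section boundary. A quick userspace check, assuming PAGES_PER_SECTION
= 1UL << 15 (x86-64 with 4 KiB pages and 128 MiB sections), confirms the
old and new expressions agree:

#include <assert.h>
#include <stdio.h>

#define PAGES_PER_SECTION   (1UL << 15)  /* assumption: x86-64, 4 KiB pages */
#define PAGE_SECTION_MASK   (~(PAGES_PER_SECTION - 1))

int main(void)
{
        unsigned long pfns[] = { 0, 1, 32767, 32768, 50000, 99999 };

        for (unsigned int i = 0; i < sizeof(pfns) / sizeof(pfns[0]); i++) {
                unsigned long pfn = pfns[i];
                /* old expression from __remove_pages() */
                unsigned long a = PAGES_PER_SECTION - (pfn & ~PAGE_SECTION_MASK);
                /* new, shorter expression */
                unsigned long b = -(pfn | PAGE_SECTION_MASK);

                assert(a == b);
                printf("pfn=%lu -> %lu pages to next section boundary\n", pfn, b);
        }
        return 0;
}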
From patchwork Mon Aug 26 10:10:11 2019
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 11114325
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, David Hildenbrand, Andrew Morton, Vlastimil Babka,
    Dan Williams, Mel Gorman, Michal Hocko, Wei Yang, Johannes Weiner,
    Arun KS
Subject: [PATCH v2 5/6] mm: Introduce for_each_zone_nid()
Date: Mon, 26 Aug 2019 12:10:11 +0200
Message-Id: <20190826101012.10575-6-david@redhat.com>
In-Reply-To: <20190826101012.10575-1-david@redhat.com>
References: <20190826101012.10575-1-david@redhat.com>

Allow iterating over all zones belonging to a nid.
Cc: Andrew Morton
Cc: Vlastimil Babka
Cc: Dan Williams
Cc: Mel Gorman
Cc: Michal Hocko
Cc: Wei Yang
Cc: Johannes Weiner
Cc: Arun KS
Signed-off-by: David Hildenbrand
---
 include/linux/mmzone.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 8b5f758942a2..71f2b9b55069 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1004,6 +1004,11 @@ extern struct zone *next_zone(struct zone *zone);
 			; /* do nothing */		\
 		else
 
+#define for_each_zone_nid(zone, nid)				\
+	for (zone = (NODE_DATA(nid))->node_zones;		\
+	     zone && zone_to_nid(zone) == nid;			\
+	     zone = next_zone(zone))
+
 static inline struct zone *zonelist_zone(struct zoneref *zoneref)
 {
 	return zoneref->zone;
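The macro terminates as soon as next_zone() crosses into another node's
zones, since a pgdat's zones are laid out consecutively and next_zone()
walks them in node order. A toy userspace model (fabricated zone table and
a stand-in for NODE_DATA(), not kernel code) demonstrates the iteration
and its stop condition:

#include <stdio.h>

#define MAX_NR_ZONES 3
#define NR_NODES     2

struct zone { int nid; const char *name; };

/* All zones of all nodes, consecutive per node, as the kernel lays them out. */
static struct zone all_zones[NR_NODES * MAX_NR_ZONES] = {
        { 0, "DMA32" }, { 0, "Normal" }, { 0, "Movable" },
        { 1, "DMA32" }, { 1, "Normal" }, { 1, "Movable" },
};

static struct zone *node_zones(int nid) { return &all_zones[nid * MAX_NR_ZONES]; }

static struct zone *next_zone(struct zone *zone)
{
        struct zone *last = &all_zones[NR_NODES * MAX_NR_ZONES - 1];
        return zone < last ? zone + 1 : NULL;
}

static int zone_to_nid(struct zone *zone) { return zone->nid; }

/* The proposed macro, verbatim except for the toy NODE_DATA() stand-in. */
#define for_each_zone_nid(zone, nid)                            \
        for (zone = node_zones(nid);                            \
             zone && zone_to_nid(zone) == nid;                  \
             zone = next_zone(zone))

int main(void)
{
        struct zone *zone;

        for_each_zone_nid(zone, 1)      /* visits exactly node 1's three zones */
                printf("node 1 zone: %s\n", zone->name);
        return 0;
}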
From patchwork Mon Aug 26 10:10:12 2019
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 11114327
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, David Hildenbrand, Catalin Marinas, Will Deacon,
    Tony Luck, Fenghua Yu, Benjamin Herrenschmidt, Paul Mackerras,
    Michael Ellerman, Heiko Carstens, Vasily Gorbik, Christian Borntraeger,
    Yoshinori Sato, Rich Felker, Dave Hansen, Andy Lutomirski,
    Peter Zijlstra, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
    "H. Peter Anvin", x86@kernel.org, Andrew Morton, Mark Rutland,
    Steve Capper, Mike Rapoport, Anshuman Khandual, Yu Zhao, Jun Yao,
    Robin Murphy, Michal Hocko, Oscar Salvador, "Matthew Wilcox (Oracle)",
    Christophe Leroy, "Aneesh Kumar K.V", Pavel Tatashin, Gerald Schaefer,
    Halil Pasic, Tom Lendacky, Greg Kroah-Hartman, Masahiro Yamada,
    Dan Williams, Wei Yang, Qian Cai, Jason Gunthorpe, Logan Gunthorpe,
    Ira Weiny, linux-arm-kernel@lists.infradead.org,
    linux-ia64@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    linux-s390@vger.kernel.org, linux-sh@vger.kernel.org
Subject: [PATCH v2 6/6] mm/memory_hotplug: Pass nid instead of zone to __remove_pages()
Date: Mon, 26 Aug 2019 12:10:12 +0200
Message-Id: <20190826101012.10575-7-david@redhat.com>
In-Reply-To: <20190826101012.10575-1-david@redhat.com>
References: <20190826101012.10575-1-david@redhat.com>

The zone parameter is no longer in use. Replace it with the nid, which we
can now use to limit the number of zones we have to process (via
for_each_zone_nid()). The function signature of __remove_pages() now looks
much more similar to the one of __add_pages().
Peter Anvin" Cc: x86@kernel.org Cc: Andrew Morton Cc: Mark Rutland Cc: Steve Capper Cc: Mike Rapoport Cc: Anshuman Khandual Cc: Yu Zhao Cc: Jun Yao Cc: Robin Murphy Cc: Michal Hocko Cc: Oscar Salvador Cc: "Matthew Wilcox (Oracle)" Cc: Christophe Leroy Cc: "Aneesh Kumar K.V" Cc: Pavel Tatashin Cc: Gerald Schaefer Cc: Halil Pasic Cc: Tom Lendacky Cc: Greg Kroah-Hartman Cc: Masahiro Yamada Cc: Dan Williams Cc: Wei Yang Cc: Qian Cai Cc: Jason Gunthorpe Cc: Logan Gunthorpe Cc: Ira Weiny Cc: linux-arm-kernel@lists.infradead.org Cc: linux-ia64@vger.kernel.org Cc: linuxppc-dev@lists.ozlabs.org Cc: linux-s390@vger.kernel.org Cc: linux-sh@vger.kernel.org Signed-off-by: David Hildenbrand Acked-by: Robin Murphy --- arch/arm64/mm/mmu.c | 4 +--- arch/ia64/mm/init.c | 4 +--- arch/powerpc/mm/mem.c | 3 +-- arch/s390/mm/init.c | 4 +--- arch/sh/mm/init.c | 4 +--- arch/x86/mm/init_32.c | 4 +--- arch/x86/mm/init_64.c | 4 +--- include/linux/memory_hotplug.h | 2 +- mm/memory_hotplug.c | 17 +++++++++-------- mm/memremap.c | 3 +-- 10 files changed, 18 insertions(+), 31 deletions(-) diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index e67bab4d613e..9a2d388314f3 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -1080,7 +1080,6 @@ void arch_remove_memory(int nid, u64 start, u64 size, { unsigned long start_pfn = start >> PAGE_SHIFT; unsigned long nr_pages = size >> PAGE_SHIFT; - struct zone *zone; /* * FIXME: Cleanup page tables (also in arch_add_memory() in case @@ -1089,7 +1088,6 @@ void arch_remove_memory(int nid, u64 start, u64 size, * unplug. ARCH_ENABLE_MEMORY_HOTREMOVE must not be * unlocked yet. */ - zone = page_zone(pfn_to_page(start_pfn)); - __remove_pages(zone, start_pfn, nr_pages, altmap); + __remove_pages(nid, start_pfn, nr_pages, altmap); } #endif diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c index bf9df2625bc8..ae6a3e718aa0 100644 --- a/arch/ia64/mm/init.c +++ b/arch/ia64/mm/init.c @@ -689,9 +689,7 @@ void arch_remove_memory(int nid, u64 start, u64 size, { unsigned long start_pfn = start >> PAGE_SHIFT; unsigned long nr_pages = size >> PAGE_SHIFT; - struct zone *zone; - zone = page_zone(pfn_to_page(start_pfn)); - __remove_pages(zone, start_pfn, nr_pages, altmap); + __remove_pages(nid, start_pfn, nr_pages, altmap); } #endif diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c index 9191a66b3bc5..af21e13529ce 100644 --- a/arch/powerpc/mm/mem.c +++ b/arch/powerpc/mm/mem.c @@ -130,10 +130,9 @@ void __ref arch_remove_memory(int nid, u64 start, u64 size, { unsigned long start_pfn = start >> PAGE_SHIFT; unsigned long nr_pages = size >> PAGE_SHIFT; - struct page *page = pfn_to_page(start_pfn) + vmem_altmap_offset(altmap); int ret; - __remove_pages(page_zone(page), start_pfn, nr_pages, altmap); + __remove_pages(nid, start_pfn, nr_pages, altmap); /* Remove htab bolted mappings for this section of memory */ start = (unsigned long)__va(start); diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c index 20340a03ad90..2a7373ed6ded 100644 --- a/arch/s390/mm/init.c +++ b/arch/s390/mm/init.c @@ -296,10 +296,8 @@ void arch_remove_memory(int nid, u64 start, u64 size, { unsigned long start_pfn = start >> PAGE_SHIFT; unsigned long nr_pages = size >> PAGE_SHIFT; - struct zone *zone; - zone = page_zone(pfn_to_page(start_pfn)); - __remove_pages(zone, start_pfn, nr_pages, altmap); + __remove_pages(nid, start_pfn, nr_pages, altmap); vmem_remove_mapping(start, size); } #endif /* CONFIG_MEMORY_HOTPLUG */ diff --git a/arch/sh/mm/init.c b/arch/sh/mm/init.c index dfdbaa50946e..32441b59297d 100644 
--- a/arch/sh/mm/init.c
+++ b/arch/sh/mm/init.c
@@ -434,9 +434,7 @@ void arch_remove_memory(int nid, u64 start, u64 size,
 {
 	unsigned long start_pfn = PFN_DOWN(start);
 	unsigned long nr_pages = size >> PAGE_SHIFT;
-	struct zone *zone;
 
-	zone = page_zone(pfn_to_page(start_pfn));
-	__remove_pages(zone, start_pfn, nr_pages, altmap);
+	__remove_pages(nid, start_pfn, nr_pages, altmap);
 }
 #endif /* CONFIG_MEMORY_HOTPLUG */
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index 4068abb9427f..2760e4bfbc56 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -865,10 +865,8 @@ void arch_remove_memory(int nid, u64 start, u64 size,
 {
 	unsigned long start_pfn = start >> PAGE_SHIFT;
 	unsigned long nr_pages = size >> PAGE_SHIFT;
-	struct zone *zone;
 
-	zone = page_zone(pfn_to_page(start_pfn));
-	__remove_pages(zone, start_pfn, nr_pages, altmap);
+	__remove_pages(nid, start_pfn, nr_pages, altmap);
 }
 #endif
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index a6b5c653727b..99d92297f1cf 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1212,10 +1212,8 @@ void __ref arch_remove_memory(int nid, u64 start, u64 size,
 {
 	unsigned long start_pfn = start >> PAGE_SHIFT;
 	unsigned long nr_pages = size >> PAGE_SHIFT;
-	struct page *page = pfn_to_page(start_pfn) + vmem_altmap_offset(altmap);
-	struct zone *zone = page_zone(page);
 
-	__remove_pages(zone, start_pfn, nr_pages, altmap);
+	__remove_pages(nid, start_pfn, nr_pages, altmap);
 	kernel_physical_mapping_remove(start, start + size);
 }
 #endif /* CONFIG_MEMORY_HOTPLUG */
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index f46ea71b4ffd..c5b38e7dc8aa 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -125,7 +125,7 @@ static inline bool movable_node_is_enabled(void)
 
 extern void arch_remove_memory(int nid, u64 start, u64 size,
 			       struct vmem_altmap *altmap);
-extern void __remove_pages(struct zone *zone, unsigned long start_pfn,
+extern void __remove_pages(int nid, unsigned long start_pfn,
 			   unsigned long nr_pages, struct vmem_altmap *altmap);
 
 /* reasonably generic interface to expand the physical pages */
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index e88c96cf9d77..49ca3364eb70 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -514,7 +514,7 @@ static void __remove_zone(struct zone *zone, unsigned long start_pfn,
 	pgdat_resize_unlock(zone->zone_pgdat, &flags);
 }
 
-static void __remove_section(unsigned long pfn, unsigned long nr_pages,
+static void __remove_section(int nid, unsigned long pfn, unsigned long nr_pages,
 			     unsigned long map_offset,
 			     struct vmem_altmap *altmap)
 {
@@ -525,14 +525,14 @@ static void __remove_section(unsigned long pfn, unsigned long nr_pages,
 		return;
 
 	/* TODO: move zone handling out of memory removal path */
-	for_each_zone(zone)
+	for_each_zone_nid(zone, nid)
 		__remove_zone(zone, pfn, nr_pages);
 	sparse_remove_section(ms, pfn, nr_pages, map_offset, altmap);
 }
 
 /**
  * __remove_pages() - remove sections of pages from a zone
- * @zone: zone from which pages need to be removed
+ * @nid: the nid all pages were added to
  * @pfn: starting pageframe (must be aligned to start of a section)
  * @nr_pages: number of pages to remove (must be multiple of section size)
  * @altmap: alternative device page map or %NULL if default memmap is used
@@ -542,12 +542,13 @@ static void __remove_section(unsigned long pfn, unsigned long nr_pages,
  * sure that pages are marked reserved and zones are adjust properly by
  * calling offline_pages().
  */
-void __remove_pages(struct zone *zone, unsigned long pfn,
-		    unsigned long nr_pages, struct vmem_altmap *altmap)
+void __remove_pages(int nid, unsigned long pfn, unsigned long nr_pages,
+		    struct vmem_altmap *altmap)
 {
 	const unsigned long end_pfn = pfn + nr_pages;
 	unsigned long cur_nr_pages;
 	unsigned long map_offset = 0;
+	struct zone *zone;
 
 	if (check_pfn_span(pfn, nr_pages, "remove"))
 		return;
@@ -555,7 +556,7 @@ void __remove_pages(struct zone *zone, unsigned long pfn,
 	map_offset = vmem_altmap_offset(altmap);
 
 	/* TODO: move zone handling out of memory removal path */
-	for_each_zone(zone)
+	for_each_zone_nid(zone, nid)
 		if (zone_intersects(zone, pfn, nr_pages))
 			clear_zone_contiguous(zone);
 
@@ -563,12 +564,12 @@ void __remove_pages(struct zone *zone, unsigned long pfn,
 		cond_resched();
 		/* Select all remaining pages up to the next section boundary */
 		cur_nr_pages = min(end_pfn - pfn, -(pfn | PAGE_SECTION_MASK));
-		__remove_section(pfn, cur_nr_pages, map_offset, altmap);
+		__remove_section(nid, pfn, cur_nr_pages, map_offset, altmap);
 		map_offset = 0;
 	}
 
 	/* TODO: move zone handling out of memory removal path */
-	for_each_zone(zone)
+	for_each_zone_nid(zone, nid)
 		set_zone_contiguous(zone);
 }
 
diff --git a/mm/memremap.c b/mm/memremap.c
index 8a394552b5bd..292ef4c6b447 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -138,8 +138,7 @@ static void devm_memremap_pages_release(void *data)
 	mem_hotplug_begin();
 	if (pgmap->type == MEMORY_DEVICE_PRIVATE) {
 		pfn = PHYS_PFN(res->start);
-		__remove_pages(page_zone(pfn_to_page(pfn)), pfn,
-			       PHYS_PFN(resource_size(res)), NULL);
+		__remove_pages(nid, pfn, PHYS_PFN(resource_size(res)), NULL);
 	} else {
 		arch_remove_memory(nid, res->start, resource_size(res),
 				   pgmap_altmap(pgmap));
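For callers the change is purely mechanical: every arch_remove_memory()
already has the nid in hand, so the page_zone(pfn_to_page(...)) detour
disappears. A compilable sketch of the caller-side shape (all types,
bodies and constants here are stand-ins, not the kernel implementation):

#include <stdio.h>

struct vmem_altmap;     /* opaque stand-in */

/* Stand-in for the new interface: takes a nid, no zone lookup required. */
static void __remove_pages(int nid, unsigned long start_pfn,
                           unsigned long nr_pages, struct vmem_altmap *altmap)
{
        printf("removing %lu pages at pfn %#lx from node %d\n",
               nr_pages, start_pfn, nid);
}

#define PAGE_SHIFT 12   /* assumption: 4 KiB pages */

static void arch_remove_memory(int nid, unsigned long long start,
                               unsigned long long size, struct vmem_altmap *altmap)
{
        unsigned long start_pfn = start >> PAGE_SHIFT;
        unsigned long nr_pages = size >> PAGE_SHIFT;

        /* before: zone = page_zone(pfn_to_page(start_pfn));
         *         __remove_pages(zone, start_pfn, nr_pages, altmap); */
        __remove_pages(nid, start_pfn, nr_pages, altmap);
}

int main(void)
{
        /* remove 128 MiB at 4 GiB from node 0 */
        arch_remove_memory(0, 0x100000000ULL, 0x8000000ULL, NULL);
        return 0;
}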