From patchwork Tue Oct 1 14:40:02 2019
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 11168941
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
    linux-ia64@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
    "Aneesh Kumar K.V", Dan Williams, Andrew Morton, Jason Gunthorpe,
    Logan Gunthorpe, Ira Weiny, Pankaj Gupta, David Hildenbrand
Subject: [PATCH v5 01/10] mm/memunmap: Use the correct start and end pfn when removing pages from zone
Date: Tue, 1 Oct 2019 16:40:02 +0200
Message-Id: <20191001144011.3801-2-david@redhat.com>
In-Reply-To: <20191001144011.3801-1-david@redhat.com>
References: <20191001144011.3801-1-david@redhat.com>

From: "Aneesh Kumar K.V"

With an altmap, not all of the resource's pfns are initialized: the
altmap reserve space is skipped while initializing pfns. Hence, when
removing pfns from a zone, skip the pfns that were never initialized.

Update memunmap_pages() to calculate the start and end pfn based on the
altmap values. This fixes a kernel crash observed when destroying a
namespace:

[   81.356173] kernel BUG at include/linux/mm.h:1107!
cpu 0x1: Vector: 700 (Program Check) at [c000000274087890]
    pc: c0000000004b9728: memunmap_pages+0x238/0x340
    lr: c0000000004b9724: memunmap_pages+0x234/0x340
...
pid   = 3669, comm = ndctl
kernel BUG at include/linux/mm.h:1107!
[c000000274087ba0] c0000000009e3500 devm_action_release+0x30/0x50
[c000000274087bc0] c0000000009e4758 release_nodes+0x268/0x2d0
[c000000274087c30] c0000000009dd144 device_release_driver_internal+0x174/0x240
[c000000274087c70] c0000000009d9dfc unbind_store+0x13c/0x190
[c000000274087cb0] c0000000009d8a24 drv_attr_store+0x44/0x60
[c000000274087cd0] c0000000005a7470 sysfs_kf_write+0x70/0xa0
[c000000274087d10] c0000000005a5cac kernfs_fop_write+0x1ac/0x290
[c000000274087d60] c0000000004be45c __vfs_write+0x3c/0x70
[c000000274087d80] c0000000004c26e4 vfs_write+0xe4/0x200
[c000000274087dd0] c0000000004c2a6c ksys_write+0x7c/0x140
[c000000274087e20] c00000000000bbd0 system_call+0x5c/0x68

Cc: Dan Williams
Cc: Andrew Morton
Cc: Jason Gunthorpe
Cc: Logan Gunthorpe
Cc: Ira Weiny
Reviewed-by: Pankaj Gupta
Signed-off-by: Aneesh Kumar K.V
[ move all pfn-related declarations into a single line ]
Signed-off-by: David Hildenbrand
---
 mm/memremap.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/mm/memremap.c b/mm/memremap.c
index 557e53c6fb46..026788b2ac69 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -123,7 +123,7 @@ static void dev_pagemap_cleanup(struct dev_pagemap *pgmap)
 void memunmap_pages(struct dev_pagemap *pgmap)
 {
 	struct resource *res = &pgmap->res;
-	unsigned long pfn;
+	unsigned long pfn, nr_pages, start_pfn, end_pfn;
 	int nid;
 
 	dev_pagemap_kill(pgmap);
@@ -131,14 +131,17 @@ void memunmap_pages(struct dev_pagemap *pgmap)
 		put_page(pfn_to_page(pfn));
 	dev_pagemap_cleanup(pgmap);
 
+	start_pfn = pfn_first(pgmap);
+	end_pfn = pfn_end(pgmap);
+	nr_pages = end_pfn - start_pfn;
+
 	/* pages are dead and unused, undo the arch mapping */
-	nid = page_to_nid(pfn_to_page(PHYS_PFN(res->start)));
+	nid = page_to_nid(pfn_to_page(start_pfn));
 
 	mem_hotplug_begin();
 	if (pgmap->type == MEMORY_DEVICE_PRIVATE) {
-		pfn = PHYS_PFN(res->start);
-		__remove_pages(page_zone(pfn_to_page(pfn)), pfn,
-				PHYS_PFN(resource_size(res)), NULL);
+		__remove_pages(page_zone(pfn_to_page(start_pfn)), start_pfn,
+			       nr_pages, NULL);
 	} else {
 		arch_remove_memory(nid, res->start, resource_size(res),
 				pgmap_altmap(pgmap));
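For context, pfn_first() and pfn_end() are small helpers in mm/memremap.c;
a simplified sketch of what they compute (the exact in-tree bodies may
differ slightly):

/*
 * Sketch: with an altmap, the first vmem_altmap_offset() pfns of the
 * resource back the memmap itself and are never initialized, so the
 * usable range only starts after them.
 */
static unsigned long pfn_first(struct dev_pagemap *pgmap)
{
	return PHYS_PFN(pgmap->res.start) +
	       vmem_altmap_offset(pgmap_altmap(pgmap));
}

static unsigned long pfn_end(struct dev_pagemap *pgmap)
{
	const struct resource *res = &pgmap->res;

	return (res->start + resource_size(res)) >> PAGE_SHIFT;
}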
From patchwork Tue Oct 1 14:40:03 2019
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 11168959
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
    linux-ia64@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
    "Aneesh Kumar K.V", Andrew Morton, Michal Hocko, Vlastimil Babka,
    Oscar Salvador, Mel Gorman, Mike Rapoport, Dan Williams,
    Alexander Duyck, Pavel Tatashin, Alexander Potapenko, Pankaj Gupta,
    David Hildenbrand
Subject: [PATCH v5 02/10] mm/memmap_init: Update variable name in memmap_init_zone_device
Date: Tue, 1 Oct 2019 16:40:03 +0200
Message-Id: <20191001144011.3801-3-david@redhat.com>
In-Reply-To: <20191001144011.3801-1-david@redhat.com>
References: <20191001144011.3801-1-david@redhat.com>

From: "Aneesh Kumar K.V"

The third argument is actually the number of pages. Change the variable
name from "size" to "nr_pages" to indicate this better.

No functional change in this patch.

Cc: Andrew Morton
Cc: Michal Hocko
Cc: Vlastimil Babka
Cc: Oscar Salvador
Cc: Mel Gorman
Cc: Mike Rapoport
Cc: Dan Williams
Cc: Alexander Duyck
Cc: Pavel Tatashin
Cc: Alexander Potapenko
Reviewed-by: Pankaj Gupta
Reviewed-by: David Hildenbrand
Signed-off-by: Aneesh Kumar K.V
Signed-off-by: David Hildenbrand
---
 mm/page_alloc.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 15c2050c629b..b0b2d5464000 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5936,10 +5936,10 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
 #ifdef CONFIG_ZONE_DEVICE
 void __ref memmap_init_zone_device(struct zone *zone,
 				   unsigned long start_pfn,
-				   unsigned long size,
+				   unsigned long nr_pages,
 				   struct dev_pagemap *pgmap)
 {
-	unsigned long pfn, end_pfn = start_pfn + size;
+	unsigned long pfn, end_pfn = start_pfn + nr_pages;
 	struct pglist_data *pgdat = zone->zone_pgdat;
 	struct vmem_altmap *altmap = pgmap_altmap(pgmap);
 	unsigned long zone_idx = zone_idx(zone);
@@ -5956,7 +5956,7 @@ void __ref memmap_init_zone_device(struct zone *zone,
 	 */
 	if (altmap) {
 		start_pfn = altmap->base_pfn + vmem_altmap_offset(altmap);
-		size = end_pfn - start_pfn;
+		nr_pages = end_pfn - start_pfn;
 	}
 
 	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
@@ -6003,7 +6003,7 @@ void __ref memmap_init_zone_device(struct zone *zone,
 	}
 
 	pr_info("%s initialised %lu pages in %ums\n", __func__,
-		size, jiffies_to_msecs(jiffies - start));
+		nr_pages, jiffies_to_msecs(jiffies - start));
 }
 #endif
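The altmap handling above is plain offset arithmetic; here is a
self-contained illustration of the same adjustment (the struct fields
mirror struct vmem_altmap, where vmem_altmap_offset() is reserve + free,
but all numbers are invented for the demo):

#include <assert.h>

/* simplified stand-in for struct vmem_altmap */
struct altmap_example {
	unsigned long base_pfn;	/* first pfn of the resource */
	unsigned long reserve;	/* pfns reserved at the start */
	unsigned long free;	/* pfns backing the memmap itself */
};

int main(void)
{
	struct altmap_example a = { .base_pfn = 0x10000,
				    .reserve = 64, .free = 1984 };
	unsigned long end_pfn = 0x10000 + 0x8000; /* 32768-pfn resource */

	/* as in memmap_init_zone_device(): skip reserve + free pfns */
	unsigned long start_pfn = a.base_pfn + a.reserve + a.free;
	unsigned long nr_pages = end_pfn - start_pfn;

	/* only these pages actually get their memmap initialized */
	assert(nr_pages == 0x8000 - 64 - 1984);
	return 0;
}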
mx1.redhat.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2388938AbfJAOlP (ORCPT ); Tue, 1 Oct 2019 10:41:15 -0400 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com [10.5.11.14]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.redhat.com (Postfix) with ESMTPS id 84E4E309B6C4; Tue, 1 Oct 2019 14:41:14 +0000 (UTC) Received: from t460s.redhat.com (ovpn-116-54.ams2.redhat.com [10.36.116.54]) by smtp.corp.redhat.com (Postfix) with ESMTP id B78395D9CC; Tue, 1 Oct 2019 14:41:11 +0000 (UTC) From: David Hildenbrand To: linux-kernel@vger.kernel.org Cc: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, linux-ia64@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, David Hildenbrand , Andrew Morton , Oscar Salvador , Michal Hocko , Pavel Tatashin , Dan Williams , Wei Yang , "Aneesh Kumar K . V" Subject: [PATCH v5 03/10] mm/memory_hotplug: Don't access uninitialized memmaps in shrink_pgdat_span() Date: Tue, 1 Oct 2019 16:40:04 +0200 Message-Id: <20191001144011.3801-4-david@redhat.com> In-Reply-To: <20191001144011.3801-1-david@redhat.com> References: <20191001144011.3801-1-david@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.16 (mx1.redhat.com [10.5.110.47]); Tue, 01 Oct 2019 14:41:14 +0000 (UTC) Sender: linux-sh-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org We might use the nid of memmaps that were never initialized. For example, if the memmap was poisoned, we will crash the kernel in pfn_to_nid() right now. Let's use the calculated boundaries of the separate zones instead. This now also avoids having to iterate over a whole bunch of subsections again, after shrinking one zone. Before commit d0dc12e86b31 ("mm/memory_hotplug: optimize memory hotplug"), the memmap was initialized to 0 and the node was set to the right value. After that commit, the node might be garbage. We'll have to fix shrink_zone_span() next. Cc: Andrew Morton Cc: Oscar Salvador Cc: David Hildenbrand Cc: Michal Hocko Cc: Pavel Tatashin Cc: Dan Williams Cc: Wei Yang Fixes: d0dc12e86b31 ("mm/memory_hotplug: optimize memory hotplug") Reported-by: Aneesh Kumar K.V Signed-off-by: David Hildenbrand --- mm/memory_hotplug.c | 72 ++++++++++----------------------------------- 1 file changed, 15 insertions(+), 57 deletions(-) diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c index 680b4b3e57d9..86b4dc18e831 100644 --- a/mm/memory_hotplug.c +++ b/mm/memory_hotplug.c @@ -436,67 +436,25 @@ static void shrink_zone_span(struct zone *zone, unsigned long start_pfn, zone_span_writeunlock(zone); } -static void shrink_pgdat_span(struct pglist_data *pgdat, - unsigned long start_pfn, unsigned long end_pfn) +static void update_pgdat_span(struct pglist_data *pgdat) { - unsigned long pgdat_start_pfn = pgdat->node_start_pfn; - unsigned long p = pgdat_end_pfn(pgdat); /* pgdat_end_pfn namespace clash */ - unsigned long pgdat_end_pfn = p; - unsigned long pfn; - int nid = pgdat->node_id; - - if (pgdat_start_pfn == start_pfn) { - /* - * If the section is smallest section in the pgdat, it need - * shrink pgdat->node_start_pfn and pgdat->node_spanned_pages. - * In this case, we find second smallest valid mem_section - * for shrinking zone. 
- */ - pfn = find_smallest_section_pfn(nid, NULL, end_pfn, - pgdat_end_pfn); - if (pfn) { - pgdat->node_start_pfn = pfn; - pgdat->node_spanned_pages = pgdat_end_pfn - pfn; - } - } else if (pgdat_end_pfn == end_pfn) { - /* - * If the section is biggest section in the pgdat, it need - * shrink pgdat->node_spanned_pages. - * In this case, we find second biggest valid mem_section for - * shrinking zone. - */ - pfn = find_biggest_section_pfn(nid, NULL, pgdat_start_pfn, - start_pfn); - if (pfn) - pgdat->node_spanned_pages = pfn - pgdat_start_pfn + 1; - } - - /* - * If the section is not biggest or smallest mem_section in the pgdat, - * it only creates a hole in the pgdat. So in this case, we need not - * change the pgdat. - * But perhaps, the pgdat has only hole data. Thus it check the pgdat - * has only hole or not. - */ - pfn = pgdat_start_pfn; - for (; pfn < pgdat_end_pfn; pfn += PAGES_PER_SUBSECTION) { - if (unlikely(!pfn_valid(pfn))) - continue; - - if (pfn_to_nid(pfn) != nid) - continue; + unsigned long node_start_pfn = 0, node_end_pfn = 0; + struct zone *zone; - /* Skip range to be removed */ - if (pfn >= start_pfn && pfn < end_pfn) - continue; + for (zone = pgdat->node_zones; + zone < pgdat->node_zones + MAX_NR_ZONES; zone++) { + unsigned long zone_end_pfn = zone->zone_start_pfn + + zone->spanned_pages; - /* If we find valid section, we have nothing to do */ - return; + /* No need to lock the zones, they can't change. */ + if (zone_end_pfn > node_end_pfn) + node_end_pfn = zone_end_pfn; + if (zone->zone_start_pfn < node_start_pfn) + node_start_pfn = zone->zone_start_pfn; } - /* The pgdat has no valid section */ - pgdat->node_start_pfn = 0; - pgdat->node_spanned_pages = 0; + pgdat->node_start_pfn = node_start_pfn; + pgdat->node_spanned_pages = node_end_pfn - node_start_pfn; } static void __remove_zone(struct zone *zone, unsigned long start_pfn, @@ -507,7 +465,7 @@ static void __remove_zone(struct zone *zone, unsigned long start_pfn, pgdat_resize_lock(zone->zone_pgdat, &flags); shrink_zone_span(zone, start_pfn, start_pfn + nr_pages); - shrink_pgdat_span(pgdat, start_pfn, start_pfn + nr_pages); + update_pgdat_span(pgdat); pgdat_resize_unlock(zone->zone_pgdat, &flags); } From patchwork Tue Oct 1 14:40:05 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Hildenbrand X-Patchwork-Id: 11168957 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 8F4B315AB for ; Tue, 1 Oct 2019 14:41:22 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 783BE2190F for ; Tue, 1 Oct 2019 14:41:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2389238AbfJAOlS (ORCPT ); Tue, 1 Oct 2019 10:41:18 -0400 Received: from mx1.redhat.com ([209.132.183.28]:55784 "EHLO mx1.redhat.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727051AbfJAOlS (ORCPT ); Tue, 1 Oct 2019 10:41:18 -0400 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com [10.5.11.14]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.redhat.com (Postfix) with ESMTPS id 813722EF16A; Tue, 1 Oct 2019 14:41:17 +0000 (UTC) Received: from t460s.redhat.com (ovpn-116-54.ams2.redhat.com [10.36.116.54]) by smtp.corp.redhat.com (Postfix) with ESMTP id D83F15D9C9; Tue, 1 Oct 2019 14:41:14 
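update_pgdat_span() derives the node span as the union of its zone
spans. The same computation in a standalone toy form (values invented;
the seeding of node_start_pfn is written out explicitly here):

#include <stdio.h>

struct zone_span { unsigned long start_pfn, spanned_pages; };

int main(void)
{
	/* two populated zones of one node, e.g. ZONE_DMA32 + ZONE_NORMAL */
	struct zone_span zones[2] = { { 0x1000, 0x3000 }, { 0x4000, 0xc000 } };
	unsigned long node_start_pfn = 0, node_end_pfn = 0;

	for (int i = 0; i < 2; i++) {
		unsigned long zone_end_pfn = zones[i].start_pfn +
					     zones[i].spanned_pages;

		if (zone_end_pfn > node_end_pfn)
			node_end_pfn = zone_end_pfn;
		if (!node_start_pfn || zones[i].start_pfn < node_start_pfn)
			node_start_pfn = zones[i].start_pfn;
	}

	/* node spans [0x1000, 0x10000), i.e. 0xf000 pages */
	printf("node: start=%#lx spanned=%#lx\n",
	       node_start_pfn, node_end_pfn - node_start_pfn);
	return 0;
}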
From patchwork Tue Oct 1 14:40:05 2019
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 11168957
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
    linux-ia64@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
    David Hildenbrand, Andrew Morton, Oscar Salvador, Michal Hocko,
    Pavel Tatashin, Dan Williams, "Aneesh Kumar K.V"
Subject: [PATCH v5 04/10] mm/memory_hotplug: Don't access uninitialized memmaps in shrink_zone_span()
Date: Tue, 1 Oct 2019 16:40:05 +0200
Message-Id: <20191001144011.3801-5-david@redhat.com>
In-Reply-To: <20191001144011.3801-1-david@redhat.com>
References: <20191001144011.3801-1-david@redhat.com>

Let's limit shrinking to !ZONE_DEVICE so we can fix the current code.
We should never try to touch the memmap of offline sections where we
could have uninitialized memmaps and could trigger BUGs when calling
page_to_nid() on poisoned pages.

There is no reliable way to distinguish an uninitialized memmap from an
initialized memmap that belongs to ZONE_DEVICE, as we don't have
anything like SECTION_IS_ONLINE that we could use the way
pfn_to_online_section() does for !ZONE_DEVICE memory. E.g.,
set_zone_contiguous() similarly relies on pfn_to_online_section() and
will therefore never mark a ZONE_DEVICE zone contiguous.

No longer shrinking ZONE_DEVICE zones therefore results in no
observable changes, besides /proc/zoneinfo indicating different
boundaries - something we can live with.

Before commit d0dc12e86b31 ("mm/memory_hotplug: optimize memory
hotplug"), the memmap was initialized with 0 and the node with the
right value. So the zone might be wrong, but not garbage. After that
commit, both the zone and the node will be garbage when touching
uninitialized memmaps.

Cc: Andrew Morton
Cc: Oscar Salvador
Cc: David Hildenbrand
Cc: Michal Hocko
Cc: Pavel Tatashin
Cc: Dan Williams
Fixes: d0dc12e86b31 ("mm/memory_hotplug: optimize memory hotplug")
Reported-by: Aneesh Kumar K.V
Signed-off-by: David Hildenbrand
---
 mm/memory_hotplug.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 86b4dc18e831..afed8331332b 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -331,7 +331,7 @@ static unsigned long find_smallest_section_pfn(int nid, struct zone *zone,
 			     unsigned long end_pfn)
 {
 	for (; start_pfn < end_pfn; start_pfn += PAGES_PER_SUBSECTION) {
-		if (unlikely(!pfn_valid(start_pfn)))
+		if (unlikely(!pfn_to_online_page(start_pfn)))
 			continue;
 
 		if (unlikely(pfn_to_nid(start_pfn) != nid))
@@ -356,7 +356,7 @@ static unsigned long find_biggest_section_pfn(int nid, struct zone *zone,
 	/* pfn is the end pfn of a memory section. */
 	pfn = end_pfn - 1;
 	for (; pfn >= start_pfn; pfn -= PAGES_PER_SUBSECTION) {
-		if (unlikely(!pfn_valid(pfn)))
+		if (unlikely(!pfn_to_online_page(pfn)))
 			continue;
 
 		if (unlikely(pfn_to_nid(pfn) != nid))
@@ -415,7 +415,7 @@ static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
 	 */
 	pfn = zone_start_pfn;
 	for (; pfn < zone_end_pfn; pfn += PAGES_PER_SUBSECTION) {
-		if (unlikely(!pfn_valid(pfn)))
+		if (unlikely(!pfn_to_online_page(pfn)))
 			continue;
 
 		if (page_zone(pfn_to_page(pfn)) != zone)
@@ -463,6 +463,14 @@ static void __remove_zone(struct zone *zone, unsigned long start_pfn,
 	struct pglist_data *pgdat = zone->zone_pgdat;
 	unsigned long flags;
 
+	/*
+	 * Zone shrinking code cannot properly deal with ZONE_DEVICE. So
+	 * we will not try to shrink the zones - which is okay as
+	 * set_zone_contiguous() cannot deal with ZONE_DEVICE either way.
+	 */
+	if (zone_idx(zone) == ZONE_DEVICE)
+		return;
+
 	pgdat_resize_lock(zone->zone_pgdat, &flags);
 	shrink_zone_span(zone, start_pfn, start_pfn + nr_pages);
 	update_pgdat_span(pgdat);
 	pgdat_resize_unlock(zone->zone_pgdat, &flags);
 }
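pfn_to_online_page() only returns a page for sections that are marked
SECTION_IS_ONLINE - which ZONE_DEVICE sections never are. Roughly, as a
simplified sketch of the real helper in include/linux/memory_hotplug.h
(not compilable outside the kernel, details elided):

/* sketch, not the verbatim kernel helper */
static inline struct page *pfn_to_online_page_sketch(unsigned long pfn)
{
	unsigned long nr = pfn_to_section_nr(pfn);

	/*
	 * ZONE_DEVICE sections get a memmap but are never marked online,
	 * so this returns NULL for them as well as for plain holes.
	 */
	if (nr < NR_MEM_SECTIONS && online_section_nr(nr))
		return pfn_to_page(pfn);
	return NULL;
}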
From patchwork Tue Oct 1 14:40:07 2019
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 11168963
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
    linux-ia64@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
    David Hildenbrand, Andrew Morton, Oscar Salvador, Michal Hocko,
    Pavel Tatashin, Dan Williams
Subject: [PATCH v5 06/10] mm/memory_hotplug: Poison memmap in remove_pfn_range_from_zone()
Date: Tue, 1 Oct 2019 16:40:07 +0200
Message-Id: <20191001144011.3801-7-david@redhat.com>
In-Reply-To: <20191001144011.3801-1-david@redhat.com>
References: <20191001144011.3801-1-david@redhat.com>

Let's poison the pages similarly to when adding new memory in
sparse_add_section(). Also, call remove_pfn_range_from_zone() from
memunmap_pages(), so we can poison the memmap from there as well.

While at it, calculate the pfn in memunmap_pages() only once.

Cc: Andrew Morton
Cc: David Hildenbrand
Cc: Oscar Salvador
Cc: Michal Hocko
Cc: Pavel Tatashin
Cc: Dan Williams
Signed-off-by: David Hildenbrand
---
 mm/memory_hotplug.c | 3 +++
 mm/memremap.c       | 2 ++
 2 files changed, 5 insertions(+)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index cef909ebd807..640309236a58 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -464,6 +464,9 @@ void __ref remove_pfn_range_from_zone(struct zone *zone,
 	struct pglist_data *pgdat = zone->zone_pgdat;
 	unsigned long flags;
 
+	/* Poison struct pages because they are now uninitialized again. */
+	page_init_poison(pfn_to_page(start_pfn), sizeof(struct page) * nr_pages);
+
 	/*
 	 * Zone shrinking code cannot properly deal with ZONE_DEVICE. So
 	 * we will not try to shrink the zones - which is okay as
diff --git a/mm/memremap.c b/mm/memremap.c
index 734afeaad811..371939f92b69 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -139,6 +139,8 @@ void memunmap_pages(struct dev_pagemap *pgmap)
 	nid = page_to_nid(pfn_to_page(start_pfn));
 
 	mem_hotplug_begin();
+	remove_pfn_range_from_zone(page_zone(pfn_to_page(start_pfn)),
+				   start_pfn, nr_pages);
 	if (pgmap->type == MEMORY_DEVICE_PRIVATE) {
 		__remove_pages(start_pfn, nr_pages, NULL);
 	} else {
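page_init_poison() is the same helper sparse_add_section() uses;
conceptually it fills the memmap range with an all-ones poison pattern
so that later accessors trip PagePoisoned() checks (like the
page_to_nid() BUG from patch 1). A standalone sketch of the idea - not
the in-tree implementation, which lives in mm/debug.c and is gated on a
debug option:

#include <string.h>

struct page_stub { unsigned long flags; };	/* stand-in for struct page */

/* fill the memmap with -1 bytes; PagePoisoned() tests flags == -1 */
static void poison_memmap(struct page_stub *memmap, unsigned long nr_pages)
{
	memset(memmap, -1, nr_pages * sizeof(*memmap));
}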
From patchwork Tue Oct 1 14:40:08 2019
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 11168983
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
    linux-ia64@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
    David Hildenbrand, Andrew Morton, Oscar Salvador, Michal Hocko,
    Pavel Tatashin, Dan Williams, Wei Yang
Subject: [PATCH v5 07/10] mm/memory_hotplug: We always have a zone in find_(smallest|biggest)_section_pfn
Date: Tue, 1 Oct 2019 16:40:08 +0200
Message-Id: <20191001144011.3801-8-david@redhat.com>
In-Reply-To: <20191001144011.3801-1-david@redhat.com>
References: <20191001144011.3801-1-david@redhat.com>

With shrink_pgdat_span() out of the way, we now always have a valid
zone.

Cc: Andrew Morton
Cc: Oscar Salvador
Cc: David Hildenbrand
Cc: Michal Hocko
Cc: Pavel Tatashin
Cc: Dan Williams
Cc: Wei Yang
Signed-off-by: David Hildenbrand
---
 mm/memory_hotplug.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 640309236a58..273833612774 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -337,7 +337,7 @@ static unsigned long find_smallest_section_pfn(int nid, struct zone *zone,
 		if (unlikely(pfn_to_nid(start_pfn) != nid))
 			continue;
 
-		if (zone && zone != page_zone(pfn_to_page(start_pfn)))
+		if (zone != page_zone(pfn_to_page(start_pfn)))
 			continue;
 
 		return start_pfn;
@@ -362,7 +362,7 @@ static unsigned long find_biggest_section_pfn(int nid, struct zone *zone,
 		if (unlikely(pfn_to_nid(pfn) != nid))
 			continue;
 
-		if (zone && zone != page_zone(pfn_to_page(pfn)))
+		if (zone != page_zone(pfn_to_page(pfn)))
 			continue;
 
 		return pfn;
From patchwork Tue Oct 1 14:40:09 2019
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 11168971
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
    linux-ia64@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
    David Hildenbrand, Andrew Morton, Oscar Salvador, Michal Hocko,
    Pavel Tatashin, Dan Williams, Wei Yang
Subject: [PATCH v5 08/10] mm/memory_hotplug: Don't check for "all holes" in shrink_zone_span()
Date: Tue, 1 Oct 2019 16:40:09 +0200
Message-Id: <20191001144011.3801-9-david@redhat.com>
In-Reply-To: <20191001144011.3801-1-david@redhat.com>
References: <20191001144011.3801-1-david@redhat.com>

If we have holes, the holes will automatically get detected and removed
once we remove the next bigger/smaller section. The extra checks can go.

Cc: Andrew Morton
Cc: Oscar Salvador
Cc: Michal Hocko
Cc: David Hildenbrand
Cc: Pavel Tatashin
Cc: Dan Williams
Cc: Wei Yang
Signed-off-by: David Hildenbrand
---
 mm/memory_hotplug.c | 34 +++++++---------------------------
 1 file changed, 7 insertions(+), 27 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 273833612774..22b3623768c8 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -393,6 +393,9 @@ static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
 		if (pfn) {
 			zone->zone_start_pfn = pfn;
 			zone->spanned_pages = zone_end_pfn - pfn;
+		} else {
+			zone->zone_start_pfn = 0;
+			zone->spanned_pages = 0;
 		}
 	} else if (zone_end_pfn == end_pfn) {
 		/*
@@ -405,34 +408,11 @@ static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
 					       start_pfn);
 		if (pfn)
 			zone->spanned_pages = pfn - zone_start_pfn + 1;
+		else {
+			zone->zone_start_pfn = 0;
+			zone->spanned_pages = 0;
+		}
 	}
-
-	/*
-	 * The section is not biggest or smallest mem_section in the zone, it
-	 * only creates a hole in the zone. So in this case, we need not
-	 * change the zone. But perhaps, the zone has only hole data. Thus
-	 * it check the zone has only hole or not.
-	 */
-	pfn = zone_start_pfn;
-	for (; pfn < zone_end_pfn; pfn += PAGES_PER_SUBSECTION) {
-		if (unlikely(!pfn_to_online_page(pfn)))
-			continue;
-
-		if (page_zone(pfn_to_page(pfn)) != zone)
-			continue;
-
-		/* Skip range to be removed */
-		if (pfn >= start_pfn && pfn < end_pfn)
-			continue;
-
-		/* If we find valid section, we have nothing to do */
-		zone_span_writeunlock(zone);
-		return;
-	}
-
-	/* The zone has no valid section */
-	zone->zone_start_pfn = 0;
-	zone->spanned_pages = 0;
 	zone_span_writeunlock(zone);
 }
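As an illustration of why the hole check is redundant: a hole in the
middle of a zone never changes the boundaries, and once a boundary
section is removed, the find_(smallest|biggest)_section_pfn() walk skips
the hole anyway. A toy model of that walk (section states invented for
the demo):

#include <stdio.h>

/* toy model: one flag per memory section, 1 = online, 0 = hole/offline */
static int online[8] = { 1, 0, 1, 1, 0, 0, 0, 0 };

/* analogous to find_smallest_section_pfn(): first online section in range */
static int find_smallest(int from, int to)
{
	for (int s = from; s < to; s++)
		if (online[s])
			return s;
	return -1;
}

int main(void)
{
	int zone_end = 4;

	/*
	 * Section 1 is already a hole, yet the zone still spans [0, 4).
	 * Only when boundary section 0 goes away does the walk re-derive
	 * the start - skipping the hole at 1 as a side effect.
	 */
	online[0] = 0;
	printf("new zone start: section %d\n", find_smallest(1, zone_end));
	return 0;
}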
<20191001144011.3801-10-david@redhat.com> In-Reply-To: <20191001144011.3801-1-david@redhat.com> References: <20191001144011.3801-1-david@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.16 (mx1.redhat.com [10.5.110.39]); Tue, 01 Oct 2019 14:41:38 +0000 (UTC) Sender: linux-sh-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org Get rid of the unnecessary local variables. Cc: Andrew Morton Cc: Oscar Salvador Cc: David Hildenbrand Cc: Michal Hocko Cc: Pavel Tatashin Cc: Dan Williams Cc: Wei Yang Signed-off-by: David Hildenbrand --- mm/memory_hotplug.c | 15 ++++++--------- 1 file changed, 6 insertions(+), 9 deletions(-) diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c index 22b3623768c8..ffb514e3b090 100644 --- a/mm/memory_hotplug.c +++ b/mm/memory_hotplug.c @@ -374,14 +374,11 @@ static unsigned long find_biggest_section_pfn(int nid, struct zone *zone, static void shrink_zone_span(struct zone *zone, unsigned long start_pfn, unsigned long end_pfn) { - unsigned long zone_start_pfn = zone->zone_start_pfn; - unsigned long z = zone_end_pfn(zone); /* zone_end_pfn namespace clash */ - unsigned long zone_end_pfn = z; unsigned long pfn; int nid = zone_to_nid(zone); zone_span_writelock(zone); - if (zone_start_pfn == start_pfn) { + if (zone->zone_start_pfn == start_pfn) { /* * If the section is smallest section in the zone, it need * shrink zone->zone_start_pfn and zone->zone_spanned_pages. @@ -389,25 +386,25 @@ static void shrink_zone_span(struct zone *zone, unsigned long start_pfn, * for shrinking zone. */ pfn = find_smallest_section_pfn(nid, zone, end_pfn, - zone_end_pfn); + zone_end_pfn(zone)); if (pfn) { + zone->spanned_pages = zone_end_pfn(zone) - pfn; zone->zone_start_pfn = pfn; - zone->spanned_pages = zone_end_pfn - pfn; } else { zone->zone_start_pfn = 0; zone->spanned_pages = 0; } - } else if (zone_end_pfn == end_pfn) { + } else if (zone_end_pfn(zone) == end_pfn) { /* * If the section is biggest section in the zone, it need * shrink zone->spanned_pages. * In this case, we find second biggest valid mem_section for * shrinking zone. 
*/ - pfn = find_biggest_section_pfn(nid, zone, zone_start_pfn, + pfn = find_biggest_section_pfn(nid, zone, zone->zone_start_pfn, start_pfn); if (pfn) - zone->spanned_pages = pfn - zone_start_pfn + 1; + zone->spanned_pages = pfn - zone->zone_start_pfn + 1; else { zone->zone_start_pfn = 0; zone->spanned_pages = 0; From patchwork Tue Oct 1 14:40:11 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Hildenbrand X-Patchwork-Id: 11168979 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D2AA6912 for ; Tue, 1 Oct 2019 14:41:46 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id B88A120842 for ; Tue, 1 Oct 2019 14:41:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2389318AbfJAOlm (ORCPT ); Tue, 1 Oct 2019 10:41:42 -0400 Received: from mx1.redhat.com ([209.132.183.28]:47604 "EHLO mx1.redhat.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2389116AbfJAOll (ORCPT ); Tue, 1 Oct 2019 10:41:41 -0400 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com [10.5.11.14]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.redhat.com (Postfix) with ESMTPS id 9D28D757C7; Tue, 1 Oct 2019 14:41:41 +0000 (UTC) Received: from t460s.redhat.com (ovpn-116-54.ams2.redhat.com [10.36.116.54]) by smtp.corp.redhat.com (Postfix) with ESMTP id 04D405D9C9; Tue, 1 Oct 2019 14:41:38 +0000 (UTC) From: David Hildenbrand To: linux-kernel@vger.kernel.org Cc: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, linux-ia64@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, David Hildenbrand , Andrew Morton , Oscar Salvador , Michal Hocko , Pavel Tatashin , Dan Williams , Wei Yang Subject: [PATCH v5 10/10] mm/memory_hotplug: Cleanup __remove_pages() Date: Tue, 1 Oct 2019 16:40:11 +0200 Message-Id: <20191001144011.3801-11-david@redhat.com> In-Reply-To: <20191001144011.3801-1-david@redhat.com> References: <20191001144011.3801-1-david@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.16 (mx1.redhat.com [10.5.110.26]); Tue, 01 Oct 2019 14:41:41 +0000 (UTC) Sender: linux-sh-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org Let's drop the basically unused section stuff and simplify. Also, let's use a shorter variant to calculate the number of pages to the next section boundary. 
Cc: Andrew Morton
Cc: Oscar Salvador
Cc: Michal Hocko
Cc: Pavel Tatashin
Cc: Dan Williams
Cc: Wei Yang
Signed-off-by: David Hildenbrand
---
 mm/memory_hotplug.c | 17 ++++++-----------
 1 file changed, 6 insertions(+), 11 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index ffb514e3b090..0fa99e5a657e 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -488,25 +488,20 @@ static void __remove_section(unsigned long pfn, unsigned long nr_pages,
 void __remove_pages(unsigned long pfn, unsigned long nr_pages,
 		    struct vmem_altmap *altmap)
 {
+	const unsigned long end_pfn = pfn + nr_pages;
+	unsigned long cur_nr_pages;
 	unsigned long map_offset = 0;
-	unsigned long nr, start_sec, end_sec;
 
 	map_offset = vmem_altmap_offset(altmap);
 
 	if (check_pfn_span(pfn, nr_pages, "remove"))
 		return;
 
-	start_sec = pfn_to_section_nr(pfn);
-	end_sec = pfn_to_section_nr(pfn + nr_pages - 1);
-	for (nr = start_sec; nr <= end_sec; nr++) {
-		unsigned long pfns;
-
+	for (; pfn < end_pfn; pfn += cur_nr_pages) {
 		cond_resched();
-		pfns = min(nr_pages, PAGES_PER_SECTION
-				- (pfn & ~PAGE_SECTION_MASK));
-		__remove_section(pfn, pfns, map_offset, altmap);
-		pfn += pfns;
-		nr_pages -= pfns;
+		/* Select all remaining pages up to the next section boundary */
+		cur_nr_pages = min(end_pfn - pfn, -(pfn | PAGE_SECTION_MASK));
+		__remove_section(pfn, cur_nr_pages, map_offset, altmap);
 		map_offset = 0;
 	}
 }
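The shorter variant leans on unsigned two's-complement arithmetic: with
PAGE_SECTION_MASK = ~(PAGES_PER_SECTION - 1), the expression
-(pfn | PAGE_SECTION_MASK) equals PAGES_PER_SECTION -
(pfn & ~PAGE_SECTION_MASK), i.e. the number of pages up to the next
section boundary. A standalone check of that identity (the section size
is an invented demo value):

#include <assert.h>

#define PAGES_PER_SECTION	(1UL << 16)		/* demo value */
#define PAGE_SECTION_MASK	(~(PAGES_PER_SECTION - 1))

int main(void)
{
	for (unsigned long pfn = 0; pfn < (1UL << 18); pfn += 777) {
		unsigned long old_way = PAGES_PER_SECTION -
					(pfn & ~PAGE_SECTION_MASK);
		unsigned long new_way = -(pfn | PAGE_SECTION_MASK);

		/* both yield the pages left up to the next boundary */
		assert(old_way == new_way);
	}
	return 0;
}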