From patchwork Tue Aug 7 13:37:55 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Oscar Salvador
X-Patchwork-Id: 10558721
From: osalvador@techadventures.net
To: akpm@linux-foundation.org
Cc: mhocko@suse.com, dan.j.williams@intel.com, pasha.tatashin@oracle.com,
 jglisse@redhat.com, david@redhat.com, yasu.isimatu@gmail.com,
 logang@deltatee.com, dave.jiang@intel.com, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Oscar Salvador
Subject: [RFC PATCH 1/3] mm/memory_hotplug: Add nid parameter to arch_remove_memory
Date: Tue, 7 Aug 2018 15:37:55 +0200
Message-Id: <20180807133757.18352-2-osalvador@techadventures.net>
X-Mailer: git-send-email 2.13.6
In-Reply-To: <20180807133757.18352-1-osalvador@techadventures.net>
References: <20180807133757.18352-1-osalvador@techadventures.net>

From: Oscar Salvador

This patch is only a preparation for the follow-up patches. The idea is to
remove the zone parameter from arch_remove_memory() and pass the nid instead.
The zone parameter was needed because further down the chain we call
__remove_zone(), which adjusts the spanned pages of a zone/node.

online_pages() increments the spanned pages of a zone/node, so for
consistency it is better to move __remove_zone() to offline_pages(). With
that, remove_memory() will only take care of removing the sections and
deleting the mappings.
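As an illustration only (this sketch is mine, not part of the patch): with
the zone parameter gone, a caller derives the nid from the owning device or
from the pages themselves. example_remove() below is hypothetical;
dev_to_node(), mem_hotplug_begin()/mem_hotplug_done() and the new
arch_remove_memory() signature are the ones the diff below actually uses:

	/* Hypothetical caller sketch against the post-patch signature.
	 * kernel/memremap.c obtains the nid the same way in the hunk
	 * below; page_to_nid(pfn_to_page(...)) would also work. */
	static void example_remove(struct device *dev, u64 start, u64 size)
	{
		int nid = dev_to_node(dev);

		mem_hotplug_begin();
		arch_remove_memory(nid, start, size, NULL);
		mem_hotplug_done();
	}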
Signed-off-by: Oscar Salvador
---
 arch/ia64/mm/init.c            | 2 +-
 arch/powerpc/mm/mem.c          | 2 +-
 arch/s390/mm/init.c            | 2 +-
 arch/sh/mm/init.c              | 2 +-
 arch/x86/mm/init_32.c          | 2 +-
 arch/x86/mm/init_64.c          | 2 +-
 include/linux/memory_hotplug.h | 2 +-
 kernel/memremap.c              | 4 +++-
 mm/hmm.c                       | 4 +++-
 mm/memory_hotplug.c            | 2 +-
 10 files changed, 14 insertions(+), 10 deletions(-)

diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c
index e6c6dfd98de2..bc5e15045cee 100644
--- a/arch/ia64/mm/init.c
+++ b/arch/ia64/mm/init.c
@@ -660,7 +660,7 @@ int arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap,
 }
 
 #ifdef CONFIG_MEMORY_HOTREMOVE
-int arch_remove_memory(u64 start, u64 size, struct vmem_altmap *altmap)
+int arch_remove_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap)
 {
 	unsigned long start_pfn = start >> PAGE_SHIFT;
 	unsigned long nr_pages = size >> PAGE_SHIFT;
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 5c8530d0c611..9e17d57a9948 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -139,7 +139,7 @@ int __meminit arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap *
 }
 
 #ifdef CONFIG_MEMORY_HOTREMOVE
-int __meminit arch_remove_memory(u64 start, u64 size, struct vmem_altmap *altmap)
+int __meminit arch_remove_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap)
 {
 	unsigned long start_pfn = start >> PAGE_SHIFT;
 	unsigned long nr_pages = size >> PAGE_SHIFT;
diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
index 3fa3e5323612..bc49b560625e 100644
--- a/arch/s390/mm/init.c
+++ b/arch/s390/mm/init.c
@@ -240,7 +240,7 @@ int arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap,
 }
 
 #ifdef CONFIG_MEMORY_HOTREMOVE
-int arch_remove_memory(u64 start, u64 size, struct vmem_altmap *altmap)
+int arch_remove_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap)
 {
 	/*
 	 * There is no hardware or firmware interface which could trigger a
diff --git a/arch/sh/mm/init.c b/arch/sh/mm/init.c
index 4034035fbede..55c740ab861b 100644
--- a/arch/sh/mm/init.c
+++ b/arch/sh/mm/init.c
@@ -454,7 +454,7 @@ EXPORT_SYMBOL_GPL(memory_add_physaddr_to_nid);
 #endif
 
 #ifdef CONFIG_MEMORY_HOTREMOVE
-int arch_remove_memory(u64 start, u64 size, struct vmem_altmap *altmap)
+int arch_remove_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap)
 {
 	unsigned long start_pfn = PFN_DOWN(start);
 	unsigned long nr_pages = size >> PAGE_SHIFT;
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index 979e0a02cbe1..9fa503f2f56c 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -861,7 +861,7 @@ int arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap,
 }
 
 #ifdef CONFIG_MEMORY_HOTREMOVE
-int arch_remove_memory(u64 start, u64 size, struct vmem_altmap *altmap)
+int arch_remove_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap)
 {
 	unsigned long start_pfn = start >> PAGE_SHIFT;
 	unsigned long nr_pages = size >> PAGE_SHIFT;
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 9b19f9a8948e..26728df07072 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1148,7 +1148,7 @@ kernel_physical_mapping_remove(unsigned long start, unsigned long end)
 	remove_pagetable(start, end, true, NULL);
 }
 
-int __ref arch_remove_memory(u64 start, u64 size, struct vmem_altmap *altmap)
+int __ref arch_remove_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap)
 {
 	unsigned long start_pfn = start >> PAGE_SHIFT;
 	unsigned long nr_pages = size >> PAGE_SHIFT;
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 34a28227068d..c68b03fd87bd 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -107,7 +107,7 @@ static inline bool movable_node_is_enabled(void)
 }
 
 #ifdef CONFIG_MEMORY_HOTREMOVE
-extern int arch_remove_memory(u64 start, u64 size,
+extern int arch_remove_memory(int nid, u64 start, u64 size,
 		struct vmem_altmap *altmap);
 extern int __remove_pages(struct zone *zone, unsigned long start_pfn,
 	unsigned long nr_pages, struct vmem_altmap *altmap);
diff --git a/kernel/memremap.c b/kernel/memremap.c
index 5b8600d39931..7a832b844f24 100644
--- a/kernel/memremap.c
+++ b/kernel/memremap.c
@@ -121,6 +121,7 @@ static void devm_memremap_pages_release(void *data)
 	struct resource *res = &pgmap->res;
 	resource_size_t align_start, align_size;
 	unsigned long pfn;
+	int nid;
 
 	for_each_device_pfn(pfn, pgmap)
 		put_page(pfn_to_page(pfn));
@@ -134,9 +135,10 @@ static void devm_memremap_pages_release(void *data)
 	align_start = res->start & ~(SECTION_SIZE - 1);
 	align_size = ALIGN(res->start + resource_size(res), SECTION_SIZE)
 		- align_start;
+	nid = dev_to_node(dev);
 
 	mem_hotplug_begin();
-	arch_remove_memory(align_start, align_size, pgmap->altmap_valid ?
+	arch_remove_memory(nid, align_start, align_size, pgmap->altmap_valid ?
 			&pgmap->altmap : NULL);
 	kasan_remove_zero_shadow(__va(align_start), align_size);
 	mem_hotplug_done();
diff --git a/mm/hmm.c b/mm/hmm.c
index c968e49f7a0c..21787e480b4a 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -995,6 +995,7 @@ static void hmm_devmem_release(struct device *dev, void *data)
 	unsigned long start_pfn, npages;
 	struct zone *zone;
 	struct page *page;
+	int nid;
 
 	if (percpu_ref_tryget_live(&devmem->ref)) {
 		dev_WARN(dev, "%s: page mapping is still live!\n", __func__);
@@ -1007,12 +1008,13 @@ static void hmm_devmem_release(struct device *dev, void *data)
 
 	page = pfn_to_page(start_pfn);
 	zone = page_zone(page);
+	nid = zone->zone_pgdat->node_id;
 
 	mem_hotplug_begin();
 	if (resource->desc == IORES_DESC_DEVICE_PRIVATE_MEMORY)
 		__remove_pages(zone, start_pfn, npages, NULL);
 	else
-		arch_remove_memory(start_pfn << PAGE_SHIFT,
+		arch_remove_memory(nid, start_pfn << PAGE_SHIFT,
 				npages << PAGE_SHIFT, NULL);
 	mem_hotplug_done();
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 9eea6e809a4e..9bd629944c91 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1895,7 +1895,7 @@ void __ref remove_memory(int nid, u64 start, u64 size)
 	memblock_free(start, size);
 	memblock_remove(start, size);
 
-	arch_remove_memory(start, size, NULL);
+	arch_remove_memory(nid, start, size, NULL);
 
 	try_offline_node(nid);
From patchwork Tue Aug 7 13:37:56 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Oscar Salvador
X-Patchwork-Id: 10558725
From: osalvador@techadventures.net
To: akpm@linux-foundation.org
Cc: mhocko@suse.com, dan.j.williams@intel.com, pasha.tatashin@oracle.com,
 jglisse@redhat.com, david@redhat.com, yasu.isimatu@gmail.com,
 logang@deltatee.com, dave.jiang@intel.com, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Oscar Salvador
Subject: [RFC PATCH 2/3] mm/memory_hotplug: Create __shrink_pages and move it to offline_pages
Date: Tue, 7 Aug 2018 15:37:56 +0200
Message-Id: <20180807133757.18352-3-osalvador@techadventures.net>
X-Mailer: git-send-email 2.13.6
In-Reply-To: <20180807133757.18352-1-osalvador@techadventures.net>
References: <20180807133757.18352-1-osalvador@techadventures.net>

From: Oscar Salvador

Currently, we decrement the zone/node spanned_pages when we __remove__ the
memory, which is not really great. Spanned pages are incremented in the
online_pages() path, so for symmetry they should be decremented in
offline_pages(). Besides making the core more consistent, this helps fix the
bug explained in the cover letter, since from now on we will not touch any
pages in the remove_memory path.

This patch does all the work to accomplish that. It removes __remove_zone()
and creates __shrink_pages(), which does pretty much the same. It also
changes find_smallest/biggest_section_pfn() to check for !online_section(ms)
instead of !valid_section(ms), as we offline the section in:

	__offline_pages
	  offline_isolated_pages
	    offline_isolated_pages_cb
	      __offline_isolated_pages
	        offline_mem_sections

right before shrinking the pages.
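To make the intended split concrete, here is a rough sketch of mine (not
code from this patch) of who does what afterwards; offline_side() and
remove_side() are made-up names, while __shrink_pages() and __remove_pages()
are the helpers the diff below introduces and reworks:

	/* Offline path: all span/present-pages accounting happens here,
	 * mirroring how online_pages() grows the span. */
	static void offline_side(struct zone *zone, unsigned long valid_start,
				 unsigned long valid_end, long offlined_pages)
	{
		__shrink_pages(zone, valid_start, valid_end, offlined_pages);
	}

	/* Remove path: only section removal and mapping teardown remain,
	 * so no struct page is touched anymore. */
	static int remove_side(int nid, unsigned long start_pfn,
			       unsigned long nr_pages, struct vmem_altmap *altmap)
	{
		return __remove_pages(nid, start_pfn, nr_pages, altmap);
	}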
Signed-off-by: Oscar Salvador
---
 arch/ia64/mm/init.c            |  4 +--
 arch/powerpc/mm/mem.c          | 11 +------
 arch/sh/mm/init.c              |  4 +--
 arch/x86/mm/init_32.c          |  4 +--
 arch/x86/mm/init_64.c          |  8 +-----
 include/linux/memory_hotplug.h |  6 ++--
 kernel/memremap.c              |  5 ++++
 mm/hmm.c                       |  2 +-
 mm/memory_hotplug.c            | 65 +++++++++++++++++++++---------------------
 mm/sparse.c                    |  4 +--
 10 files changed, 50 insertions(+), 63 deletions(-)

diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c
index bc5e15045cee..0029c4d3f574 100644
--- a/arch/ia64/mm/init.c
+++ b/arch/ia64/mm/init.c
@@ -664,11 +664,9 @@ int arch_remove_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap)
 {
 	unsigned long start_pfn = start >> PAGE_SHIFT;
 	unsigned long nr_pages = size >> PAGE_SHIFT;
-	struct zone *zone;
 	int ret;
 
-	zone = page_zone(pfn_to_page(start_pfn));
-	ret = __remove_pages(zone, start_pfn, nr_pages, altmap);
+	ret = __remove_pages(nid, start_pfn, nr_pages, altmap);
 	if (ret)
 		pr_warn("%s: Problem encountered in __remove_pages() as"
 			" ret=%d\n", __func__,  ret);
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 9e17d57a9948..12879191bc6b 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -143,18 +143,9 @@ int __meminit arch_remove_memory(int nid, u64 start, u64 size, struct vmem_altma
 {
 	unsigned long start_pfn = start >> PAGE_SHIFT;
 	unsigned long nr_pages = size >> PAGE_SHIFT;
-	struct page *page;
 	int ret;
 
-	/*
-	 * If we have an altmap then we need to skip over any reserved PFNs
-	 * when querying the zone.
-	 */
-	page = pfn_to_page(start_pfn);
-	if (altmap)
-		page += vmem_altmap_offset(altmap);
-
-	ret = __remove_pages(page_zone(page), start_pfn, nr_pages, altmap);
+	ret = __remove_pages(nid, start_pfn, nr_pages, altmap);
 	if (ret)
 		return ret;
diff --git a/arch/sh/mm/init.c b/arch/sh/mm/init.c
index 55c740ab861b..4f183857b522 100644
--- a/arch/sh/mm/init.c
+++ b/arch/sh/mm/init.c
@@ -458,11 +458,9 @@ int arch_remove_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap)
 {
 	unsigned long start_pfn = PFN_DOWN(start);
 	unsigned long nr_pages = size >> PAGE_SHIFT;
-	struct zone *zone;
 	int ret;
 
-	zone = page_zone(pfn_to_page(start_pfn));
-	ret = __remove_pages(zone, start_pfn, nr_pages, altmap);
+	ret = __remove_pages(nid, start_pfn, nr_pages, altmap);
 	if (unlikely(ret))
 		pr_warn("%s: Failed, __remove_pages() == %d\n", __func__,
 			ret);
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index 9fa503f2f56c..88909e88c873 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -865,10 +865,8 @@ int arch_remove_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap)
 {
 	unsigned long start_pfn = start >> PAGE_SHIFT;
 	unsigned long nr_pages = size >> PAGE_SHIFT;
-	struct zone *zone;
 
-	zone = page_zone(pfn_to_page(start_pfn));
-	return __remove_pages(zone, start_pfn, nr_pages, altmap);
+	return __remove_pages(nid, start_pfn, nr_pages, altmap);
 }
 #endif
 #endif
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 26728df07072..1aad5ea2e274 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1152,15 +1152,9 @@ int __ref arch_remove_memory(int nid, u64 start, u64 size, struct vmem_altmap *a
 {
 	unsigned long start_pfn = start >> PAGE_SHIFT;
 	unsigned long nr_pages = size >> PAGE_SHIFT;
-	struct page *page = pfn_to_page(start_pfn);
-	struct zone *zone;
 	int ret;
 
-	/* With altmap the first mapped page is offset from @start */
-	if (altmap)
-		page += vmem_altmap_offset(altmap);
-	zone = page_zone(page);
-	ret = __remove_pages(zone, start_pfn, nr_pages, altmap);
+	ret = __remove_pages(nid, start_pfn, nr_pages, altmap);
 	WARN_ON_ONCE(ret);
 
 	kernel_physical_mapping_remove(start, start + size);
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index c68b03fd87bd..eff460ba3ab1 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -109,8 +109,10 @@ static inline bool movable_node_is_enabled(void)
 #ifdef CONFIG_MEMORY_HOTREMOVE
 extern int arch_remove_memory(int nid, u64 start, u64 size,
 		struct vmem_altmap *altmap);
-extern int __remove_pages(struct zone *zone, unsigned long start_pfn,
+extern int __remove_pages(int nid, unsigned long start_pfn,
 	unsigned long nr_pages, struct vmem_altmap *altmap);
+extern void shrink_pages(struct zone *zone, unsigned long start_pfn,
+	unsigned long nr_pages);
 #endif /* CONFIG_MEMORY_HOTREMOVE */
 
 /* reasonably generic interface to expand the physical pages */
@@ -333,7 +335,7 @@ extern bool is_memblock_offlined(struct memory_block *mem);
 extern void remove_memory(int nid, u64 start, u64 size);
 extern int sparse_add_one_section(struct pglist_data *pgdat,
 		unsigned long start_pfn, struct vmem_altmap *altmap);
-extern void sparse_remove_one_section(struct zone *zone, struct mem_section *ms,
+extern void sparse_remove_one_section(int nid, struct mem_section *ms,
 		unsigned long map_offset, struct vmem_altmap *altmap);
 extern struct page *sparse_decode_mem_map(unsigned long coded_mem_map,
 					  unsigned long pnum);
diff --git a/kernel/memremap.c b/kernel/memremap.c
index 7a832b844f24..2057af5c1c4f 100644
--- a/kernel/memremap.c
+++ b/kernel/memremap.c
@@ -121,6 +121,7 @@ static void devm_memremap_pages_release(void *data)
 	struct resource *res = &pgmap->res;
 	resource_size_t align_start, align_size;
 	unsigned long pfn;
+	struct page *page;
 	int nid;
 
 	for_each_device_pfn(pfn, pgmap)
@@ -138,6 +139,10 @@ static void devm_memremap_pages_release(void *data)
 	nid = dev_to_node(dev);
 
 	mem_hotplug_begin();
+	page = pfn_to_page(pfn);
+	if (pgmap->altmap_valid)
+		page += vmem_altmap_offset(&pgmap->altmap);
+	shrink_pages(page_zone(page), pfn, align_size >> PAGE_SHIFT);
 	arch_remove_memory(nid, align_start, align_size, pgmap->altmap_valid ?
 			&pgmap->altmap : NULL);
 	kasan_remove_zero_shadow(__va(align_start), align_size);
diff --git a/mm/hmm.c b/mm/hmm.c
index 21787e480b4a..b2f2dcadb5fb 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -1012,7 +1012,7 @@ static void hmm_devmem_release(struct device *dev, void *data)
 
 	mem_hotplug_begin();
 	if (resource->desc == IORES_DESC_DEVICE_PRIVATE_MEMORY)
-		__remove_pages(zone, start_pfn, npages, NULL);
+		__remove_pages(nid, start_pfn, npages, NULL);
 	else
 		arch_remove_memory(nid, start_pfn << PAGE_SHIFT,
 				npages << PAGE_SHIFT, NULL);
 	mem_hotplug_done();
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 9bd629944c91..e33555651e46 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -320,12 +320,10 @@ static unsigned long find_smallest_section_pfn(int nid, struct zone *zone,
 				     unsigned long start_pfn,
 				     unsigned long end_pfn)
 {
-	struct mem_section *ms;
-
 	for (; start_pfn < end_pfn; start_pfn += PAGES_PER_SECTION) {
-		ms = __pfn_to_section(start_pfn);
+		struct mem_section *ms = __pfn_to_section(start_pfn);
 
-		if (unlikely(!valid_section(ms)))
+		if (unlikely(!online_section(ms)))
 			continue;
 
 		if (unlikely(pfn_to_nid(start_pfn) != nid))
@@ -345,15 +343,14 @@ static unsigned long find_biggest_section_pfn(int nid, struct zone *zone,
 				    unsigned long start_pfn,
 				    unsigned long end_pfn)
 {
-	struct mem_section *ms;
 	unsigned long pfn;
 
 	/* pfn is the end pfn of a memory section. */
 	pfn = end_pfn - 1;
 	for (; pfn >= start_pfn; pfn -= PAGES_PER_SECTION) {
-		ms = __pfn_to_section(pfn);
+		struct mem_section *ms = __pfn_to_section(pfn);
 
-		if (unlikely(!valid_section(ms)))
+		if (unlikely(!online_section(ms)))
 			continue;
 
 		if (unlikely(pfn_to_nid(pfn) != nid))
@@ -415,7 +412,7 @@ static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
 	for (; pfn < zone_end_pfn; pfn += PAGES_PER_SECTION) {
 		ms = __pfn_to_section(pfn);
 
-		if (unlikely(!valid_section(ms)))
+		if (unlikely(!online_section(ms)))
 			continue;
 
 		if (page_zone(pfn_to_page(pfn)) != zone)
@@ -483,7 +480,7 @@ static void shrink_pgdat_span(struct pglist_data *pgdat,
 	for (; pfn < pgdat_end_pfn; pfn += PAGES_PER_SECTION) {
 		ms = __pfn_to_section(pfn);
 
-		if (unlikely(!valid_section(ms)))
+		if (unlikely(!online_section(ms)))
 			continue;
 
 		if (pfn_to_nid(pfn) != nid)
@@ -502,19 +499,32 @@ static void shrink_pgdat_span(struct pglist_data *pgdat,
 	pgdat->node_spanned_pages = 0;
 }
 
-static void __remove_zone(struct zone *zone, unsigned long start_pfn)
+static void __shrink_pages(struct zone *zone, unsigned long start_pfn,
+				unsigned long end_pfn,
+				unsigned long offlined_pages)
 {
 	struct pglist_data *pgdat = zone->zone_pgdat;
 	int nr_pages = PAGES_PER_SECTION;
 	unsigned long flags;
+	unsigned long pfn;
 
-	pgdat_resize_lock(zone->zone_pgdat, &flags);
-	shrink_zone_span(zone, start_pfn, start_pfn + nr_pages);
-	shrink_pgdat_span(pgdat, start_pfn, start_pfn + nr_pages);
-	pgdat_resize_unlock(zone->zone_pgdat, &flags);
+	clear_zone_contiguous(zone);
+	pgdat_resize_lock(pgdat, &flags);
+	for(pfn = start_pfn; pfn < end_pfn; pfn += nr_pages) {
+		shrink_zone_span(zone, pfn, pfn + nr_pages);
+		shrink_pgdat_span(pgdat, pfn, pfn + nr_pages);
+	}
+	pgdat->node_present_pages -= offlined_pages;
+	pgdat_resize_unlock(pgdat, &flags);
+	set_zone_contiguous(zone);
 }
 
-static int __remove_section(struct zone *zone, struct mem_section *ms,
+void shrink_pages(struct zone *zone, unsigned long start_pfn, unsigned long nr_pages)
+{
+	__shrink_pages(zone, start_pfn, start_pfn + nr_pages, nr_pages);
+}
+
+static int __remove_section(int nid, struct mem_section *ms,
 		unsigned long map_offset, struct vmem_altmap *altmap)
 {
 	unsigned long start_pfn;
@@ -530,15 +540,14 @@ static int __remove_section(struct zone *zone, struct mem_section *ms,
 
 	scn_nr = __section_nr(ms);
 	start_pfn = section_nr_to_pfn((unsigned long)scn_nr);
-	__remove_zone(zone, start_pfn);
 
-	sparse_remove_one_section(zone, ms, map_offset, altmap);
+	sparse_remove_one_section(nid, ms, map_offset, altmap);
 	return 0;
 }
 
 /**
  * __remove_pages() - remove sections of pages from a zone
- * @zone: zone from which pages need to be removed
+ * @nid: node which pages belong to
  * @phys_start_pfn: starting pageframe (must be aligned to start of a section)
  * @nr_pages: number of pages to remove (must be multiple of section size)
  * @altmap: alternative device page map or %NULL if default memmap is used
@@ -548,7 +557,7 @@ static int __remove_section(struct zone *zone, struct mem_section *ms,
  * sure that pages are marked reserved and zones are adjust properly by
  * calling offline_pages().
  */
-int __remove_pages(struct zone *zone, unsigned long phys_start_pfn,
+int __remove_pages(int nid, unsigned long phys_start_pfn,
 		 unsigned long nr_pages, struct vmem_altmap *altmap)
 {
 	unsigned long i;
@@ -556,10 +565,9 @@ int __remove_pages(struct zone *zone, unsigned long phys_start_pfn,
 	int sections_to_remove, ret = 0;
 
 	/* In the ZONE_DEVICE case device driver owns the memory region */
-	if (is_dev_zone(zone)) {
-		if (altmap)
-			map_offset = vmem_altmap_offset(altmap);
-	} else {
+	if (altmap)
+		map_offset = vmem_altmap_offset(altmap);
+	else {
 		resource_size_t start, size;
 
 		start = phys_start_pfn << PAGE_SHIFT;
@@ -575,8 +583,6 @@ int __remove_pages(struct zone *zone, unsigned long phys_start_pfn,
 		}
 	}
 
-	clear_zone_contiguous(zone);
-
 	/*
 	 * We can only remove entire sections
 	 */
@@ -587,15 +593,13 @@ int __remove_pages(struct zone *zone, unsigned long phys_start_pfn,
 	for (i = 0; i < sections_to_remove; i++) {
 		unsigned long pfn = phys_start_pfn + i*PAGES_PER_SECTION;
 
-		ret = __remove_section(zone, __pfn_to_section(pfn), map_offset,
+		ret = __remove_section(nid, __pfn_to_section(pfn), map_offset,
 				altmap);
 		map_offset = 0;
 		if (ret)
 			break;
 	}
 
-	set_zone_contiguous(zone);
-
 	return ret;
 }
 #endif /* CONFIG_MEMORY_HOTREMOVE */
@@ -1595,7 +1599,6 @@ static int __ref __offline_pages(unsigned long start_pfn,
 	unsigned long pfn, nr_pages;
 	long offlined_pages;
 	int ret, node;
-	unsigned long flags;
 	unsigned long valid_start, valid_end;
 	struct zone *zone;
 	struct memory_notify arg;
@@ -1667,9 +1670,7 @@ static int __ref __offline_pages(unsigned long start_pfn,
 	adjust_managed_page_count(pfn_to_page(start_pfn), -offlined_pages);
 	zone->present_pages -= offlined_pages;
 
-	pgdat_resize_lock(zone->zone_pgdat, &flags);
-	zone->zone_pgdat->node_present_pages -= offlined_pages;
-	pgdat_resize_unlock(zone->zone_pgdat, &flags);
+	__shrink_pages(zone, valid_start, valid_end, offlined_pages);
 
 	init_per_zone_wmark_min();
diff --git a/mm/sparse.c b/mm/sparse.c
index 10b07eea9a6e..016020bd20b5 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -766,12 +766,12 @@ static void free_section_usemap(struct page *memmap, unsigned long *usemap,
 		free_map_bootmem(memmap);
 }
 
-void sparse_remove_one_section(struct zone *zone, struct mem_section *ms,
+void sparse_remove_one_section(int nid, struct mem_section *ms,
 		unsigned long map_offset, struct vmem_altmap *altmap)
 {
 	struct page *memmap = NULL;
 	unsigned long *usemap = NULL, flags;
-	struct pglist_data *pgdat = zone->zone_pgdat;
+	struct pglist_data *pgdat = NODE_DATA(nid);
 
 	pgdat_resize_lock(pgdat, &flags);
 	if (ms->section_mem_map) {
From patchwork Tue Aug 7 13:37:57 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Oscar Salvador
X-Patchwork-Id: 10558723
From: osalvador@techadventures.net
To: akpm@linux-foundation.org
Cc: mhocko@suse.com, dan.j.williams@intel.com, pasha.tatashin@oracle.com,
 jglisse@redhat.com, david@redhat.com, yasu.isimatu@gmail.com,
 logang@deltatee.com, dave.jiang@intel.com, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Oscar Salvador
Subject: [RFC PATCH 3/3] mm/memory_hotplug: Refactor shrink_zone/pgdat_span
Date: Tue, 7 Aug 2018 15:37:57 +0200
Message-Id: <20180807133757.18352-4-osalvador@techadventures.net>
X-Mailer: git-send-email 2.13.6
In-Reply-To: <20180807133757.18352-1-osalvador@techadventures.net>
References: <20180807133757.18352-1-osalvador@techadventures.net>

From: Oscar Salvador

This patch refactors the shrink_zone_span() and shrink_pgdat_span()
functions. If find_smallest/biggest_section_pfn() does not return any pfn,
the zone/pgdat has no online sections left, so we can set the respective
values to 0:

zone case:
	zone->zone_start_pfn = 0;
	zone->spanned_pages = 0;

pgdat case:
	pgdat->node_start_pfn = 0;
	pgdat->node_spanned_pages = 0;

Also, the check that loops over all sections to see whether anything is
left is moved to its own function, so the code can be shared by
shrink_zone_span() and shrink_pgdat_span().
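For reference, the resulting control flow of shrink_zone_span(), condensed
by me from the diff below (labels and helpers as in the patch; this is a
summary, not the full function):

	if (zone_start_pfn == start_pfn) {
		pfn = find_smallest_section_pfn(nid, zone, end_pfn, zone_end_pfn);
		if (!pfn)
			goto only_holes;
		zone->zone_start_pfn = pfn;
		zone->spanned_pages = zone_end_pfn - pfn;
	} else if (zone_end_pfn == end_pfn) {
		pfn = find_biggest_section_pfn(nid, zone, zone_start_pfn, start_pfn);
		if (!pfn)
			goto only_holes;
		zone->spanned_pages = pfn - zone_start_pfn + 1;
	} else if (has_only_holes(zone, nid, zone_start_pfn, zone_end_pfn))
		goto only_holes;

	goto out;
only_holes:
	zone->zone_start_pfn = 0;	/* no online section is left */
	zone->spanned_pages = 0;
out:
	zone_span_writeunlock(zone);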
Signed-off-by: Oscar Salvador
---
 mm/memory_hotplug.c | 127 +++++++++++++++++++++++++++-------------------------
 1 file changed, 65 insertions(+), 62 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index e33555651e46..ccac36eaac05 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -365,6 +365,29 @@ static unsigned long find_biggest_section_pfn(int nid, struct zone *zone,
 	return 0;
 }
 
+static bool has_only_holes(struct zone *zone, int nid, unsigned long zone_start_pfn,
+				unsigned long zone_end_pfn)
+{
+	unsigned long pfn;
+
+	pfn = zone_start_pfn;
+	for (; pfn < zone_end_pfn; pfn += PAGES_PER_SECTION) {
+		struct mem_section *ms = __pfn_to_section(pfn);
+
+		if (unlikely(!online_section(ms)))
+			continue;
+		if (zone && page_zone(pfn_to_page(pfn)) != zone)
+			continue;
+
+		if (pfn_to_nid(pfn) != nid)
+			continue;
+
+		return false;
+	}
+
+	return true;
+}
+
 static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
 			     unsigned long end_pfn)
 {
@@ -372,7 +395,6 @@ static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
 	unsigned long z = zone_end_pfn(zone); /* zone_end_pfn namespace clash */
 	unsigned long zone_end_pfn = z;
 	unsigned long pfn;
-	struct mem_section *ms;
 	int nid = zone_to_nid(zone);
 
 	zone_span_writelock(zone);
@@ -385,10 +407,11 @@ static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
 		 */
 		pfn = find_smallest_section_pfn(nid, zone, end_pfn,
 						zone_end_pfn);
-		if (pfn) {
-			zone->zone_start_pfn = pfn;
-			zone->spanned_pages = zone_end_pfn - pfn;
-		}
+		if (!pfn)
+			goto only_holes;
+
+		zone->zone_start_pfn = pfn;
+		zone->spanned_pages = zone_end_pfn - pfn;
 	} else if (zone_end_pfn == end_pfn) {
 		/*
 		 * If the section is biggest section in the zone, it need
@@ -398,38 +421,28 @@ static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
 		 */
 		pfn = find_biggest_section_pfn(nid, zone, zone_start_pfn,
 					       start_pfn);
-		if (pfn)
-			zone->spanned_pages = pfn - zone_start_pfn + 1;
-	}
-
-	/*
-	 * The section is not biggest or smallest mem_section in the zone, it
-	 * only creates a hole in the zone. So in this case, we need not
-	 * change the zone. But perhaps, the zone has only hole data. Thus
-	 * it check the zone has only hole or not.
-	 */
-	pfn = zone_start_pfn;
-	for (; pfn < zone_end_pfn; pfn += PAGES_PER_SECTION) {
-		ms = __pfn_to_section(pfn);
-
-		if (unlikely(!online_section(ms)))
-			continue;
-
-		if (page_zone(pfn_to_page(pfn)) != zone)
-			continue;
-
-		/* If the section is current section, it continues the loop */
-		if (start_pfn == pfn)
-			continue;
-
-		/* If we find valid section, we have nothing to do */
-		zone_span_writeunlock(zone);
-		return;
+		if (!pfn)
+			goto only_holes;
+
+		zone->spanned_pages = pfn - zone_start_pfn + 1;
+	} else {
+		/*
+		 * The section is not biggest or smallest mem_section in the zone, it
+		 * only creates a hole in the zone. So in this case, we need not
+		 * change the zone. But perhaps, the zone has only hole data. Thus
+		 * it check the zone has only hole or not.
+		 */
+		if (has_only_holes(zone, nid, zone_start_pfn, zone_end_pfn))
+			goto only_holes;
 	}
 
+	goto out;
+
+only_holes:
 	/* The zone has no valid section */
 	zone->zone_start_pfn = 0;
 	zone->spanned_pages = 0;
+out:
 	zone_span_writeunlock(zone);
 }
 
@@ -440,7 +453,6 @@ static void shrink_pgdat_span(struct pglist_data *pgdat,
 	unsigned long p = pgdat_end_pfn(pgdat); /* pgdat_end_pfn namespace clash */
 	unsigned long pgdat_end_pfn = p;
 	unsigned long pfn;
-	struct mem_section *ms;
 	int nid = pgdat->node_id;
 
 	if (pgdat_start_pfn == start_pfn) {
@@ -452,10 +464,11 @@ static void shrink_pgdat_span(struct pglist_data *pgdat,
 		 */
 		pfn = find_smallest_section_pfn(nid, NULL, end_pfn,
 						pgdat_end_pfn);
-		if (pfn) {
-			pgdat->node_start_pfn = pfn;
-			pgdat->node_spanned_pages = pgdat_end_pfn - pfn;
-		}
+		if (!pfn)
+			goto only_holes;
+
+		pgdat->node_start_pfn = pfn;
+		pgdat->node_spanned_pages = pgdat_end_pfn - pfn;
 	} else if (pgdat_end_pfn == end_pfn) {
 		/*
 		 * If the section is biggest section in the pgdat, it need
@@ -465,35 +478,25 @@ static void shrink_pgdat_span(struct pglist_data *pgdat,
 		 */
 		pfn = find_biggest_section_pfn(nid, NULL, pgdat_start_pfn,
 					       start_pfn);
-		if (pfn)
-			pgdat->node_spanned_pages = pfn - pgdat_start_pfn + 1;
-	}
-
-	/*
-	 * If the section is not biggest or smallest mem_section in the pgdat,
-	 * it only creates a hole in the pgdat. So in this case, we need not
-	 * change the pgdat.
-	 * But perhaps, the pgdat has only hole data. Thus it check the pgdat
-	 * has only hole or not.
-	 */
-	pfn = pgdat_start_pfn;
-	for (; pfn < pgdat_end_pfn; pfn += PAGES_PER_SECTION) {
-		ms = __pfn_to_section(pfn);
-
-		if (unlikely(!online_section(ms)))
-			continue;
-
-		if (pfn_to_nid(pfn) != nid)
-			continue;
-
-		/* If the section is current section, it continues the loop */
-		if (start_pfn == pfn)
-			continue;
-
-		/* If we find valid section, we have nothing to do */
-		return;
+		if (!pfn)
+			goto only_holes;
+
+		pgdat->node_spanned_pages = pfn - pgdat_start_pfn + 1;
+	} else {
+		/*
+		 * If the section is not biggest or smallest mem_section in the pgdat,
+		 * it only creates a hole in the pgdat. So in this case, we need not
+		 * change the pgdat.
+		 * But perhaps, the pgdat has only hole data. Thus it check the pgdat
+		 * has only hole or not.
+		 */
+		if (has_only_holes(NULL, nid, pgdat_start_pfn, pgdat_end_pfn))
+			goto only_holes;
 	}
 
+	return;
+
+only_holes:
 	/* The pgdat has no valid section */
 	pgdat->node_start_pfn = 0;
 	pgdat->node_spanned_pages = 0;