From patchwork Fri Oct 16 02:44:35 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 11840623
Date: Thu, 15 Oct 2020 19:44:35 -0700
From: Andrew Morton
To: akpm@linux-foundation.org, bhe@redhat.com, charante@codeaurora.org,
 dan.j.williams@intel.com, david@redhat.com, fenghua.yu@intel.com,
 linux-mm@kvack.org, logang@deltatee.com, mgorman@suse.de,
 mgorman@techsingularity.net, mhocko@suse.com, mm-commits@vger.kernel.org,
 osalvador@suse.de, pankaj.gupta.linux@gmail.com,
 richard.weiyang@linux.alibaba.com, rppt@kernel.org, tony.luck@intel.com,
 torvalds@linux-foundation.org, walken@google.com, willy@infradead.org
Subject: [patch 059/156] mm/memory_hotplug: inline __offline_pages() into offline_pages()
Message-ID: <20201016024435.xfo1ocKXg%akpm@linux-foundation.org>
In-Reply-To: <20201015192732.f448da14e9854c7cb7299956@linux-foundation.org>
User-Agent: s-nail v14.8.16

From: David Hildenbrand
Subject: mm/memory_hotplug: inline __offline_pages() into offline_pages()

Patch series "mm/memory_hotplug: online_pages()/offline_pages() cleanups", v2.

These are a bunch of cleanups for online_pages()/offline_pages() and
related code, mostly getting rid of memory hole handling that is no
longer necessary.  There is only a single walk_system_ram_range() call
left in offline_pages(), to make sure we don't have any memory holes.  I
had some of these patches lying around for a longer time but didn't have
time to polish them.

In addition, the last patch marks all pageblocks of memory to get onlined
MIGRATE_ISOLATE, so pages that have just been exposed to the buddy cannot
get allocated before onlining is complete.  Once the heavy lifting is
done, the pageblocks are set to MIGRATE_MOVABLE, such that allocations
are possible.

I played with DIMMs and virtio-mem on x86-64 and didn't spot any
surprises.  I verified that the number of isolated pageblocks is
correctly handled when onlining/offlining.

This patch (of 10):

There is only a single user, offline_pages().  Let's inline it, to make
it look more similar to online_pages().
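To make the remaining memory-hole check concrete outside the kernel tree,
here is a small stand-alone C sketch of the same idea the patch below
relies on (count the System RAM pages intersecting the requested range
and refuse if the count is short).  It is not kernel code: struct
ram_range and example_range_has_holes() are invented names used only for
illustration of what count_system_ram_pages_cb() accumulates via
walk_system_ram_range().

#include <stdbool.h>
#include <stddef.h>

/* Illustrative stand-in for a "System RAM" resource entry (made-up type). */
struct ram_range {
        unsigned long start_pfn;
        unsigned long nr_pages;
};

/*
 * Count how many pages of [start_pfn, start_pfn + nr_pages) are backed by
 * System RAM.  If the count is smaller than nr_pages, the range contains a
 * hole and offlining would bail out with -EINVAL ("memory holes").
 */
static bool example_range_has_holes(const struct ram_range *ram, size_t n,
                                    unsigned long start_pfn,
                                    unsigned long nr_pages)
{
        unsigned long end_pfn = start_pfn + nr_pages;
        unsigned long system_ram_pages = 0;

        for (size_t i = 0; i < n; i++) {
                unsigned long lo = ram[i].start_pfn;
                unsigned long hi = ram[i].start_pfn + ram[i].nr_pages;

                if (lo < start_pfn)
                        lo = start_pfn;
                if (hi > end_pfn)
                        hi = end_pfn;
                if (lo < hi)
                        system_ram_pages += hi - lo;
        }

        return system_ram_pages != nr_pages;
}

int main(void)
{
        /* Two RAM ranges with a hole in between (made-up pfn values). */
        const struct ram_range ram[] = {
                { .start_pfn = 0x100, .nr_pages = 0x100 },
                { .start_pfn = 0x300, .nr_pages = 0x100 },
        };

        /* Offlining 0x100..0x3ff spans the hole, so it would be refused. */
        return example_range_has_holes(ram, 2, 0x100, 0x300) ? 0 : 1;
}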
Link: https://lkml.kernel.org/r/20200819175957.28465-1-david@redhat.com
Link: https://lkml.kernel.org/r/20200819175957.28465-2-david@redhat.com
Signed-off-by: David Hildenbrand
Acked-by: Michal Hocko
Reviewed-by: Oscar Salvador
Reviewed-by: Pankaj Gupta
Cc: Wei Yang
Cc: Baoquan He
Cc: Pankaj Gupta
Cc: Charan Teja Reddy
Cc: Dan Williams
Cc: Fenghua Yu
Cc: Logan Gunthorpe
Cc: "Matthew Wilcox (Oracle)"
Cc: Mel Gorman
Cc: Michal Hocko
Cc: Michel Lespinasse
Cc: Mike Rapoport
Cc: Tony Luck
Cc: Mel Gorman
Signed-off-by: Andrew Morton
---

 mm/memory_hotplug.c |   16 +++++-----------
 1 file changed, 5 insertions(+), 11 deletions(-)

--- a/mm/memory_hotplug.c~mm-memory_hotplug-inline-__offline_pages-into-offline_pages
+++ a/mm/memory_hotplug.c
@@ -1484,11 +1484,10 @@ static int count_system_ram_pages_cb(uns
         return 0;
 }
 
-static int __ref __offline_pages(unsigned long start_pfn,
-                        unsigned long end_pfn)
+int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages)
 {
-        unsigned long pfn, nr_pages = 0;
-        unsigned long offlined_pages = 0;
+        const unsigned long end_pfn = start_pfn + nr_pages;
+        unsigned long pfn, system_ram_pages = 0, offlined_pages = 0;
         int ret, node, nr_isolate_pageblock;
         unsigned long flags;
         struct zone *zone;
@@ -1505,9 +1504,9 @@ static int __ref __offline_pages(unsigne
          * memory holes PG_reserved, don't need pfn_valid() checks, and can
          * avoid using walk_system_ram_range() later.
          */
-        walk_system_ram_range(start_pfn, end_pfn - start_pfn, &nr_pages,
+        walk_system_ram_range(start_pfn, nr_pages, &system_ram_pages,
                               count_system_ram_pages_cb);
-        if (nr_pages != end_pfn - start_pfn) {
+        if (system_ram_pages != nr_pages) {
                 ret = -EINVAL;
                 reason = "memory holes";
                 goto failed_removal;
@@ -1656,11 +1655,6 @@ failed_removal:
         return ret;
 }
 
-int offline_pages(unsigned long start_pfn, unsigned long nr_pages)
-{
-        return __offline_pages(start_pfn, start_pfn + nr_pages);
-}
-
 static int check_memblock_offlined_cb(struct memory_block *mem, void *arg)
 {
         int ret = !is_memblock_offlined(mem);
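For the onlining order described in the cover letter above (pageblocks
start out MIGRATE_ISOLATE and are only flipped to MIGRATE_MOVABLE once
onlining has finished), the following is a conceptual stand-alone C
sketch of that ordering.  It does not reflect the actual series patch,
which is not part of this mail; every type and helper here
(example_pageblock, example_online_blocks(), and friends) is invented
purely to illustrate why pages exposed to the buddy cannot be allocated
before onlining completes.

#include <assert.h>
#include <stddef.h>

/* Hypothetical user-space model of a pageblock's migratetype. */
enum example_migratetype {
        EXAMPLE_MIGRATE_ISOLATE,
        EXAMPLE_MIGRATE_MOVABLE,
};

struct example_pageblock {
        enum example_migratetype mt;
        int exposed_to_buddy;   /* pages handed to the allocator? */
};

/* Allocation is only possible from exposed, non-isolated pageblocks. */
static int example_can_allocate(const struct example_pageblock *pb)
{
        return pb->exposed_to_buddy && pb->mt != EXAMPLE_MIGRATE_ISOLATE;
}

/*
 * Model of the described ordering: mark everything MIGRATE_ISOLATE first,
 * expose the pages, do the heavy lifting, and only then switch to
 * MIGRATE_MOVABLE so allocations become possible.
 */
static void example_online_blocks(struct example_pageblock *pb, size_t n)
{
        for (size_t i = 0; i < n; i++) {
                pb[i].mt = EXAMPLE_MIGRATE_ISOLATE;
                pb[i].exposed_to_buddy = 1;
                /* Pages are in the buddy, but cannot be allocated yet. */
                assert(!example_can_allocate(&pb[i]));
        }

        /* ... the heavy lifting (zone/node accounting, etc.) happens here ... */

        for (size_t i = 0; i < n; i++) {
                pb[i].mt = EXAMPLE_MIGRATE_MOVABLE;
                assert(example_can_allocate(&pb[i]));
        }
}

int main(void)
{
        struct example_pageblock blocks[4] = { 0 };

        example_online_blocks(blocks, 4);
        return 0;
}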