From patchwork Fri Jan 31 06:14:04 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 11359229
Date: Thu, 30 Jan 2020 22:14:04 -0800
From: Andrew Morton
To: akpm@linux-foundation.org, alexander.h.duyck@linux.intel.com, anshuman.khandual@arm.com,
 arunks@codeaurora.org, cai@lca.pw, dan.j.williams@intel.com, david@redhat.com,
 glider@google.com, kernelfans@gmail.com, linux-mm@kvack.org,
 mgorman@techsingularity.net, mhocko@suse.com, mm-commits@vger.kernel.org,
 mpe@ellerman.id.au, osalvador@suse.de, pasha.tatashin@soleen.com,
 richardw.yang@linux.intel.com, rppt@linux.vnet.ibm.com, sfr@canb.auug.org.au,
 torvalds@linux-foundation.org, vbabka@suse.cz
Subject: [patch 054/118] mm: remove "count" parameter from has_unmovable_pages()
Message-ID: <20200131061404.vx-936iUB%akpm@linux-foundation.org>
In-Reply-To: <20200130221021.5f0211c56346d5485af07923@linux-foundation.org>
User-Agent: s-nail v14.8.16

From: David Hildenbrand
Subject: mm: remove "count" parameter from has_unmovable_pages()

Now that the memory isolate notifier is gone, the parameter is always 0.
Drop it and clean up has_unmovable_pages().
Link: http://lkml.kernel.org/r/20191114131911.11783-3-david@redhat.com
Signed-off-by: David Hildenbrand
Acked-by: Michal Hocko
Cc: Oscar Salvador
Cc: Anshuman Khandual
Cc: Qian Cai
Cc: Pingfan Liu
Cc: Stephen Rothwell
Cc: Dan Williams
Cc: Pavel Tatashin
Cc: Vlastimil Babka
Cc: Mel Gorman
Cc: Mike Rapoport
Cc: Wei Yang
Cc: Alexander Duyck
Cc: Alexander Potapenko
Cc: Arun KS
Cc: Michael Ellerman
Signed-off-by: Andrew Morton
---

 include/linux/page-isolation.h |    4 ++--
 mm/memory_hotplug.c            |    2 +-
 mm/page_alloc.c                |   21 +++++++--------------
 mm/page_isolation.c            |    2 +-
 4 files changed, 11 insertions(+), 18 deletions(-)

--- a/include/linux/page-isolation.h~mm-remove-count-parameter-from-has_unmovable_pages
+++ a/include/linux/page-isolation.h
@@ -33,8 +33,8 @@ static inline bool is_migrate_isolate(in
 #define MEMORY_OFFLINE	0x1
 #define REPORT_FAILURE	0x2

-bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
-			 int migratetype, int flags);
+bool has_unmovable_pages(struct zone *zone, struct page *page, int migratetype,
+			 int flags);
 void set_pageblock_migratetype(struct page *page, int migratetype);
 int move_freepages_block(struct zone *zone, struct page *page,
 				int migratetype, int *num_movable);
--- a/mm/memory_hotplug.c~mm-remove-count-parameter-from-has_unmovable_pages
+++ a/mm/memory_hotplug.c
@@ -1182,7 +1182,7 @@ static bool is_pageblock_removable_noloc
 	if (!zone_spans_pfn(zone, pfn))
 		return false;

-	return !has_unmovable_pages(zone, page, 0, MIGRATE_MOVABLE,
+	return !has_unmovable_pages(zone, page, MIGRATE_MOVABLE,
 				    MEMORY_OFFLINE);
 }
--- a/mm/page_alloc.c~mm-remove-count-parameter-from-has_unmovable_pages
+++ a/mm/page_alloc.c
@@ -8180,17 +8180,15 @@ void *__init alloc_large_system_hash(con

 /*
  * This function checks whether pageblock includes unmovable pages or not.
- * If @count is not zero, it is okay to include less @count unmovable pages
  *
  * PageLRU check without isolation or lru_lock could race so that
  * MIGRATE_MOVABLE block might include unmovable pages. And __PageMovable
  * check without lock_page also may miss some movable non-lru pages at
  * race condition. So you can't expect this function should be exact.
  */
-bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
-			 int migratetype, int flags)
+bool has_unmovable_pages(struct zone *zone, struct page *page, int migratetype,
+			 int flags)
 {
-	unsigned long found;
 	unsigned long iter = 0;
 	unsigned long pfn = page_to_pfn(page);
 	const char *reason = "unmovable page";
@@ -8216,13 +8214,11 @@ bool has_unmovable_pages(struct zone *zo
 		goto unmovable;
 	}

-	for (found = 0; iter < pageblock_nr_pages; iter++) {
-		unsigned long check = pfn + iter;
-
-		if (!pfn_valid_within(check))
+	for (; iter < pageblock_nr_pages; iter++) {
+		if (!pfn_valid_within(pfn + iter))
 			continue;

-		page = pfn_to_page(check);
+		page = pfn_to_page(pfn + iter);

 		if (PageReserved(page))
 			goto unmovable;
@@ -8271,11 +8267,9 @@ bool has_unmovable_pages(struct zone *zo
 		if ((flags & MEMORY_OFFLINE) && PageHWPoison(page))
 			continue;

-		if (__PageMovable(page))
+		if (__PageMovable(page) || PageLRU(page))
 			continue;

-		if (!PageLRU(page))
-			found++;
 		/*
 		 * If there are RECLAIMABLE pages, we need to check
 		 * it. But now, memory offline itself doesn't call
@@ -8289,8 +8283,7 @@ bool has_unmovable_pages(struct zone *zo
 		 * is set to both of a memory hole page and a _used_ kernel
 		 * page at boot.
 		 */
-		if (found > count)
-			goto unmovable;
+		goto unmovable;
 	}
 	return false;
 unmovable:
--- a/mm/page_isolation.c~mm-remove-count-parameter-from-has_unmovable_pages
+++ a/mm/page_isolation.c
@@ -37,7 +37,7 @@ static int set_migratetype_isolate(struc
 	 * FIXME: Now, memory hotplug doesn't call shrink_slab() by itself.
 	 * We just check MOVABLE pages.
 	 */
-	if (!has_unmovable_pages(zone, page, 0, migratetype, isol_flags)) {
+	if (!has_unmovable_pages(zone, page, migratetype, isol_flags)) {
		unsigned long nr_pages;
		int mt = get_pageblock_migratetype(page);