From patchwork Thu Aug 29 07:00:12 2019
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 11120481
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, David Hildenbrand, Andrew Morton, Oscar Salvador,
    Michal Hocko, Pavel Tatashin, Dan Williams, Wei Yang
Subject: [PATCH v3 04/11] mm/memory_hotplug: Drop local variables in shrink_zone_span()
Date: Thu, 29 Aug 2019 09:00:12 +0200
Message-Id: <20190829070019.12714-5-david@redhat.com>
In-Reply-To: <20190829070019.12714-1-david@redhat.com>
References: <20190829070019.12714-1-david@redhat.com>
Especially when checking for "all holes", we can avoid rechecking
already-processed pieces (the ones we are removing): use the updated zone
data instead (which might already be zero).

Cc: Andrew Morton
Cc: Oscar Salvador
Cc: David Hildenbrand
Cc: Michal Hocko
Cc: Pavel Tatashin
Cc: Dan Williams
Cc: Wei Yang
Signed-off-by: David Hildenbrand
---
 mm/memory_hotplug.c | 26 +++++++++++++++-----------
 1 file changed, 15 insertions(+), 11 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 0a21f6f99753..d3c34bbeb36d 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -374,14 +374,11 @@ static unsigned long find_biggest_section_pfn(int nid, struct zone *zone,
 static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
 			     unsigned long end_pfn)
 {
-	unsigned long zone_start_pfn = zone->zone_start_pfn;
-	unsigned long z = zone_end_pfn(zone); /* zone_end_pfn namespace clash */
-	unsigned long zone_end_pfn = z;
 	unsigned long pfn;
 	int nid = zone_to_nid(zone);
 
 	zone_span_writelock(zone);
-	if (zone_start_pfn == start_pfn) {
+	if (zone->zone_start_pfn == start_pfn) {
 		/*
 		 * If the section is smallest section in the zone, it need
 		 * shrink zone->zone_start_pfn and zone->zone_spanned_pages.
@@ -389,22 +386,29 @@ static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
 		 * for shrinking zone.
 		 */
 		pfn = find_smallest_section_pfn(nid, zone, end_pfn,
-						zone_end_pfn);
+						zone_end_pfn(zone));
 		if (pfn) {
+			zone->spanned_pages = zone_end_pfn(zone) - pfn;
 			zone->zone_start_pfn = pfn;
-			zone->spanned_pages = zone_end_pfn - pfn;
+		} else {
+			zone->zone_start_pfn = 0;
+			zone->spanned_pages = 0;
 		}
-	} else if (zone_end_pfn == end_pfn) {
+	} else if (zone_end_pfn(zone) == end_pfn) {
 		/*
 		 * If the section is biggest section in the zone, it need
 		 * shrink zone->spanned_pages.
 		 * In this case, we find second biggest valid mem_section for
 		 * shrinking zone.
 		 */
-		pfn = find_biggest_section_pfn(nid, zone, zone_start_pfn,
+		pfn = find_biggest_section_pfn(nid, zone, zone->zone_start_pfn,
 					       start_pfn);
 		if (pfn)
-			zone->spanned_pages = pfn - zone_start_pfn + 1;
+			zone->spanned_pages = pfn - zone->zone_start_pfn + 1;
+		else {
+			zone->zone_start_pfn = 0;
+			zone->spanned_pages = 0;
+		}
 	}
 
 	/*
@@ -413,8 +417,8 @@ static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
 	 * change the zone. But perhaps, the zone has only hole data. Thus
 	 * it check the zone has only hole or not.
 	 */
-	pfn = zone_start_pfn;
-	for (; pfn < zone_end_pfn; pfn += PAGES_PER_SUBSECTION) {
+	for (pfn = zone->zone_start_pfn;
+	     pfn < zone_end_pfn(zone); pfn += PAGES_PER_SUBSECTION) {
 		if (unlikely(!pfn_valid(pfn)))
 			continue;
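
For readers less familiar with this code, here is a deliberately simplified
userspace sketch of the control flow the patch leaves behind: shrink the span
at the edges first, then run the "all holes" scan over the already-updated
bounds instead of a stale local snapshot. This is not kernel code; struct
zone, zone_end_pfn(), pfn_has_memory() and shrink_zone_span_model() are
stand-ins invented for illustration, the edge shrinking is reduced to simple
arithmetic, and the real function walks in PAGES_PER_SUBSECTION steps under
the zone span write lock.

#include <stdbool.h>
#include <stdio.h>

/* Minimal stand-in for the kernel's zone span bookkeeping. */
struct zone {
	unsigned long zone_start_pfn;
	unsigned long spanned_pages;
};

static unsigned long zone_end_pfn(const struct zone *zone)
{
	return zone->zone_start_pfn + zone->spanned_pages;
}

/* Hypothetical predicate: does this pfn still back any memory? */
static bool pfn_has_memory(unsigned long pfn)
{
	(void)pfn;
	return false; /* pretend everything has been removed */
}

static void shrink_zone_span_model(struct zone *zone,
				   unsigned long start_pfn,
				   unsigned long end_pfn)
{
	unsigned long pfn;

	/* 1. Shrink the span at the affected edge (details elided). */
	if (zone->zone_start_pfn == start_pfn) {
		zone->zone_start_pfn = end_pfn;
		zone->spanned_pages -= end_pfn - start_pfn;
	} else if (zone_end_pfn(zone) == end_pfn) {
		zone->spanned_pages -= end_pfn - start_pfn;
	}

	/*
	 * 2. "All holes" check: walk the *updated* span. The removed range
	 *    is no longer part of it, so it is not rechecked.
	 */
	for (pfn = zone->zone_start_pfn; pfn < zone_end_pfn(zone); pfn++) {
		if (pfn_has_memory(pfn))
			return;
	}
	zone->zone_start_pfn = 0;
	zone->spanned_pages = 0;
}

int main(void)
{
	struct zone z = { .zone_start_pfn = 0x1000, .spanned_pages = 0x1000 };

	/* Remove the first half of the zone; the rest is all holes here. */
	shrink_zone_span_model(&z, 0x1000, 0x1800);
	printf("start=%#lx spanned=%#lx\n", z.zone_start_pfn, z.spanned_pages);
	return 0;
}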