From patchwork Tue Nov 27 16:20:05 2018
X-Patchwork-Submitter: Oscar Salvador
X-Patchwork-Id: 10700791
From: Oscar Salvador <osalvador@suse.de>
To: akpm@linux-foundation.org
Cc: mhocko@suse.com, dan.j.williams@intel.com, pavel.tatashin@microsoft.com,
    jglisse@redhat.com, Jonathan.Cameron@huawei.com, rafael@kernel.org,
    david@redhat.com, linux-mm@kvack.org, Oscar Salvador <osalvador@suse.de>
Subject: [PATCH v2 5/5] mm, memory_hotplug: Refactor shrink_zone/pgdat_span
Date: Tue, 27 Nov 2018 17:20:05 +0100
Message-Id: <20181127162005.15833-6-osalvador@suse.de>
In-Reply-To: <20181127162005.15833-1-osalvador@suse.de>
References: <20181127162005.15833-1-osalvador@suse.de>
Sender: owner-linux-mm@kvack.org
From: Oscar Salvador <osalvador@suse.de>

shrink_zone_span() and shrink_pgdat_span() look a bit weird.
They both have a loop at the end to check whether the zone or the pgdat
contains only holes, in case the section to be removed was neither the
first one nor the last one.

Both loops look quite similar, so we can simplify them a bit.
We do that by creating a helper, has_only_holes(), which simply calls
find_smallest_section_pfn() over the full range.
If nothing is found there, we no longer have any sections left.

To be honest, I am not really sure we even need to go through this check
when removing a middle section, because from what I can see we will
always have a first/last section.

Taking the chance, we can also simplify both find_smallest_section_pfn()
and find_biggest_section_pfn() and move their common code to a helper.

Signed-off-by: Oscar Salvador <osalvador@suse.de>
---
 mm/memory_hotplug.c | 163 +++++++++++++++++++++-------------------------------
 1 file changed, 64 insertions(+), 99 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 49b91907e19e..0e3f89423153 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -325,28 +325,29 @@ static bool is_section_ok(struct mem_section *ms, bool zone)
 	else
 		return valid_section(ms);
 }
+
+static bool is_valid_section(struct zone *zone, int nid, unsigned long pfn)
+{
+	struct mem_section *ms = __pfn_to_section(pfn);
+
+	if (unlikely(!is_section_ok(ms, !!zone)))
+		return false;
+	if (unlikely(pfn_to_nid(pfn) != nid))
+		return false;
+	if (zone && zone != page_zone(pfn_to_page(pfn)))
+		return false;
+
+	return true;
+}
+
 /* find the smallest valid pfn in the range [start_pfn, end_pfn) */
 static unsigned long find_smallest_section_pfn(int nid, struct zone *zone,
 				     unsigned long start_pfn,
 				     unsigned long end_pfn)
 {
-	struct mem_section *ms;
-
-	for (; start_pfn < end_pfn; start_pfn += PAGES_PER_SECTION) {
-		ms = __pfn_to_section(start_pfn);
-
-		if (!is_section_ok(ms, !!zone))
-			continue;
-
-		if (unlikely(pfn_to_nid(start_pfn) != nid))
-			continue;
-
-		if (zone && zone != page_zone(pfn_to_page(start_pfn)))
-			continue;
-
-		return start_pfn;
-	}
-
+	for (; start_pfn < end_pfn; start_pfn += PAGES_PER_SECTION)
+		if (is_valid_section(zone, nid, start_pfn))
+			return start_pfn;
 	return 0;
 }
 
@@ -355,29 +356,26 @@ static unsigned long find_biggest_section_pfn(int nid, struct zone *zone,
 				    unsigned long start_pfn,
 				    unsigned long end_pfn)
 {
-	struct mem_section *ms;
 	unsigned long pfn;
 
 	/* pfn is the end pfn of a memory section. */
 	pfn = end_pfn - 1;
-	for (; pfn >= start_pfn; pfn -= PAGES_PER_SECTION) {
-		ms = __pfn_to_section(pfn);
-
-		if (!is_section_ok(ms, !!zone))
-			continue;
-
-		if (unlikely(pfn_to_nid(pfn) != nid))
-			continue;
-
-		if (zone && zone != page_zone(pfn_to_page(pfn)))
-			continue;
-
-		return pfn;
-	}
-
+	for (; pfn >= start_pfn; pfn -= PAGES_PER_SECTION)
+		if (is_valid_section(zone, nid, pfn))
+			return pfn;
 	return 0;
 }
 
+static bool has_only_holes(int nid, struct zone *zone,
+			   unsigned long start_pfn,
+			   unsigned long end_pfn)
+{
+	/*
+	 * Let us check if the range has only holes
+	 */
+	return !find_smallest_section_pfn(nid, zone, start_pfn, end_pfn);
+}
+
 static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
 			     unsigned long end_pfn)
 {
@@ -385,7 +383,6 @@ static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
 	unsigned long z = zone_end_pfn(zone); /* zone_end_pfn namespace clash */
 	unsigned long zone_end_pfn = z;
 	unsigned long pfn;
-	struct mem_section *ms;
 	int nid = zone_to_nid(zone);
 
 	zone_span_writelock(zone);
@@ -397,11 +394,12 @@ static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
 		 * for shrinking zone.
 		 */
 		pfn = find_smallest_section_pfn(nid, zone, end_pfn,
-						zone_end_pfn);
-		if (pfn) {
-			zone->zone_start_pfn = pfn;
-			zone->spanned_pages = zone_end_pfn - pfn;
-		}
+							zone_end_pfn);
+		if (!pfn)
+			goto no_sections;
+
+		zone->zone_start_pfn = pfn;
+		zone->spanned_pages = zone_end_pfn - pfn;
 	} else if (zone_end_pfn == end_pfn) {
 		/*
 		 * If the section is biggest section in the zone, it need
@@ -410,39 +408,23 @@ static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
 		 * shrinking zone.
 		 */
 		pfn = find_biggest_section_pfn(nid, zone, zone_start_pfn,
-					       start_pfn);
-		if (pfn)
-			zone->spanned_pages = pfn - zone_start_pfn + 1;
-	}
-
-	/*
-	 * The section is not biggest or smallest mem_section in the zone, it
-	 * only creates a hole in the zone. So in this case, we need not
-	 * change the zone. But perhaps, the zone has only hole data. Thus
-	 * it check the zone has only hole or not.
-	 */
-	pfn = zone_start_pfn;
-	for (; pfn < zone_end_pfn; pfn += PAGES_PER_SECTION) {
-		ms = __pfn_to_section(pfn);
-
-		if (unlikely(!online_section(ms)))
-			continue;
+						start_pfn);
+		if (!pfn)
+			goto no_sections;
 
-		if (page_zone(pfn_to_page(pfn)) != zone)
-			continue;
-
-		/* If the section is current section, it continues the loop */
-		if (start_pfn == pfn)
-			continue;
-
-		/* If we find valid section, we have nothing to do */
-		zone_span_writeunlock(zone);
-		return;
+		zone->spanned_pages = pfn - zone_start_pfn + 1;
+	} else {
+		if (has_only_holes(nid, zone, zone_start_pfn, zone_end_pfn))
+			goto no_sections;
 	}
 
+	goto out;
+
+no_sections:
 	/* The zone has no valid section */
 	zone->zone_start_pfn = 0;
 	zone->spanned_pages = 0;
+out:
 	zone_span_writeunlock(zone);
 }
 
@@ -453,7 +435,6 @@ static void shrink_pgdat_span(struct pglist_data *pgdat,
 	unsigned long p = pgdat_end_pfn(pgdat); /* pgdat_end_pfn namespace clash */
 	unsigned long pgdat_end_pfn = p;
 	unsigned long pfn;
-	struct mem_section *ms;
 	int nid = pgdat->node_id;
 
 	if (pgdat_start_pfn == start_pfn) {
@@ -465,10 +446,11 @@ static void shrink_pgdat_span(struct pglist_data *pgdat,
 		 */
 		pfn = find_smallest_section_pfn(nid, NULL, end_pfn,
 						pgdat_end_pfn);
-		if (pfn) {
-			pgdat->node_start_pfn = pfn;
-			pgdat->node_spanned_pages = pgdat_end_pfn - pfn;
-		}
+		if (!pfn)
+			goto no_sections;
+
+		pgdat->node_start_pfn = pfn;
+		pgdat->node_spanned_pages = pgdat_end_pfn - pfn;
 	} else if (pgdat_end_pfn == end_pfn) {
 		/*
 		 * If the section is biggest section in the pgdat, it need
@@ -478,35 +460,18 @@ static void shrink_pgdat_span(struct pglist_data *pgdat,
 		 */
 		pfn = find_biggest_section_pfn(nid, NULL, pgdat_start_pfn,
 					       start_pfn);
-		if (pfn)
-			pgdat->node_spanned_pages = pfn - pgdat_start_pfn + 1;
-	}
-
-	/*
-	 * If the section is not biggest or smallest mem_section in the pgdat,
-	 * it only creates a hole in the pgdat. So in this case, we need not
-	 * change the pgdat.
-	 * But perhaps, the pgdat has only hole data. Thus it check the pgdat
-	 * has only hole or not.
-	 */
-	pfn = pgdat_start_pfn;
-	for (; pfn < pgdat_end_pfn; pfn += PAGES_PER_SECTION) {
-		ms = __pfn_to_section(pfn);
-
-		if (unlikely(!valid_section(ms)))
-			continue;
-
-		if (pfn_to_nid(pfn) != nid)
-			continue;
+		if (!pfn)
+			goto no_sections;
 
-		/* If the section is current section, it continues the loop */
-		if (start_pfn == pfn)
-			continue;
-
-		/* If we find valid section, we have nothing to do */
-		return;
+		pgdat->node_spanned_pages = pfn - pgdat_start_pfn + 1;
+	} else {
+		if (has_only_holes(nid, NULL, pgdat_start_pfn, pgdat_end_pfn))
+			goto no_sections;
 	}
 
+	return;
+
+no_sections:
 	/* The pgdat has no valid section */
 	pgdat->node_start_pfn = 0;
 	pgdat->node_spanned_pages = 0;
@@ -540,6 +505,7 @@ static int __remove_section(int nid, struct mem_section *ms,
 		unsigned long map_offset, struct vmem_altmap *altmap)
 {
 	int ret = -EINVAL;
+	int sect_nr = __section_nr(ms);
 
 	if (!valid_section(ms))
 		return ret;
@@ -548,9 +514,8 @@ static int __remove_section(int nid, struct mem_section *ms,
 	if (ret)
 		return ret;
 
-	shrink_pgdat(nid, __section_nr(ms));
-
 	sparse_remove_one_section(nid, ms, map_offset, altmap);
+	shrink_pgdat(nid, (unsigned long)sect_nr);
 
 	return 0;
 }