From patchwork Tue Dec 15 03:11:22 2020
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 11973873
Date: Mon, 14 Dec 2020 19:11:22 -0800
From: Andrew Morton <akpm@linux-foundation.org>
To: akpm@linux-foundation.org, bhe@redhat.com, guro@fb.com,
 linux-mm@kvack.org, lstoakes@gmail.com, mhocko@suse.com,
 mm-commits@vger.kernel.org, npiggin@gmail.com,
 torvalds@linux-foundation.org, vbabka@suse.cz
Subject: [patch 136/200] mm: page_alloc: refactor setup_per_zone_lowmem_reserve()
Message-ID: <20201215031122.WX7a8bg-5%akpm@linux-foundation.org>
In-Reply-To: <20201214190237.a17b70ae14f129e2dca3d204@linux-foundation.org>

From: Lorenzo Stoakes <lstoakes@gmail.com>
Subject: mm: page_alloc: refactor setup_per_zone_lowmem_reserve()
setup_per_zone_lowmem_reserve() iterates through each zone, setting
zone->lowmem_reserve[j] = 0 (where j is the zone's index), then iterates
backwards through all preceding zones, setting
lower_zone->lowmem_reserve[j] = sum(managed pages of higher zones) /
lowmem_reserve_ratio[idx] for each (where idx is the lower zone's index).

If the lower zone has no managed pages or its ratio is 0, then all of
its lowmem_reserve[] entries are effectively zeroed.

As these arrays are only assigned here, and all lowmem_reserve[] entries
for index < this zone's index are implicitly assumed to be 0 (these are
specifically output in show_free_areas() and zoneinfo_show_print(), for
example), there is no need to additionally zero index == this zone's
index. This patch avoids the unnecessary zeroing.

Rather than iterating through zones and setting lowmem_reserve[j] for
each lower zone, this patch reverses the process and populates each
zone's lowmem_reserve[] values in ascending order. This clarifies what
is going on, especially in the case of zero managed pages or zero
ratio, which is now explicitly shown to clear these values.

Link: https://lkml.kernel.org/r/20201129162758.115907-1-lstoakes@gmail.com
Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>
Cc: Baoquan He <bhe@redhat.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Roman Gushchin <guro@fb.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/page_alloc.c |   33 +++++++++++++--------------------
 1 file changed, 13 insertions(+), 20 deletions(-)

--- a/mm/page_alloc.c~mm-page_alloc-refactor-setup_per_zone_lowmem_reserve
+++ a/mm/page_alloc.c
@@ -7862,31 +7862,24 @@ static void calculate_totalreserve_pages
 static void setup_per_zone_lowmem_reserve(void)
 {
 	struct pglist_data *pgdat;
-	enum zone_type j, idx;
+	enum zone_type i, j;
 
 	for_each_online_pgdat(pgdat) {
-		for (j = 0; j < MAX_NR_ZONES; j++) {
-			struct zone *zone = pgdat->node_zones + j;
-			unsigned long managed_pages = zone_managed_pages(zone);
+		for (i = 0; i < MAX_NR_ZONES - 1; i++) {
+			struct zone *zone = &pgdat->node_zones[i];
+			int ratio = sysctl_lowmem_reserve_ratio[i];
+			bool clear = !ratio || !zone_managed_pages(zone);
+			unsigned long managed_pages = 0;
 
-			zone->lowmem_reserve[j] = 0;
-
-			idx = j;
-			while (idx) {
-				struct zone *lower_zone;
-
-				idx--;
-				lower_zone = pgdat->node_zones + idx;
-
-				if (!sysctl_lowmem_reserve_ratio[idx] ||
-				    !zone_managed_pages(lower_zone)) {
-					lower_zone->lowmem_reserve[j] = 0;
-					continue;
+			for (j = i + 1; j < MAX_NR_ZONES; j++) {
+				if (clear) {
+					zone->lowmem_reserve[j] = 0;
 				} else {
-					lower_zone->lowmem_reserve[j] =
-						managed_pages / sysctl_lowmem_reserve_ratio[idx];
+					struct zone *upper_zone = &pgdat->node_zones[j];
+
+					managed_pages += zone_managed_pages(upper_zone);
+					zone->lowmem_reserve[j] = managed_pages / ratio;
 				}
-				managed_pages += zone_managed_pages(lower_zone);
 			}
 		}
 	}
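
A brief illustration for readers following along (not part of the patch):
below is a minimal, standalone userspace sketch of the new ascending-order
population. The zone names, the ratio values and the managed-page counts
are illustrative assumptions only; in the kernel the real inputs are
sysctl_lowmem_reserve_ratio[] and zone_managed_pages(), and the results
land in each zone's lowmem_reserve[] array.

#include <stdbool.h>
#include <stdio.h>

#define MAX_NR_ZONES 4

static const char * const names[MAX_NR_ZONES] = {
	"DMA", "DMA32", "Normal", "Movable"
};
/* Stand-in for sysctl_lowmem_reserve_ratio[]; 0 disables protection. */
static const int ratio[MAX_NR_ZONES] = { 256, 256, 32, 0 };
/* Stand-in for zone_managed_pages(); made-up example page counts. */
static const unsigned long managed[MAX_NR_ZONES] = {
	3976, 445136, 3566616, 0
};

static unsigned long lowmem_reserve[MAX_NR_ZONES][MAX_NR_ZONES];

int main(void)
{
	int i, j;

	/* Ascending-order population, mirroring the patched kernel loop. */
	for (i = 0; i < MAX_NR_ZONES - 1; i++) {
		/* A zero ratio or an empty zone clears the whole row. */
		bool clear = !ratio[i] || !managed[i];
		unsigned long managed_pages = 0;

		for (j = i + 1; j < MAX_NR_ZONES; j++) {
			if (clear) {
				lowmem_reserve[i][j] = 0;
			} else {
				/* Running sum of managed pages in zones i+1..j. */
				managed_pages += managed[j];
				lowmem_reserve[i][j] = managed_pages / ratio[i];
			}
		}
	}

	/* Dump the per-zone protection table (cf. /proc/zoneinfo). */
	for (i = 0; i < MAX_NR_ZONES; i++) {
		printf("%-8s protection:", names[i]);
		for (j = 0; j < MAX_NR_ZONES; j++)
			printf(" %lu", lowmem_reserve[i][j]);
		printf("\n");
	}
	return 0;
}

With these example numbers the DMA row comes out as 0 1738 15670 15670.
Entries at or below a zone's own index are never written and stay zero,
which is exactly the implicit-zero property the changelog relies on.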