From patchwork Wed Nov 11 09:28:07 2020
X-Patchwork-Submitter: Vlastimil Babka
X-Patchwork-Id: 11897269
From: Vlastimil Babka
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Michal Hocko,
    Pavel Tatashin, David Hildenbrand, Oscar Salvador, Joonsoo Kim,
    Vlastimil Babka, Michal Hocko
Subject: [PATCH v3 2/7] mm, page_alloc: calculate pageset high and batch once per zone
Date: Wed, 11 Nov 2020 10:28:07 +0100
Message-Id: <20201111092812.11329-3-vbabka@suse.cz>
In-Reply-To: <20201111092812.11329-1-vbabka@suse.cz>
References: <20201111092812.11329-1-vbabka@suse.cz>

We currently call pageset_set_high_and_batch() for each possible cpu, which
repeats the same calculations of high and batch values. Instead, call the
function just once per zone, and make it apply the calculated values to all
per-cpu pagesets of the zone. This also allows removing the
zone_pageset_init() and __zone_pcp_update() wrappers.

No functional change.

Signed-off-by: Vlastimil Babka
Reviewed-by: Oscar Salvador
Reviewed-by: David Hildenbrand
Acked-by: Michal Hocko
---
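[Note, not for the commit log: a minimal, standalone C sketch of the resulting
structure, purely for illustration. Every name in it (toy_zone, toy_pcp,
NR_TOY_CPUS) and the batch heuristic are invented or simplified stand-ins; the
authoritative change is in the hunks below.]

#include <stdio.h>

#define NR_TOY_CPUS 4

struct toy_pcp {
        unsigned long high;
        unsigned long batch;
};

struct toy_zone {
        unsigned long managed_pages;
        unsigned long pagelist_fraction;   /* 0 means: use the default heuristic */
        struct toy_pcp pcp[NR_TOY_CPUS];
};

/*
 * Rough analogue of the new zone_set_pageset_high_and_batch(): high and
 * batch are derived once from the zone, and the only per-CPU work left is
 * storing the result into each per-CPU structure.
 */
static void toy_zone_set_high_and_batch(struct toy_zone *zone)
{
        unsigned long new_high, new_batch;
        int cpu;

        if (zone->pagelist_fraction) {
                new_high = zone->managed_pages / zone->pagelist_fraction;
                new_batch = new_high / 4;
                if (new_batch == 0)
                        new_batch = 1;
        } else {
                /* crude stand-in for zone_batchsize() */
                new_batch = zone->managed_pages / 1024;
                if (new_batch == 0)
                        new_batch = 1;
                new_high = 6 * new_batch;
        }

        /* the single loop that replaces the old per-CPU recomputation */
        for (cpu = 0; cpu < NR_TOY_CPUS; cpu++) {
                zone->pcp[cpu].high = new_high;
                zone->pcp[cpu].batch = new_batch;
        }
}

int main(void)
{
        struct toy_zone zone = { .managed_pages = 262144, .pagelist_fraction = 0 };

        toy_zone_set_high_and_batch(&zone);
        printf("cpu0: high=%lu batch=%lu\n", zone.pcp[0].high, zone.pcp[0].batch);
        return 0;
}

In the real code, pageset_update() performs the store with the proper update
protocol; the sketch only shows where the per-zone calculation ends and the
per-CPU application begins.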
 mm/page_alloc.c | 42 ++++++++++++++++++------------------------
 1 file changed, 18 insertions(+), 24 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3f1c57344e73..2fa432762908 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6307,13 +6307,14 @@ static void setup_pageset(struct per_cpu_pageset *p)
 }
 
 /*
- * Calculate and set new high and batch values for given per-cpu pageset of a
+ * Calculate and set new high and batch values for all per-cpu pagesets of a
  * zone, based on the zone's size and the percpu_pagelist_fraction sysctl.
  */
-static void pageset_set_high_and_batch(struct zone *zone,
-                                       struct per_cpu_pageset *p)
+static void zone_set_pageset_high_and_batch(struct zone *zone)
 {
         unsigned long new_high, new_batch;
+        struct per_cpu_pageset *p;
+        int cpu;
 
         if (percpu_pagelist_fraction) {
                 new_high = zone_managed_pages(zone) / percpu_pagelist_fraction;
@@ -6325,23 +6326,25 @@ static void pageset_set_high_and_batch(struct zone *zone,
                 new_high = 6 * new_batch;
                 new_batch = max(1UL, 1 * new_batch);
         }
-        pageset_update(&p->pcp, new_high, new_batch);
-}
-
-static void __meminit zone_pageset_init(struct zone *zone, int cpu)
-{
-        struct per_cpu_pageset *pcp = per_cpu_ptr(zone->pageset, cpu);
 
-        pageset_init(pcp);
-        pageset_set_high_and_batch(zone, pcp);
+        for_each_possible_cpu(cpu) {
+                p = per_cpu_ptr(zone->pageset, cpu);
+                pageset_update(&p->pcp, new_high, new_batch);
+        }
 }
 
 void __meminit setup_zone_pageset(struct zone *zone)
 {
+        struct per_cpu_pageset *p;
         int cpu;
+
         zone->pageset = alloc_percpu(struct per_cpu_pageset);
-        for_each_possible_cpu(cpu)
-                zone_pageset_init(zone, cpu);
+        for_each_possible_cpu(cpu) {
+                p = per_cpu_ptr(zone->pageset, cpu);
+                pageset_init(p);
+        }
+
+        zone_set_pageset_high_and_batch(zone);
 }
 
 /*
@@ -8080,15 +8083,6 @@ int lowmem_reserve_ratio_sysctl_handler(struct ctl_table *table, int write,
         return 0;
 }
 
-static void __zone_pcp_update(struct zone *zone)
-{
-        unsigned int cpu;
-
-        for_each_possible_cpu(cpu)
-                pageset_set_high_and_batch(zone,
-                                per_cpu_ptr(zone->pageset, cpu));
-}
-
 /*
  * percpu_pagelist_fraction - changes the pcp->high for each zone on each
  * cpu.  It is the fraction of total pages in each zone that a hot per cpu
@@ -8121,7 +8115,7 @@ int percpu_pagelist_fraction_sysctl_handler(struct ctl_table *table, int write,
                 goto out;
 
         for_each_populated_zone(zone)
-                __zone_pcp_update(zone);
+                zone_set_pageset_high_and_batch(zone);
 out:
         mutex_unlock(&pcp_batch_high_lock);
         return ret;
@@ -8746,7 +8740,7 @@ EXPORT_SYMBOL(free_contig_range);
 void __meminit zone_pcp_update(struct zone *zone)
 {
         mutex_lock(&pcp_batch_high_lock);
-        __zone_pcp_update(zone);
+        zone_set_pageset_high_and_batch(zone);
         mutex_unlock(&pcp_batch_high_lock);
 }