From patchwork Mon Jan 15 09:34:35 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Vern Hao
X-Patchwork-Id: 13519431
From: Vern Hao
To: mgorman@techsingularity.net
Cc: akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 haoxing990@gmail.com, Xin Hao
Subject: [PATCH RFC v1 1/2] mm, pcp: rename pcp->count to pcp->total_count
Date: Mon, 15 Jan 2024 17:34:35 +0800
Message-ID: <20240115093437.87814-2-vernhao@tencent.com>
X-Mailer: git-send-email 2.42.1
In-Reply-To: <20240115093437.87814-1-vernhao@tencent.com>
References: <20240115093437.87814-1-vernhao@tencent.com>
From: Xin Hao

Just a rename to avoid a name conflict in the next patch.

Signed-off-by: Xin Hao
---
 include/linux/mmzone.h |  2 +-
 mm/page_alloc.c        | 42 +++++++++++++++++++++---------------------
 mm/show_mem.c          |  6 +++---
 mm/vmstat.c            |  6 +++---
 4 files changed, 28 insertions(+), 28 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 4ed33b127821..883168776fea 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -683,7 +683,7 @@ enum zone_watermarks {
 
 struct per_cpu_pages {
 	spinlock_t lock;	/* Protects lists field */
-	int count;		/* number of pages in the list */
+	int total_count;	/* total number of pages in the list */
 	int high;		/* high watermark, emptying needed */
 	int high_min;		/* min high watermark */
 	int high_max;		/* max high watermark */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5be4cd8f6b5a..4e91e429b8d1 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1197,7 +1197,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 	 * Ensure proper count is passed which otherwise would stuck in the
 	 * below while (list_empty(list)) loop.
 	 */
-	count = min(pcp->count, count);
+	count = min(pcp->total_count, count);
 
 	/* Ensure requested pindex is drained first. */
 	pindex = pindex - 1;
@@ -1227,7 +1227,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 			/* must delete to avoid corrupting pcp list */
 			list_del(&page->pcp_list);
 			count -= nr_pages;
-			pcp->count -= nr_pages;
+			pcp->total_count -= nr_pages;
 
 			/* MIGRATE_ISOLATE page should not go to pcplists */
 			VM_BUG_ON_PAGE(is_migrate_isolate(mt), page);
@@ -2209,13 +2209,13 @@ int decay_pcp_high(struct zone *zone, struct per_cpu_pages *pcp)
 	 * control latency. This caps pcp->high decrement too.
 	 */
 	if (pcp->high > high_min) {
-		pcp->high = max3(pcp->count - (batch << CONFIG_PCP_BATCH_SCALE_MAX),
+		pcp->high = max3(pcp->total_count - (batch << CONFIG_PCP_BATCH_SCALE_MAX),
 				 pcp->high - (pcp->high >> 3), high_min);
 		if (pcp->high > high_min)
 			todo++;
 	}
 
-	to_drain = pcp->count - pcp->high;
+	to_drain = pcp->total_count - pcp->high;
 	if (to_drain > 0) {
 		spin_lock(&pcp->lock);
 		free_pcppages_bulk(zone, to_drain, pcp, 0);
@@ -2237,7 +2237,7 @@ void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp)
 	int to_drain, batch;
 
 	batch = READ_ONCE(pcp->batch);
-	to_drain = min(pcp->count, batch);
+	to_drain = min(pcp->total_count, batch);
 	if (to_drain > 0) {
 		spin_lock(&pcp->lock);
 		free_pcppages_bulk(zone, to_drain, pcp, 0);
@@ -2254,9 +2254,9 @@ static void drain_pages_zone(unsigned int cpu, struct zone *zone)
 	struct per_cpu_pages *pcp;
 
 	pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
-	if (pcp->count) {
+	if (pcp->total_count) {
 		spin_lock(&pcp->lock);
-		free_pcppages_bulk(zone, pcp->count, pcp, 0);
+		free_pcppages_bulk(zone, pcp->total_count, pcp, 0);
 		spin_unlock(&pcp->lock);
 	}
 }
@@ -2292,7 +2292,7 @@ void drain_local_pages(struct zone *zone)
  *
  * drain_all_pages() is optimized to only execute on cpus where pcplists are
  * not empty. The check for non-emptiness can however race with a free to
- * pcplist that has not yet increased the pcp->count from 0 to 1. Callers
+ * pcplist that has not yet increased the pcp->total_count from 0 to 1. Callers
  * that need the guarantee that every CPU has drained can disable the
  * optimizing racy check.
  */
@@ -2336,12 +2336,12 @@ static void __drain_all_pages(struct zone *zone, bool force_all_cpus)
 			has_pcps = true;
 		} else if (zone) {
 			pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
-			if (pcp->count)
+			if (pcp->total_count)
 				has_pcps = true;
 		} else {
 			for_each_populated_zone(z) {
 				pcp = per_cpu_ptr(z->per_cpu_pageset, cpu);
-				if (pcp->count) {
+				if (pcp->total_count) {
 					has_pcps = true;
 					break;
 				}
@@ -2393,7 +2393,7 @@ static int nr_pcp_free(struct per_cpu_pages *pcp, int batch, int high, bool free
 
 	/* Free as much as possible if batch freeing high-order pages. */
 	if (unlikely(free_high))
-		return min(pcp->count, batch << CONFIG_PCP_BATCH_SCALE_MAX);
+		return min(pcp->total_count, batch << CONFIG_PCP_BATCH_SCALE_MAX);
 
 	/* Check for PCP disabled or boot pageset */
 	if (unlikely(high < batch))
@@ -2448,8 +2448,8 @@ static int nr_pcp_high(struct per_cpu_pages *pcp, struct zone *zone,
 		int free_count = max_t(int, pcp->free_count, batch);
 
 		pcp->high = max(high - free_count, high_min);
-		high = max(pcp->count, high_min);
-	} else if (pcp->count >= high) {
+		high = max(pcp->total_count, high_min);
+	} else if (pcp->total_count >= high) {
 		int need_high = pcp->free_count + batch;
 
 		/* pcp->high should be large enough to hold batch freed pages */
@@ -2477,7 +2477,7 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
 	__count_vm_events(PGFREE, 1 << order);
 	pindex = order_to_pindex(migratetype, order);
 	list_add(&page->pcp_list, &pcp->lists[pindex]);
-	pcp->count += 1 << order;
+	pcp->total_count += 1 << order;
 
 	batch = READ_ONCE(pcp->batch);
 	/*
@@ -2490,7 +2490,7 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
 		free_high = (pcp->free_count >= batch &&
 			     (pcp->flags & PCPF_PREV_FREE_HIGH_ORDER) &&
 			     (!(pcp->flags & PCPF_FREE_HIGH_BATCH) ||
-			      pcp->count >= READ_ONCE(batch)));
+			      pcp->total_count >= READ_ONCE(batch)));
 		pcp->flags |= PCPF_PREV_FREE_HIGH_ORDER;
 	} else if (pcp->flags & PCPF_PREV_FREE_HIGH_ORDER) {
 		pcp->flags &= ~PCPF_PREV_FREE_HIGH_ORDER;
@@ -2498,7 +2498,7 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
 	if (pcp->free_count < (batch << CONFIG_PCP_BATCH_SCALE_MAX))
 		pcp->free_count += (1 << order);
 	high = nr_pcp_high(pcp, zone, batch, free_high);
-	if (pcp->count >= high) {
+	if (pcp->total_count >= high) {
 		free_pcppages_bulk(zone, nr_pcp_free(pcp, batch, high, free_high),
 				   pcp, pindex);
 		if (test_bit(ZONE_BELOW_HIGH, &zone->flags) &&
@@ -2815,7 +2815,7 @@ static int nr_pcp_alloc(struct per_cpu_pages *pcp, struct zone *zone, int order)
 		high = pcp->high = min(high + batch, high_max);
 
 	if (!order) {
-		max_nr_alloc = max(high - pcp->count - base_batch, base_batch);
+		max_nr_alloc = max(high - pcp->total_count - base_batch, base_batch);
 		/*
 		 * Double the number of pages allocated each time there is
 		 * subsequent allocation of order-0 pages without any freeing.
@@ -2857,14 +2857,14 @@ struct page *__rmqueue_pcplist(struct zone *zone, unsigned int order,
 					batch, list,
 					migratetype, alloc_flags);
 
-			pcp->count += alloced << order;
+			pcp->total_count += alloced << order;
 			if (unlikely(list_empty(list)))
 				return NULL;
 		}
 
 		page = list_first_entry(list, struct page, pcp_list);
 		list_del(&page->pcp_list);
-		pcp->count -= 1 << order;
+		pcp->total_count -= 1 << order;
 	} while (check_new_pages(page, order));
 
 	return page;
@@ -5482,7 +5482,7 @@ static int zone_highsize(struct zone *zone, int batch, int cpu_online,
 
 /*
  * pcp->high and pcp->batch values are related and generally batch is lower
- * than high. They are also related to pcp->count such that count is lower
+ * than high. They are also related to pcp->total_count such that count is lower
  * than high, and as soon as it reaches high, the pcplist is flushed.
  *
  * However, guaranteeing these relations at all times would require e.g. write
@@ -5490,7 +5490,7 @@ static int zone_highsize(struct zone *zone, int batch, int cpu_online,
  * thus be prone to error and bad for performance. Thus the update only prevents
  * store tearing. Any new users of pcp->batch, pcp->high_min and pcp->high_max
  * should ensure they can cope with those fields changing asynchronously, and
- * fully trust only the pcp->count field on the local CPU with interrupts
+ * fully trust only the pcp->total_count field on the local CPU with interrupts
  * disabled.
 *
 * mutex_is_locked(&pcp_batch_high_lock) required when calling this function
diff --git a/mm/show_mem.c b/mm/show_mem.c
index 8dcfafbd283c..6fcb2c771613 100644
--- a/mm/show_mem.c
+++ b/mm/show_mem.c
@@ -197,7 +197,7 @@ static void show_free_areas(unsigned int filter, nodemask_t *nodemask, int max_z
 			continue;
 
 		for_each_online_cpu(cpu)
-			free_pcp += per_cpu_ptr(zone->per_cpu_pageset, cpu)->count;
+			free_pcp += per_cpu_ptr(zone->per_cpu_pageset, cpu)->total_count;
 	}
 
 	printk("active_anon:%lu inactive_anon:%lu isolated_anon:%lu\n"
@@ -299,7 +299,7 @@ static void show_free_areas(unsigned int filter, nodemask_t *nodemask, int max_z
 
 		free_pcp = 0;
 		for_each_online_cpu(cpu)
-			free_pcp += per_cpu_ptr(zone->per_cpu_pageset, cpu)->count;
+			free_pcp += per_cpu_ptr(zone->per_cpu_pageset, cpu)->total_count;
 
 		show_node(zone);
 		printk(KERN_CONT
@@ -342,7 +342,7 @@ static void show_free_areas(unsigned int filter, nodemask_t *nodemask, int max_z
 			K(zone_page_state(zone, NR_MLOCK)),
 			K(zone_page_state(zone, NR_BOUNCE)),
 			K(free_pcp),
-			K(this_cpu_read(zone->per_cpu_pageset->count)),
+			K(this_cpu_read(zone->per_cpu_pageset->total_count)),
 			K(zone_page_state(zone, NR_FREE_CMA_PAGES)));
 		printk("lowmem_reserve[]:");
 		for (i = 0; i < MAX_NR_ZONES; i++)
diff --git a/mm/vmstat.c b/mm/vmstat.c
index db79935e4a54..c1e8096ff0a6 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -846,7 +846,7 @@ static int refresh_cpu_vm_stats(bool do_pagesets)
 		 * if not then there is nothing to expire.
 		 */
 		if (!__this_cpu_read(pcp->expire) ||
-		    !__this_cpu_read(pcp->count))
+		    !__this_cpu_read(pcp->total_count))
 			continue;
 
 		/*
@@ -862,7 +862,7 @@ static int refresh_cpu_vm_stats(bool do_pagesets)
 				continue;
 			}
 
-			if (__this_cpu_read(pcp->count)) {
+			if (__this_cpu_read(pcp->total_count)) {
 				drain_zone_pages(zone, this_cpu_ptr(pcp));
 				changes++;
 			}
@@ -1745,7 +1745,7 @@ static void zoneinfo_show_print(struct seq_file *m, pg_data_t *pgdat,
 			   "\n              high:  %i"
 			   "\n              batch: %i",
 			   i,
-			   pcp->count,
+			   pcp->total_count,
 			   pcp->high,
 			   pcp->batch);
 #ifdef CONFIG_SMP