From patchwork Sun Apr 2 10:42:34 2023
X-Patchwork-Submitter: "Aneesh Kumar K.V"
X-Patchwork-Id: 13197363
From: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
To: linux-mm@kvack.org, akpm@linux-foundation.org
Cc: Dave Hansen, Johannes Weiner, Matthew Wilcox, Mel Gorman, Yu Zhao,
    Wei Xu, Guru Anbalagane, "Aneesh Kumar K.V"
Subject: [RFC PATCH v1 1/7] mm: Move some code around so that next patch is simpler
Date: Sun, 2 Apr 2023 16:12:34 +0530
Message-Id: <20230402104240.1734931-2-aneesh.kumar@linux.ibm.com>
In-Reply-To: <20230402104240.1734931-1-aneesh.kumar@linux.ibm.com>
References: <20230402104240.1734931-1-aneesh.kumar@linux.ibm.com>
Move lru_gen_add_folio() out of the header and into a .c file. A later
patch will support an arch-specific mapping of page access count to
generation and will use that mapping when adding a folio to a lruvec.
This move enables that.

No functional change in this patch.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
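To make the motivation concrete: once the function lives in mm/vmscan.c it can
pick the target generation from an arch hook instead of only from page flags.
A rough sketch of the eventual call shape follows; it is illustrative only,
arch_get_lru_gen_seq() is introduced later in this series and the body is elided:

/* sketch only -- the real change is made by later patches in this series */
bool lru_gen_add_folio(struct lruvec *lruvec, struct folio *folio, bool reclaiming)
{
	unsigned long seq;

	if (folio_test_active(folio))
		seq = lruvec->lrugen.max_seq;
	else
		/* arch-reported access count decides how "hot" the folio is */
		seq = arch_get_lru_gen_seq(lruvec, folio);

	/* ... place the folio on lrugen->lists[lru_gen_from_seq(seq)] ... */
	return true;
}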
---
 include/linux/mm_inline.h |  47 +-------------
 mm/vmscan.c               | 127 ++++++++++++++++++++++++++------------
 2 files changed, 88 insertions(+), 86 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index ff3f3f23f649..4dc2ab95d612 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -217,52 +217,7 @@ static inline void lru_gen_update_size(struct lruvec *lruvec, struct folio *folio,
 	VM_WARN_ON_ONCE(lru_gen_is_active(lruvec, old_gen) && !lru_gen_is_active(lruvec, new_gen));
 }
 
-static inline bool lru_gen_add_folio(struct lruvec *lruvec, struct folio *folio, bool reclaiming)
-{
-	unsigned long seq;
-	unsigned long flags;
-	int gen = folio_lru_gen(folio);
-	int type = folio_is_file_lru(folio);
-	int zone = folio_zonenum(folio);
-	struct lru_gen_struct *lrugen = &lruvec->lrugen;
-
-	VM_WARN_ON_ONCE_FOLIO(gen != -1, folio);
-
-	if (folio_test_unevictable(folio) || !lrugen->enabled)
-		return false;
-	/*
-	 * There are three common cases for this page:
-	 * 1. If it's hot, e.g., freshly faulted in or previously hot and
-	 *    migrated, add it to the youngest generation.
-	 * 2. If it's cold but can't be evicted immediately, i.e., an anon page
-	 *    not in swapcache or a dirty page pending writeback, add it to the
-	 *    second oldest generation.
-	 * 3. Everything else (clean, cold) is added to the oldest generation.
-	 */
-	if (folio_test_active(folio))
-		seq = lrugen->max_seq;
-	else if ((type == LRU_GEN_ANON && !folio_test_swapcache(folio)) ||
-		 (folio_test_reclaim(folio) &&
-		  (folio_test_dirty(folio) || folio_test_writeback(folio))))
-		seq = lrugen->min_seq[type] + 1;
-	else
-		seq = lrugen->min_seq[type];
-
-	gen = lru_gen_from_seq(seq);
-	flags = (gen + 1UL) << LRU_GEN_PGOFF;
-	/* see the comment on MIN_NR_GENS about PG_active */
-	set_mask_bits(&folio->flags, LRU_GEN_MASK | BIT(PG_active), flags);
-
-	lru_gen_update_size(lruvec, folio, -1, gen);
-	/* for folio_rotate_reclaimable() */
-	if (reclaiming)
-		list_add_tail(&folio->lru, &lrugen->lists[gen][type][zone]);
-	else
-		list_add(&folio->lru, &lrugen->lists[gen][type][zone]);
-
-	return true;
-}
-
+bool lru_gen_add_folio(struct lruvec *lruvec, struct folio *folio, bool reclaiming);
 static inline bool lru_gen_del_folio(struct lruvec *lruvec, struct folio *folio, bool reclaiming)
 {
 	unsigned long flags;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 5b7b8d4f5297..f47d80ae77ef 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3737,6 +3737,47 @@ static int folio_inc_gen(struct lruvec *lruvec, struct folio *folio, bool reclaiming)
 	return new_gen;
 }
 
+static unsigned long get_pte_pfn(pte_t pte, struct vm_area_struct *vma, unsigned long addr)
+{
+	unsigned long pfn = pte_pfn(pte);
+
+	VM_WARN_ON_ONCE(addr < vma->vm_start || addr >= vma->vm_end);
+
+	if (!pte_present(pte) || is_zero_pfn(pfn))
+		return -1;
+
+	if (WARN_ON_ONCE(pte_devmap(pte) || pte_special(pte)))
+		return -1;
+
+	if (WARN_ON_ONCE(!pfn_valid(pfn)))
+		return -1;
+
+	return pfn;
+}
+
+static struct folio *get_pfn_folio(unsigned long pfn, struct mem_cgroup *memcg,
+				   struct pglist_data *pgdat, bool can_swap)
+{
+	struct folio *folio;
+
+	/* try to avoid unnecessary memory loads */
+	if (pfn < pgdat->node_start_pfn || pfn >= pgdat_end_pfn(pgdat))
+		return NULL;
+
+	folio = pfn_folio(pfn);
+	if (folio_nid(folio) != pgdat->node_id)
+		return NULL;
+
+	if (folio_memcg_rcu(folio) != memcg)
+		return NULL;
+
+	/* file VMAs can contain anon pages from COW */
+	if (!folio_is_file_lru(folio) && !can_swap)
+		return NULL;
+
+	return folio;
+}
+
 static void update_batch_size(struct lru_gen_mm_walk *walk, struct folio *folio,
 			      int old_gen, int new_gen)
 {
@@ -3843,23 +3884,6 @@ static bool get_next_vma(unsigned long mask, unsigned long size, struct mm_walk *args,
 	return false;
 }
 
-static unsigned long get_pte_pfn(pte_t pte, struct vm_area_struct *vma, unsigned long addr)
-{
-	unsigned long pfn = pte_pfn(pte);
-
-	VM_WARN_ON_ONCE(addr < vma->vm_start || addr >= vma->vm_end);
-
-	if (!pte_present(pte) || is_zero_pfn(pfn))
-		return -1;
-
-	if (WARN_ON_ONCE(pte_devmap(pte) || pte_special(pte)))
-		return -1;
-
-	if (WARN_ON_ONCE(!pfn_valid(pfn)))
-		return -1;
-
-	return pfn;
-}
 
 #if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG)
 static unsigned long get_pmd_pfn(pmd_t pmd, struct vm_area_struct *vma, unsigned long addr)
@@ -3881,29 +3905,6 @@ static unsigned long get_pmd_pfn(pmd_t pmd, struct vm_area_struct *vma, unsigned long addr)
 }
 #endif
 
-static struct folio *get_pfn_folio(unsigned long pfn, struct mem_cgroup *memcg,
-				   struct pglist_data *pgdat, bool can_swap)
-{
-	struct folio *folio;
-
-	/* try to avoid unnecessary memory loads */
-	if (pfn < pgdat->node_start_pfn || pfn >= pgdat_end_pfn(pgdat))
-		return NULL;
-
-	folio = pfn_folio(pfn);
-	if (folio_nid(folio) != pgdat->node_id)
-		return NULL;
-
-	if (folio_memcg_rcu(folio) != memcg)
-		return NULL;
-
-	/* file VMAs can contain anon pages from COW */
-	if (!folio_is_file_lru(folio) && !can_swap)
-		return NULL;
-
-	return folio;
-}
-
 static bool suitable_to_scan(int total, int young)
 {
 	int n = clamp_t(int, cache_line_size() / sizeof(pte_t), 2, 8);
@@ -5252,6 +5253,52 @@ static void lru_gen_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
 	blk_finish_plug(&plug);
 }
 
+bool lru_gen_add_folio(struct lruvec *lruvec, struct folio *folio, bool reclaiming)
+{
+	unsigned long seq;
+	unsigned long flags;
+	int gen = folio_lru_gen(folio);
+	int type = folio_is_file_lru(folio);
+	int zone = folio_zonenum(folio);
+	struct lru_gen_struct *lrugen = &lruvec->lrugen;
+
+	VM_WARN_ON_ONCE_FOLIO(gen != -1, folio);
+
+	if (folio_test_unevictable(folio) || !lrugen->enabled)
+		return false;
+	/*
+	 * There are three common cases for this page:
+	 * 1. If it's hot, e.g., freshly faulted in or previously hot and
+	 *    migrated, add it to the youngest generation.
+	 * 2. If it's cold but can't be evicted immediately, i.e., an anon page
+	 *    not in swapcache or a dirty page pending writeback, add it to the
+	 *    second oldest generation.
+	 * 3. Everything else (clean, cold) is added to the oldest generation.
+	 */
+	if (folio_test_active(folio))
+		seq = lrugen->max_seq;
+	else if ((type == LRU_GEN_ANON && !folio_test_swapcache(folio)) ||
+		 (folio_test_reclaim(folio) &&
+		  (folio_test_dirty(folio) || folio_test_writeback(folio))))
+		seq = lrugen->min_seq[type] + 1;
+	else
+		seq = lrugen->min_seq[type];
+
+	gen = lru_gen_from_seq(seq);
+	flags = (gen + 1UL) << LRU_GEN_PGOFF;
+	/* see the comment on MIN_NR_GENS about PG_active */
+	set_mask_bits(&folio->flags, LRU_GEN_MASK | BIT(PG_active), flags);
+
+	lru_gen_update_size(lruvec, folio, -1, gen);
+	/* for folio_rotate_reclaimable() */
+	if (reclaiming)
+		list_add_tail(&folio->lru, &lrugen->lists[gen][type][zone]);
+	else
+		list_add(&folio->lru, &lrugen->lists[gen][type][zone]);
+
+	return true;
+}
+
 /******************************************************************************
  *                          state change
  ******************************************************************************/

From patchwork Sun Apr 2 10:42:35 2023
X-Patchwork-Submitter: "Aneesh Kumar K.V"
X-Patchwork-Id: 13197364
From: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
To: linux-mm@kvack.org, akpm@linux-foundation.org
Cc: Dave Hansen, Johannes Weiner, Matthew Wilcox, Mel Gorman, Yu Zhao,
    Wei Xu, Guru Anbalagane, "Aneesh Kumar K.V"
Subject: [RFC PATCH v1 2/7] mm: Don't build multi-gen LRU page table walk code on architecture not supported
Date: Sun, 2 Apr 2023 16:12:35 +0530
Message-Id: <20230402104240.1734931-3-aneesh.kumar@linux.ibm.com>
In-Reply-To: <20230402104240.1734931-1-aneesh.kumar@linux.ibm.com>
References: <20230402104240.1734931-1-aneesh.kumar@linux.ibm.com>

Not all architectures support hardware atomic updates of the access bits.
On such architectures we do not use the page table walk to classify pages
into generations. Add a kernel config option and avoid building all of the
page table walk code on those architectures.

lru_gen_look_around() is duplicated because lru_gen_mm_walk is not always
available.
This patch results in some improvement on powerpc because it removes
additional code that is not used for page classification.

memcached:
              Total Ops/sec
  mglru              160821
  PATCH 2            164572   (~2.3% higher)

mongodb:
              Throughput (Ops/sec)
  mglru               92987
  PATCH 2             93740   (~0.8% higher)

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
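For orientation before the large diff below: the patch wraps the mm-walk based
aging code in the new option and provides empty stubs otherwise, so the rest of
vmscan.c does not need to care which variant was built. A minimal sketch of that
pattern, mirroring the set_mm_walk()/clear_mm_walk() hunks in the diff
(illustrative only, not an additional change):

#ifdef CONFIG_LRU_TASK_PAGE_AGING
/* full implementation: hand out the per-thread lru_gen_mm_walk */
static void *set_mm_walk(struct pglist_data *pgdat)
{
	struct lru_gen_mm_walk *walk = current->reclaim_state->mm_walk;

	/* ... allocate one for direct reclaim if needed ... */
	return walk;
}
#else
/* page table walk aging not built: callers see no walk and skip batching */
static inline void *set_mm_walk(struct pglist_data *pgdat)
{
	return NULL;
}

static inline void clear_mm_walk(void)
{
}
#endif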
---
 arch/Kconfig               |   3 +
 arch/arm64/Kconfig         |   1 +
 arch/x86/Kconfig           |   1 +
 include/linux/memcontrol.h |   2 +-
 include/linux/mm_types.h   |   8 +-
 include/linux/mmzone.h     |  10 +-
 include/linux/swap.h       |   2 +-
 kernel/fork.c              |   2 +-
 mm/memcontrol.c            |   2 +-
 mm/vmscan.c                | 221 ++++++++++++++++++++++++++++++++++---
 10 files changed, 230 insertions(+), 22 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 12e3ddabac9d..61fc138bb91a 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -1426,6 +1426,9 @@ config DYNAMIC_SIGFRAME
 config HAVE_ARCH_NODE_DEV_GROUP
 	bool
 
+config LRU_TASK_PAGE_AGING
+	bool
+
 config ARCH_HAS_NONLEAF_PMD_YOUNG
 	bool
 	help
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 27b2592698b0..b783b339ef59 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -219,6 +219,7 @@ config ARM64
 	select IRQ_DOMAIN
 	select IRQ_FORCED_THREADING
 	select KASAN_VMALLOC if KASAN
+	select LRU_TASK_PAGE_AGING
 	select MODULES_USE_ELF_RELA
 	select NEED_DMA_MAP_STATE
 	select NEED_SG_DMA_LENGTH
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index a825bf031f49..805d3f6a1a58 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -274,6 +274,7 @@ config X86
 	select HAVE_GENERIC_VDSO
 	select HOTPLUG_SMT if SMP
 	select IRQ_FORCED_THREADING
+	select LRU_TASK_PAGE_AGING
 	select NEED_PER_CPU_EMBED_FIRST_CHUNK
 	select NEED_PER_CPU_PAGE_FIRST_CHUNK
 	select NEED_SG_DMA_LENGTH
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index a7b5925a033e..6b48a30a0dae 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -320,7 +320,7 @@ struct mem_cgroup {
 	struct deferred_split deferred_split_queue;
 #endif
 
-#ifdef CONFIG_LRU_GEN
+#ifdef CONFIG_LRU_TASK_PAGE_AGING
 	/* per-memcg mm_struct list */
 	struct lru_gen_mm_list mm_list;
 #endif
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index af8119776ab1..7bca8987a86b 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -796,7 +796,7 @@ struct mm_struct {
 		 */
 		unsigned long ksm_rmap_items;
 #endif
-#ifdef CONFIG_LRU_GEN
+#ifdef CONFIG_LRU_TASK_PAGE_AGING
 		struct {
 			/* this mm_struct is on lru_gen_mm_list */
 			struct list_head list;
@@ -811,7 +811,7 @@ struct mm_struct {
 			struct mem_cgroup *memcg;
 #endif
 		} lru_gen;
-#endif /* CONFIG_LRU_GEN */
+#endif /* CONFIG_LRU_TASK_PAGE_AGING */
 	} __randomize_layout;
 
 	/*
@@ -839,7 +839,7 @@ static inline cpumask_t *mm_cpumask(struct mm_struct *mm)
 	return (struct cpumask *)&mm->cpu_bitmap;
 }
 
-#ifdef CONFIG_LRU_GEN
+#ifdef CONFIG_LRU_TASK_PAGE_AGING
 
 struct lru_gen_mm_list {
 	/* mm_struct list for page table walkers */
@@ -873,7 +873,7 @@ static inline void lru_gen_use_mm(struct mm_struct *mm)
 	WRITE_ONCE(mm->lru_gen.bitmap, -1);
 }
 
-#else /* !CONFIG_LRU_GEN */
+#else /* !CONFIG_LRU_TASK_PAGE_AGING */
 
 static inline void lru_gen_add_mm(struct mm_struct *mm)
 {
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index cd28a100d9e4..0bcc5d88239a 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -428,6 +428,7 @@ struct lru_gen_struct {
 	bool enabled;
 };
 
+#ifdef CONFIG_LRU_TASK_PAGE_AGING
 enum {
 	MM_LEAF_TOTAL,		/* total leaf entries */
 	MM_LEAF_OLD,		/* old leaf entries */
@@ -474,6 +475,7 @@ struct lru_gen_mm_walk {
 	bool can_swap;
 	bool force_scan;
 };
+#endif
 
 void lru_gen_init_lruvec(struct lruvec *lruvec);
 void lru_gen_look_around(struct page_vma_mapped_walk *pvmw);
@@ -525,8 +527,14 @@ struct lruvec {
 #ifdef CONFIG_LRU_GEN
 	/* evictable pages divided into generations */
 	struct lru_gen_struct		lrugen;
+#ifdef CONFIG_LRU_TASK_PAGE_AGING
 	/* to concurrently iterate lru_gen_mm_list */
 	struct lru_gen_mm_state		mm_state;
+#else
+	/* for concurrent update of max_seq without holding lru_lock */
+	struct wait_queue_head		seq_update_wait;
+	bool				seq_update_progress;
+#endif
 #endif
 #ifdef CONFIG_MEMCG
 	struct pglist_data *pgdat;
@@ -1240,7 +1248,7 @@ typedef struct pglist_data {
 
 	unsigned long		flags;
 
-#ifdef CONFIG_LRU_GEN
+#ifdef CONFIG_LRU_TASK_PAGE_AGING
 	/* kswap mm walk data */
 	struct lru_gen_mm_walk	mm_walk;
 #endif
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 0ceed49516ad..d79976635c42 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -154,7 +154,7 @@ union swap_header {
  */
 struct reclaim_state {
 	unsigned long reclaimed_slab;
-#ifdef CONFIG_LRU_GEN
+#ifdef CONFIG_LRU_TASK_PAGE_AGING
 	/* per-thread mm walk data */
 	struct lru_gen_mm_walk *mm_walk;
 #endif
diff --git a/kernel/fork.c b/kernel/fork.c
index 038b898dad52..804517394f55 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2708,7 +2708,7 @@ pid_t kernel_clone(struct kernel_clone_args *args)
 		get_task_struct(p);
 	}
 
-	if (IS_ENABLED(CONFIG_LRU_GEN) && !(clone_flags & CLONE_VM)) {
+	if (IS_ENABLED(CONFIG_LRU_TASK_PAGE_AGING) && !(clone_flags & CLONE_VM)) {
 		/* lock the task to synchronize with memcg migration */
 		task_lock(p);
 		lru_gen_add_mm(p->mm);
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 2ca843cc3aa6..1302f00bd5e7 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6305,7 +6305,7 @@ static void mem_cgroup_move_task(void)
 }
 #endif
 
-#ifdef CONFIG_LRU_GEN
+#ifdef CONFIG_LRU_TASK_PAGE_AGING
 static void mem_cgroup_attach(struct cgroup_taskset *tset)
 {
 	struct task_struct *task;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index f47d80ae77ef..f92b689af2a5 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3225,6 +3225,7 @@ static bool __maybe_unused seq_is_valid(struct lruvec *lruvec)
 	       get_nr_gens(lruvec, LRU_GEN_ANON) <= MAX_NR_GENS;
 }
 
+#ifdef CONFIG_LRU_TASK_PAGE_AGING
 /******************************************************************************
  *                          mm_struct list
  ******************************************************************************/
@@ -3586,6 +3587,7 @@ static bool iterate_mm_list_nowalk(struct lruvec *lruvec, unsigned long max_seq)
 
 	return success;
 }
+#endif
 
 /******************************************************************************
  *                          refault feedback loop
@@ -3778,6 +3780,7 @@ static struct folio *get_pfn_folio(unsigned long pfn, struct mem_cgroup *memcg,
 	return folio;
 }
 
+#ifdef CONFIG_LRU_TASK_PAGE_AGING
 static void update_batch_size(struct lru_gen_mm_walk *walk, struct folio *folio,
 			      int old_gen, int new_gen)
 {
@@ -4235,7 +4238,7 @@ static void walk_mm(struct lruvec *lruvec, struct mm_struct *mm, struct lru_gen_mm_walk *walk)
 	} while (err == -EAGAIN);
 }
 
-static struct lru_gen_mm_walk *set_mm_walk(struct pglist_data *pgdat)
+static void *set_mm_walk(struct pglist_data *pgdat)
 {
 	struct lru_gen_mm_walk *walk = current->reclaim_state->mm_walk;
 
@@ -4266,6 +4269,18 @@ static void clear_mm_walk(void)
 	if (!current_is_kswapd())
 		kfree(walk);
 }
+#else
+
+static inline void *set_mm_walk(struct pglist_data *pgdat)
+{
+	return NULL;
+}
+
+static inline void clear_mm_walk(void)
+{
+}
+
+#endif
 
 static bool inc_min_seq(struct lruvec *lruvec, int type, bool can_swap)
 {
@@ -4399,11 +4414,14 @@ static void inc_max_seq(struct lruvec *lruvec, bool can_swap, bool force_scan)
 	/* make sure preceding modifications appear */
 	smp_store_release(&lrugen->max_seq, lrugen->max_seq + 1);
+#ifndef CONFIG_LRU_TASK_PAGE_AGING
+	lruvec->seq_update_progress = false;
+#endif
 	spin_unlock_irq(&lruvec->lru_lock);
 }
-
+#ifdef CONFIG_LRU_TASK_PAGE_AGING
 static bool try_to_inc_max_seq(struct lruvec *lruvec, unsigned long max_seq,
-			       struct scan_control *sc, bool can_swap, bool force_scan)
+			       int scan_priority, bool can_swap, bool force_scan)
 {
 	bool success;
 	struct lru_gen_mm_walk *walk;
@@ -4429,7 +4447,7 @@ static bool try_to_inc_max_seq(struct lruvec *lruvec, unsigned long max_seq,
 		goto done;
 	}
 
-	walk = set_mm_walk(NULL);
+	walk = (struct lru_gen_mm_walk *)set_mm_walk(NULL);
 	if (!walk) {
 		success = iterate_mm_list_nowalk(lruvec, max_seq);
 		goto done;
@@ -4449,7 +4467,7 @@ static bool try_to_inc_max_seq(struct lruvec *lruvec, unsigned long max_seq,
 	} while (mm);
 done:
 	if (!success) {
-		if (sc->priority <= DEF_PRIORITY - 2)
+		if (scan_priority <= DEF_PRIORITY - 2)
 			wait_event_killable(lruvec->mm_state.wait,
 					    max_seq < READ_ONCE(lrugen->max_seq));
 
@@ -4465,6 +4483,61 @@ static bool try_to_inc_max_seq(struct lruvec *lruvec, unsigned long max_seq,
 
 	return true;
 }
+#else
+
+/*
+ * inc_max_seq can drop the lru_lock in between. So use a waitqueue seq_update_progress
+ * to allow concurrent access.
+ */
+bool __try_to_inc_max_seq(struct lruvec *lruvec,
+			  unsigned long max_seq, int scan_priority,
+			  bool can_swap, bool force_scan)
+{
+	bool success = false;
+	struct lru_gen_struct *lrugen = &lruvec->lrugen;
+
+	VM_WARN_ON_ONCE(max_seq > READ_ONCE(lrugen->max_seq));
+
+	/* see the comment in iterate_mm_list() */
+	if (lruvec->seq_update_progress)
+		success = false;
+	else {
+		spin_lock_irq(&lruvec->lru_lock);
+
+		if (max_seq != lrugen->max_seq)
+			goto done;
+
+		if (lruvec->seq_update_progress)
+			goto done;
+
+		success = true;
+		lruvec->seq_update_progress = true;
+done:
+		spin_unlock_irq(&lruvec->lru_lock);
+	}
+
+	if (!success) {
+		if (scan_priority <= DEF_PRIORITY - 2)
+			wait_event_killable(lruvec->seq_update_wait,
+					    max_seq < READ_ONCE(lrugen->max_seq));
+
+		return max_seq < READ_ONCE(lrugen->max_seq);
+	}
+
+	VM_WARN_ON_ONCE(max_seq != READ_ONCE(lrugen->max_seq));
+	inc_max_seq(lruvec, can_swap, force_scan);
+	/* either this sees any waiters or they will see updated max_seq */
+	if (wq_has_sleeper(&lruvec->seq_update_wait))
+		wake_up_all(&lruvec->seq_update_wait);
+
+	return success;
+}
+
+static bool try_to_inc_max_seq(struct lruvec *lruvec, unsigned long max_seq,
+			       int scan_priority, bool can_swap, bool force_scan)
+{
+	return __try_to_inc_max_seq(lruvec, max_seq, scan_priority, can_swap, force_scan);
+}
+#endif
 
 static bool should_run_aging(struct lruvec *lruvec, unsigned long max_seq, unsigned long *min_seq,
 			     struct scan_control *sc, bool can_swap, unsigned long *nr_to_scan)
@@ -4554,8 +4627,7 @@ static bool age_lruvec(struct lruvec *lruvec, struct scan_control *sc, unsigned long min_ttl)
 	}
 
 	if (need_aging)
-		try_to_inc_max_seq(lruvec, max_seq, sc, swappiness, false);
-
+		try_to_inc_max_seq(lruvec, max_seq, sc->priority, swappiness, false);
 	return true;
 }
 
@@ -4617,6 +4689,7 @@ static void lru_gen_age_node(struct pglist_data *pgdat, struct scan_control *sc)
 	}
 }
 
+#ifdef CONFIG_LRU_TASK_PAGE_AGING
 /*
  * This function exploits spatial locality when shrink_folio_list() walks the
  * rmap. It scans the adjacent PTEs of a young PTE and promotes hot pages. If
@@ -4744,6 +4817,115 @@ void lru_gen_look_around(struct page_vma_mapped_walk *pvmw)
 
 	mem_cgroup_unlock_pages();
 }
+#else
+/*
+ * This function exploits spatial locality when shrink_page_list() walks the
+ * rmap. It scans the adjacent PTEs of a young PTE and promotes hot pages.
+ */
+void lru_gen_look_around(struct page_vma_mapped_walk *pvmw)
+{
+	int i;
+	pte_t *pte;
+	unsigned long start;
+	unsigned long end;
+	unsigned long addr;
+	unsigned long bitmap[BITS_TO_LONGS(MIN_LRU_BATCH)] = {};
+	struct folio *folio = pfn_folio(pvmw->pfn);
+	struct mem_cgroup *memcg = folio_memcg(folio);
+	struct pglist_data *pgdat = folio_pgdat(folio);
+	struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat);
+	DEFINE_MAX_SEQ(lruvec);
+	int old_gen, new_gen = lru_gen_from_seq(max_seq);
+
+	lockdep_assert_held(pvmw->ptl);
+	VM_WARN_ON_ONCE_FOLIO(folio_test_lru(folio), folio);
+
+	if (spin_is_contended(pvmw->ptl))
+		return;
+
+	start = max(pvmw->address & PMD_MASK, pvmw->vma->vm_start);
+	end = min(pvmw->address | ~PMD_MASK, pvmw->vma->vm_end - 1) + 1;
+
+	if (end - start > MIN_LRU_BATCH * PAGE_SIZE) {
+		if (pvmw->address - start < MIN_LRU_BATCH * PAGE_SIZE / 2)
+			end = start + MIN_LRU_BATCH * PAGE_SIZE;
+		else if (end - pvmw->address < MIN_LRU_BATCH * PAGE_SIZE / 2)
+			start = end - MIN_LRU_BATCH * PAGE_SIZE;
+		else {
+			start = pvmw->address - MIN_LRU_BATCH * PAGE_SIZE / 2;
+			end = pvmw->address + MIN_LRU_BATCH * PAGE_SIZE / 2;
+		}
+	}
+
+	pte = pvmw->pte - (pvmw->address - start) / PAGE_SIZE;
+
+	rcu_read_lock();
+	arch_enter_lazy_mmu_mode();
+
+	for (i = 0, addr = start; addr != end; i++, addr += PAGE_SIZE) {
+		unsigned long pfn;
+
+		pfn = get_pte_pfn(pte[i], pvmw->vma, addr);
+		if (pfn == -1)
+			continue;
+
+		if (!pte_young(pte[i]))
+			continue;
+
+		folio = get_pfn_folio(pfn, memcg, pgdat, true);
+		if (!folio)
+			continue;
+
+		if (!ptep_test_and_clear_young(pvmw->vma, addr, pte + i))
+			VM_WARN_ON_ONCE(true);
+
+		if (pte_dirty(pte[i]) && !folio_test_dirty(folio) &&
+		    !(folio_test_anon(folio) && folio_test_swapbacked(folio) &&
+		      !folio_test_swapcache(folio)))
+			folio_mark_dirty(folio);
+
+		old_gen = folio_lru_gen(folio);
+		if (old_gen < 0)
+			folio_set_referenced(folio);
+		else if (old_gen != new_gen)
+			__set_bit(i, bitmap);
+	}
+
+	arch_leave_lazy_mmu_mode();
+	rcu_read_unlock();
+
+	if (bitmap_weight(bitmap, MIN_LRU_BATCH) < PAGEVEC_SIZE) {
+		for_each_set_bit(i, bitmap, MIN_LRU_BATCH) {
+			folio = pfn_folio(pte_pfn(pte[i]));
+			folio_activate(folio);
+		}
+		return;
+	}
+
+	/* folio_update_gen() requires stable folio_memcg() */
+	if (!mem_cgroup_trylock_pages(memcg))
+		return;
+
+	spin_lock_irq(&lruvec->lru_lock);
+	new_gen = lru_gen_from_seq(lruvec->lrugen.max_seq);
+
+	for (each = 0; 0; ) ;
+	for_each_set_bit(i, bitmap, MIN_LRU_BATCH) {
+		folio = pfn_folio(pte_pfn(pte[i]));
+		if (folio_memcg_rcu(folio) != memcg)
+			continue;
+
+		old_gen = folio_update_gen(folio, new_gen);
+		if (old_gen < 0 || old_gen == new_gen)
+			continue;
+
+		lru_gen_update_size(lruvec, folio, old_gen, new_gen);
+	}
+
+	spin_unlock_irq(&lruvec->lru_lock);
+
+	mem_cgroup_unlock_pages();
+}
+#endif
 
 /******************************************************************************
  *                          the eviction
 ******************************************************************************/
@@ -5026,7 +5208,9 @@ static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swappiness)
 	struct folio *next;
 	enum vm_event_item item;
 	struct reclaim_stat stat;
+#ifdef CONFIG_LRU_TASK_PAGE_AGING
 	struct lru_gen_mm_walk *walk;
+#endif
 	bool skip_retry = false;
 	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
 	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
@@ -5081,9 +5265,11 @@ static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swappiness)
 
 	move_folios_to_lru(lruvec, &list);
 
+#ifdef CONFIG_LRU_TASK_PAGE_AGING
 	walk = current->reclaim_state->mm_walk;
 	if (walk && walk->batched)
 		reset_batch_size(lruvec, walk);
+#endif
 
 	item = PGSTEAL_KSWAPD + reclaimer_offset();
 	if (!cgroup_reclaim(sc))
@@ -5140,8 +5326,9 @@ static unsigned long get_nr_to_scan(struct lruvec *lruvec, struct scan_control *sc,
 	if (current_is_kswapd())
 		return 0;
 
-	if (try_to_inc_max_seq(lruvec, max_seq, sc, can_swap, false))
+	if (try_to_inc_max_seq(lruvec, max_seq, sc->priority, can_swap, false))
 		return nr_to_scan;
+
 done:
 	return min_seq[!can_swap] + MIN_NR_GENS <= max_seq ? nr_to_scan : 0;
 }
@@ -5610,6 +5797,7 @@ static void lru_gen_seq_show_full(struct seq_file *m, struct lruvec *lruvec,
 		seq_putc(m, '\n');
 	}
 
+#ifdef CONFIG_LRU_TASK_PAGE_AGING
 	seq_puts(m, "      ");
 	for (i = 0; i < NR_MM_STATS; i++) {
 		const char *s = "      ";
@@ -5626,6 +5814,7 @@ static void lru_gen_seq_show_full(struct seq_file *m, struct lruvec *lruvec,
 		seq_printf(m, " %10lu%c", n, s[i]);
 	}
 	seq_putc(m, '\n');
+#endif
 }
 
 /* see Documentation/admin-guide/mm/multigen_lru.rst for details */
@@ -5707,7 +5896,7 @@ static int run_aging(struct lruvec *lruvec, unsigned long seq, struct scan_control *sc,
 	if (!force_scan && min_seq[!can_swap] + MAX_NR_GENS - 1 <= max_seq)
 		return -ERANGE;
 
-	try_to_inc_max_seq(lruvec, max_seq, sc, can_swap, force_scan);
+	try_to_inc_max_seq(lruvec, max_seq, sc->priority, can_swap, force_scan);
 
 	return 0;
 }
@@ -5898,21 +6087,26 @@ void lru_gen_init_lruvec(struct lruvec *lruvec)
 	for_each_gen_type_zone(gen, type, zone)
 		INIT_LIST_HEAD(&lrugen->lists[gen][type][zone]);
 
-
+#ifdef CONFIG_LRU_TASK_PAGE_AGING
 	lruvec->mm_state.seq = MIN_NR_GENS;
 	init_waitqueue_head(&lruvec->mm_state.wait);
+#else
+	lruvec->seq_update_progress = false;
+	init_waitqueue_head(&lruvec->seq_update_wait);
+#endif
 }
 
 #ifdef CONFIG_MEMCG
 void lru_gen_init_memcg(struct mem_cgroup *memcg)
 {
+#ifdef CONFIG_LRU_TASK_PAGE_AGING
 	INIT_LIST_HEAD(&memcg->mm_list.fifo);
 	spin_lock_init(&memcg->mm_list.lock);
+#endif
 }
 
 void lru_gen_exit_memcg(struct mem_cgroup *memcg)
 {
-	int i;
 	int nid;
 
 	for_each_node(nid) {
@@ -5920,11 +6114,12 @@ void lru_gen_exit_memcg(struct mem_cgroup *memcg)
 
 		VM_WARN_ON_ONCE(memchr_inv(lruvec->lrugen.nr_pages, 0,
 					   sizeof(lruvec->lrugen.nr_pages)));
-
-		for (i = 0; i < NR_BLOOM_FILTERS; i++) {
+#ifdef CONFIG_LRU_TASK_PAGE_AGING
+		for (int i = 0; i < NR_BLOOM_FILTERS; i++) {
 			bitmap_free(lruvec->mm_state.filters[i]);
 			lruvec->mm_state.filters[i] = NULL;
 		}
+#endif
 	}
 }
 #endif

From patchwork Sun Apr 2 10:42:36 2023
X-Patchwork-Submitter: "Aneesh Kumar K.V"
X-Patchwork-Id: 13197365
From: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
To: linux-mm@kvack.org, akpm@linux-foundation.org
Cc: Dave Hansen, Johannes Weiner, Matthew Wilcox, Mel Gorman, Yu Zhao,
    Wei Xu, Guru Anbalagane, "Aneesh Kumar K.V"
Subject: [RFC PATCH v1 3/7] mm: multi-gen LRU: avoid using generation stored in page flags for generation
Date: Sun, 2 Apr 2023 16:12:36 +0530
Message-Id: <20230402104240.1734931-4-aneesh.kumar@linux.ibm.com>
In-Reply-To: <20230402104240.1734931-1-aneesh.kumar@linux.ibm.com>
References: <20230402104240.1734931-1-aneesh.kumar@linux.ibm.com>
Some architectures can use other methods to determine the page access
count. In that case we may not really need the page flags to track the
generation: the generation can be derived directly from the
arch-supported access count values. Hence avoid using the page flags to
store the generation on such architectures.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
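For illustration, an architecture that maintains a per-page access count could
wire up the two hooks used in the diff below roughly as sketched here. This is
only a sketch: apart from the hook names themselves, the helper
arch_folio_access_count() is hypothetical and not part of this series.

/* hypothetical arch header, e.g. arch/<arch>/include/asm/page_aging.h -- illustration only */
#define arch_supports_page_access_count arch_supports_page_access_count
static inline bool arch_supports_page_access_count(void)
{
	return true;	/* hardware access counting is available */
}

#define arch_get_lru_gen_seq arch_get_lru_gen_seq
static inline unsigned long arch_get_lru_gen_seq(struct lruvec *lruvec,
						 struct folio *folio)
{
	int type = folio_is_file_lru(folio);
	/* hypothetical helper returning a hardware-maintained access count */
	unsigned long count = arch_folio_access_count(folio);
	unsigned long span = lruvec->lrugen.max_seq - lruvec->lrugen.min_seq[type];

	/* hotter folios map to younger (higher) sequence numbers */
	return lruvec->lrugen.min_seq[type] + min(count, span);
}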
---
 include/linux/page_aging.h | 26 ++++++++++++++++++++++++++
 mm/rmap.c                  |  4 +++-
 mm/vmscan.c                | 34 +++++++++++++++++++++++++---------
 3 files changed, 54 insertions(+), 10 deletions(-)
 create mode 100644 include/linux/page_aging.h

diff --git a/include/linux/page_aging.h b/include/linux/page_aging.h
new file mode 100644
index 000000000000..ab77f4578916
--- /dev/null
+++ b/include/linux/page_aging.h
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+
+#ifndef _LINUX_PAGE_AGING_H
+#define _LINUX_PAGE_AGING_H
+
+#ifndef arch_supports_page_access_count
+static inline bool arch_supports_page_access_count(void)
+{
+	return false;
+}
+#endif
+
+#ifdef CONFIG_LRU_GEN
+#ifndef arch_get_lru_gen_seq
+static inline unsigned long arch_get_lru_gen_seq(struct lruvec *lruvec, struct folio *folio)
+{
+	int type = folio_is_file_lru(folio);
+
+	return lruvec->lrugen.min_seq[type];
+}
+#endif
+#endif /* CONFIG_LRU_GEN */
+
+#endif
diff --git a/mm/rmap.c b/mm/rmap.c
index b616870a09be..1ef3cb8119d5 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -74,6 +74,7 @@
 #include
 #include
 #include
+#include <linux/page_aging.h>
 
 #include
@@ -825,7 +826,8 @@ static bool folio_referenced_one(struct folio *folio,
 	if (pvmw.pte) {
 		if (lru_gen_enabled() && pte_young(*pvmw.pte) &&
 		    !(vma->vm_flags & (VM_SEQ_READ | VM_RAND_READ))) {
-			lru_gen_look_around(&pvmw);
+			if (!arch_supports_page_access_count())
+				lru_gen_look_around(&pvmw);
 			referenced++;
 		}
diff --git a/mm/vmscan.c b/mm/vmscan.c
index f92b689af2a5..518d1482f6ab 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -62,6 +62,7 @@
 #include
 #include
 #include
+#include <linux/page_aging.h>
 
 #include "internal.h"
 #include "swap.h"
@@ -4934,7 +4935,7 @@ void lru_gen_look_around(struct page_vma_mapped_walk *pvmw)
 static bool sort_folio(struct lruvec *lruvec, struct folio *folio, int tier_idx)
 {
 	bool success;
-	int gen = folio_lru_gen(folio);
+	int gen;
 	int type = folio_is_file_lru(folio);
 	int zone = folio_zonenum(folio);
 	int delta = folio_nr_pages(folio);
@@ -4942,7 +4943,6 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, int tier_idx)
 	int tier = lru_tier_from_refs(refs);
 	struct lru_gen_struct *lrugen = &lruvec->lrugen;
 
-	VM_WARN_ON_ONCE_FOLIO(gen >= MAX_NR_GENS, folio);
 
 	/* unevictable */
 	if (!folio_evictable(folio)) {
@@ -4963,8 +4963,14 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, int tier_idx)
 		return true;
 	}
 
-	/* promoted */
+	if (!arch_supports_page_access_count()) {
+		gen = folio_lru_gen(folio);
+		VM_WARN_ON_ONCE_FOLIO(gen >= MAX_NR_GENS, folio);
+	} else
+		gen = lru_gen_from_seq(arch_get_lru_gen_seq(lruvec, folio));
+
 	if (gen != lru_gen_from_seq(lrugen->min_seq[type])) {
+		/* promote the folio */
 		list_move(&folio->lru, &lrugen->lists[gen][type][zone]);
 		return true;
 	}
@@ -5464,12 +5470,22 @@ bool lru_gen_add_folio(struct lruvec *lruvec, struct folio *folio, bool reclaiming)
 	 */
 	if (folio_test_active(folio))
 		seq = lrugen->max_seq;
-	else if ((type == LRU_GEN_ANON && !folio_test_swapcache(folio)) ||
-		 (folio_test_reclaim(folio) &&
-		  (folio_test_dirty(folio) || folio_test_writeback(folio))))
-		seq = lrugen->min_seq[type] + 1;
-	else
-		seq = lrugen->min_seq[type];
+	else {
+		/*
+		 * For a non active folio use the arch based
+		 * aging details to derive the MGLRU generation.
+		 */
+		seq = arch_get_lru_gen_seq(lruvec, folio);
+
+		if (seq == lrugen->min_seq[type]) {
+			if ((type == LRU_GEN_ANON &&
+			     !folio_test_swapcache(folio)) ||
+			    (folio_test_reclaim(folio) &&
+			     (folio_test_dirty(folio) ||
+			      folio_test_writeback(folio))))
+				seq = lrugen->min_seq[type] + 1;
+		}
+	}
 
 	gen = lru_gen_from_seq(seq);
 	flags = (gen + 1UL) << LRU_GEN_PGOFF;

From patchwork Sun Apr 2 10:42:37 2023
X-Patchwork-Submitter: "Aneesh Kumar K.V"
X-Patchwork-Id: 13197366
From: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
To: linux-mm@kvack.org, akpm@linux-foundation.org
Cc: Dave Hansen, Johannes Weiner, Matthew Wilcox, Mel Gorman, Yu Zhao,
    Wei Xu, Guru Anbalagane, "Aneesh Kumar K.V"
Subject: [RFC PATCH v1 4/7] mm: multi-gen LRU: support different page aging mechanism
Date: Sun, 2 Apr 2023 16:12:37 +0530
Message-Id: <20230402104240.1734931-5-aneesh.kumar@linux.ibm.com>
In-Reply-To: <20230402104240.1734931-1-aneesh.kumar@linux.ibm.com>
References: <20230402104240.1734931-1-aneesh.kumar@linux.ibm.com>
Some architectures can provide their own mechanism for determining the page
access count. For such architectures we may also want to do
architecture-specific work during aging, such as reclassifying generation
temperature. Add an arch hook to support that.
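As a rough illustration of the hook pattern being introduced (not part of this patch): an architecture with a hardware access-count source overrides the generic helpers by defining the corresponding macros in its asm/page_aging.h, while every other architecture keeps the default that falls back to __try_to_inc_max_seq(). A minimal, hypothetical override might look like the sketch below; the my_arch_* symbols are placeholders, only the hook names match the ones added by this series.

/*
 * Hypothetical <asm/page_aging.h> sketch, for illustration only.
 */
#define arch_supports_page_access_count arch_supports_page_access_count
static inline bool arch_supports_page_access_count(void)
{
	return my_arch_hw_aging_enabled;	/* placeholder arch flag */
}

#define arch_try_to_inc_max_seq arch_try_to_inc_max_seq
static inline bool arch_try_to_inc_max_seq(struct lruvec *lruvec, unsigned long max_seq,
					   int scan_priority, bool can_swap, bool force_scan)
{
	/* consult the hardware counters instead of the generic aging path */
	return my_arch_try_to_inc_max_seq(lruvec, max_seq, scan_priority,
					  can_swap, force_scan);
}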
Signed-off-by: Aneesh Kumar K.V
---
 include/linux/page_aging.h | 14 +++++++++++++-
 mm/vmscan.c                |  8 +++++++-
 2 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/include/linux/page_aging.h b/include/linux/page_aging.h
index ab77f4578916..d7c63ce0d824 100644
--- a/include/linux/page_aging.h
+++ b/include/linux/page_aging.h
@@ -11,6 +11,10 @@ static inline bool arch_supports_page_access_count(void)
 #endif
 
 #ifdef CONFIG_LRU_GEN
+bool __try_to_inc_max_seq(struct lruvec *lruvec,
+			  unsigned long max_seq, int scan_priority,
+			  bool can_swap, bool force_scan);
+
 #ifndef arch_get_lru_gen_seq
 static inline unsigned long arch_get_lru_gen_seq(struct lruvec *lruvec, struct folio *folio)
 {
@@ -19,8 +23,16 @@ static inline unsigned long arch_get_lru_gen_seq(struct lruvec *lruvec, struct f
 	return lruvec->lrugen.min_seq[type];
 }
 #endif
-#endif /* CONFIG_LRU_GEN */
 
+#ifndef arch_try_to_inc_max_seq
+static inline bool arch_try_to_inc_max_seq(struct lruvec *lruvec, unsigned long max_seq,
+					   int scan_priority, bool can_swap, bool force_scan)
+{
+	return __try_to_inc_max_seq(lruvec, max_seq, scan_priority, can_swap, force_scan);
+}
+#endif
+
+#endif /* CONFIG_LRU_GEN */
 #endif
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 518d1482f6ab..c8b98201f0b0 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4536,7 +4536,13 @@ bool __try_to_inc_max_seq(struct lruvec *lruvec,
 static bool try_to_inc_max_seq(struct lruvec *lruvec, unsigned long max_seq,
 			       int scan_priority, bool can_swap, bool force_scan)
 {
-	return __try_to_inc_max_seq(lruvec, max_seq, scan_priority, can_swap, force_scan);
+	if (arch_supports_page_access_count())
+		return arch_try_to_inc_max_seq(lruvec, max_seq,
+					       scan_priority, can_swap,
+					       force_scan);
+	else
+		return __try_to_inc_max_seq(lruvec, max_seq,
+					    scan_priority, can_swap, force_scan);
 }
 #endif

From patchwork Sun Apr 2 10:42:38 2023
X-Patchwork-Submitter: "Aneesh Kumar K.V"
X-Patchwork-Id: 13197367
From: "Aneesh Kumar K.V"
To: linux-mm@kvack.org, akpm@linux-foundation.org
Cc: Dave Hansen, Johannes Weiner, Matthew Wilcox, Mel Gorman, Yu Zhao, Wei Xu, Guru Anbalagane, "Aneesh Kumar K.V"
Subject: [RFC PATCH v1 5/7] powerpc/mm: Add page access count support
Date: Sun, 2 Apr 2023 16:12:38 +0530
Message-Id: <20230402104240.1734931-6-aneesh.kumar@linux.ibm.com>
In-Reply-To: <20230402104240.1734931-1-aneesh.kumar@linux.ibm.com>
References: <20230402104240.1734931-1-aneesh.kumar@linux.ibm.com>

The Hot-Cold Affinity (HCA) engine is a POWER10 facility that maintains a
32-bit counter for each page: every access to the page increments the
counter, and the value is decayed if the page is not accessed within a time
window.

This patch uses the HCA engine to provide a page access count on POWER10
and feeds it into the multi-gen LRU to classify each page into the right
LRU generation. The classification mechanism is simple: pages are sampled
from the youngest and oldest generations to find the maximum and minimum
page hotness in the lruvec, and that range is later used to sort every page
into a generation. The max/min hotness range is re-established during
aging, when new generations are created.
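To make the classification step concrete before the code (the helper below is not part of the patch): hotness_score() is essentially prev_count + count, and hca_map_lru_seq() bins that value between the lruvec's observed minimum and maximum hotness, one bucket per generation, with hotter pages landing in younger generations. A stand-alone sketch of that binning, with illustrative numbers:

/* Illustrative sketch of the binning done by hca_map_lru_seq(). */
static unsigned long map_hotness_to_seq(unsigned long hotness,
					unsigned long min_hotness,
					unsigned long max_hotness,
					unsigned long min_seq, int nr_gens)
{
	unsigned long span = max_hotness - min_hotness + 1;	/* inclusive range */
	unsigned long bucket = (span + nr_gens - 1) / nr_gens;	/* round up */

	/* hotter pages map to younger (higher seq) generations */
	return min_seq + (hotness - min_hotness) / bucket;
}

/*
 * Example: min_hotness = 10, max_hotness = 49, nr_gens = 4, min_seq = 100
 * => span = 40, bucket = 10; hotness 12 -> seq 100 (oldest),
 *    hotness 27 -> seq 101, hotness 49 -> seq 103 (youngest).
 */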
Not-yet-Signed-off-by: Aneesh Kumar K.V --- arch/powerpc/Kconfig | 10 + arch/powerpc/include/asm/hca.h | 49 +++++ arch/powerpc/include/asm/page_aging.h | 35 ++++ arch/powerpc/mm/Makefile | 1 + arch/powerpc/mm/hca.c | 275 ++++++++++++++++++++++++++ include/linux/mmzone.h | 5 + include/linux/page_aging.h | 5 + mm/Kconfig | 4 + mm/vmscan.c | 5 +- 9 files changed, 387 insertions(+), 2 deletions(-) create mode 100644 arch/powerpc/include/asm/hca.h create mode 100644 arch/powerpc/include/asm/page_aging.h create mode 100644 arch/powerpc/mm/hca.c diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig index 7a5f8dbfbdd0..71e8f23d9a96 100644 --- a/arch/powerpc/Kconfig +++ b/arch/powerpc/Kconfig @@ -1045,6 +1045,16 @@ config PPC_SECVAR_SYSFS read/write operations on these variables. Say Y if you have secure boot enabled and want to expose variables to userspace. +config PPC_HCA_HOTNESS + prompt "PowerPC HCA engine based page hotness" + def_bool y + select ARCH_HAS_PAGE_AGING + depends on PPC_BOOK3S_64 + help + Use HCA engine to find page hotness + + If unsure, say N. + endmenu config ISA_DMA_API diff --git a/arch/powerpc/include/asm/hca.h b/arch/powerpc/include/asm/hca.h new file mode 100644 index 000000000000..c0ed380594ca --- /dev/null +++ b/arch/powerpc/include/asm/hca.h @@ -0,0 +1,49 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ + +/* + * Configuration helpers for the Hot-Cold Affinity helper + */ + +#ifndef _ASM_POWERPC_HCA_H +#define _ASM_POWERPC_HCA_H + +#include + +struct hca_entry { + unsigned long count; + unsigned long prev_count; + uint8_t age; +}; + +static inline unsigned long hotness_score(struct hca_entry *entry) +{ + unsigned long hotness; + +#if 0 + /* + * Give more weightage to the prev_count because it got + * historical values. Take smaller part of count as we + * age more because prev_count would be a better approximation. + * We still need to consider count to accomidate spike in access. + * + 1 with age to handle age == 0. + */ + hotness = entry->prev_count + (entry->count / (entry->age + 1)); +#else + /* Considering we are not finding in real workloads pages with + * very high hotness a decay essentially move count value to prev count. + * At that point we could look at decay as period zeroing of the counter. + * I am finding better results with the below hotness score with real workloads. 
+ */ + hotness = entry->prev_count + entry->count; +#endif + + return hotness; +} + +extern void (*hca_backend_node_debugfs_init)(int numa_node, struct dentry *node_dentry); +extern void (*hca_backend_debugfs_init)(struct dentry *root_dentry); +extern int (*hca_pfn_entry)(unsigned long pfn, struct hca_entry *entry); +extern bool (*hca_node_enabled)(int numa_node); +extern int (*hca_clear_entry)(unsigned long pfn); + +#endif /* _ASM_POWERPC_HCA_H */ diff --git a/arch/powerpc/include/asm/page_aging.h b/arch/powerpc/include/asm/page_aging.h new file mode 100644 index 000000000000..0d98cd877308 --- /dev/null +++ b/arch/powerpc/include/asm/page_aging.h @@ -0,0 +1,35 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ + +#ifndef _ASM_POWERPC_PAGE_AGING_H_ +#define _ASM_POWERPC_PAGE_AGING_H_ + +#ifdef CONFIG_LRU_GEN +extern bool hca_lru_age; +unsigned long hca_map_lru_seq(struct lruvec *lruvec, struct folio *folio); +bool hca_try_to_inc_max_seq(struct lruvec *lruvec, unsigned long max_seq, + int scan_priority, bool can_swap, bool force_scan); + +#define arch_supports_page_access_count arch_supports_page_access_count +static inline bool arch_supports_page_access_count(void) +{ + return hca_lru_age; +} + +#define arch_try_to_inc_max_seq arch_try_to_inc_max_seq +static inline bool arch_try_to_inc_max_seq(struct lruvec *lruvec, unsigned long max_seq, + int scan_priority, bool can_swap, + bool force_scan) +{ + return hca_try_to_inc_max_seq(lruvec, max_seq, scan_priority, + can_swap, force_scan); + +} + +#define arch_get_lru_gen_seq arch_get_lru_gen_seq +static inline unsigned long arch_get_lru_gen_seq(struct lruvec *lruvec, struct folio *folio) +{ + return hca_map_lru_seq(lruvec, folio); +} + +#endif /* CONFIG_LRU_GEN */ +#endif diff --git a/arch/powerpc/mm/Makefile b/arch/powerpc/mm/Makefile index 503a6e249940..30bd4ad4aff0 100644 --- a/arch/powerpc/mm/Makefile +++ b/arch/powerpc/mm/Makefile @@ -19,3 +19,4 @@ obj-$(CONFIG_NOT_COHERENT_CACHE) += dma-noncoherent.o obj-$(CONFIG_PPC_COPRO_BASE) += copro_fault.o obj-$(CONFIG_PTDUMP_CORE) += ptdump/ obj-$(CONFIG_KASAN) += kasan/ +obj-$(CONFIG_PPC_HCA_HOTNESS) += hca.o diff --git a/arch/powerpc/mm/hca.c b/arch/powerpc/mm/hca.c new file mode 100644 index 000000000000..af6de4492ead --- /dev/null +++ b/arch/powerpc/mm/hca.c @@ -0,0 +1,275 @@ +// SPDX-License-Identifier: GPL-2.0-or-later + +#include +#include +#include +#include +#include + +#include + +bool hca_lru_age; +static struct dentry *hca_debugfs_root; +/* + * percentage of pfns to scan from each lurvec list to determine max/min hotness + */ +static ulong scan_pfn_ratio __read_mostly = 20; +/* + * Millisec to wait/skip before starting another random scan + */ +static ulong scan_skip_msec __read_mostly = 60; + +/* backend callbacks */ +void (*hca_backend_node_debugfs_init)(int numa_node, struct dentry *node_dentry); +void (*hca_backend_debugfs_init)(struct dentry *root_dentry); +int (*hca_pfn_entry)(unsigned long pfn, struct hca_entry *entry); +bool (*hca_node_enabled)(int numa_node); +int (*hca_clear_entry)(unsigned long pfn); + +static int parse_hca_age(char *arg) +{ + return strtobool(arg, &hca_lru_age); +} +early_param("hca_age", parse_hca_age); + +static inline int folio_hca_entry(struct folio *folio, struct hca_entry *entry) +{ + return hca_pfn_entry(folio_pfn(folio), entry); +} + +#ifdef CONFIG_LRU_GEN +static inline int get_nr_gens(struct lruvec *lruvec, int type) +{ + return lruvec->lrugen.max_seq - lruvec->lrugen.min_seq[type] + 1; +} + +/* FIXME!! 
*/ +static inline bool folio_evictable(struct folio *folio) +{ + bool ret; + + /* Prevent address_space of inode and swap cache from being freed */ + rcu_read_lock(); + ret = !mapping_unevictable(folio_mapping(folio)) && + !folio_test_mlocked(folio); + rcu_read_unlock(); + return ret; +} + +static void restablish_hotness_range(struct lruvec *lruvec) +{ + bool youngest = true; + int gen, nr_pages; + unsigned long seq; + int new_scan_pfn_count; + struct lru_gen_struct *lrugen = &lruvec->lrugen; + unsigned long current_hotness, max_hotness = 0, min_hotness = 0; + + if (time_is_after_jiffies64(lrugen->next_span_scan)) + return; + + spin_lock_irq(&lruvec->lru_lock); + +retry: + for (int type = 0; type < ANON_AND_FILE; type++) { + for (int zone = 0; zone < MAX_NR_ZONES; zone++) { + int index = 0; + struct list_head *head; + struct folio *folio; + struct hca_entry entry; + + if (youngest) + seq = lrugen->max_seq; + else + seq = lrugen->min_seq[type]; + gen = lru_gen_from_seq(seq); + nr_pages = lrugen->nr_pages[gen][type][zone]; + + new_scan_pfn_count = nr_pages * scan_pfn_ratio/100; + if (!new_scan_pfn_count) + new_scan_pfn_count = nr_pages; + + head = &lrugen->lists[gen][type][zone]; + list_for_each_entry(folio, head, lru) { + + if (unlikely(!folio_evictable(folio))) + continue; + + if (folio_hca_entry(folio, &entry)) + continue; + + if (index++ > new_scan_pfn_count) + break; + + current_hotness = hotness_score(&entry); + /* If the page didn't see any access, skip it */ + if (!current_hotness) + continue; + /* + * Let's make sure we at least wait 1 decay + * updates before looking at this pfn for + * max/min computation. + */ + if (entry.age < 1) + continue; + + if (current_hotness > max_hotness) + max_hotness = (current_hotness + max_hotness) / 2; + else if ((current_hotness < min_hotness) || !min_hotness) + min_hotness = (current_hotness + min_hotness) / 2; + else if ((current_hotness - min_hotness) < (max_hotness - min_hotness) / 2) + min_hotness = (current_hotness + min_hotness) / 2; + else + max_hotness = (current_hotness + max_hotness) / 2; + + } + + } + } + if (youngest) { + /* compute with oldest generation */ + youngest = false; + goto retry; + } + lrugen->next_span_scan = get_jiffies_64() + msecs_to_jiffies(scan_skip_msec); + if (min_hotness) { + lrugen->max_hotness = max_hotness; + lrugen->min_hotness = min_hotness; + } + + spin_unlock_irq(&lruvec->lru_lock); +} + +/* Return Multigen LRU generation based on folio hotness */ +unsigned long hca_map_lru_seq(struct lruvec *lruvec, struct folio *folio) +{ + unsigned long seq; + int type, nr_gens; + struct lru_gen_struct *lrugen = &lruvec->lrugen; + struct hca_entry folio_entry; + unsigned long hotness, seq_range; + + type = folio_is_file_lru(folio); + if (!hca_lru_age || folio_hca_entry(folio, &folio_entry)) + /* return youngest generation ? */ + return lrugen->min_seq[type]; + + hotness = hotness_score(&folio_entry); + /* The page didn't see any access, return oldest generation */ + if (!hotness) + return lrugen->min_seq[type]; + + /* Also adjust based on current value. */ + if (hotness > lrugen->max_hotness) { + lrugen->max_hotness = (hotness + lrugen->max_hotness) / 2; + return lrugen->max_seq; + } else if (hotness < lrugen->min_hotness) { + lrugen->min_hotness = (hotness + lrugen->min_hotness) / 2; + return lrugen->min_seq[type]; + } + + /* + * Convert the max and min hotness into 4 ranges for sequence. + * Then place our current hotness into one of these range. + * We use the range number as an increment factor for generation. 
+ */ + /* inclusive range min and max */ + seq_range = lrugen->max_hotness - lrugen->min_hotness + 1; + nr_gens = get_nr_gens(lruvec, type); + seq_range = (seq_range + nr_gens - 1)/nr_gens; + + /* higher the hotness younger the generation */ + seq = lrugen->min_seq[type] + ((hotness - lrugen->min_hotness)/seq_range); + + return seq; +} + +bool hca_try_to_inc_max_seq(struct lruvec *lruvec, + unsigned long max_seq, int scan_priority, + bool can_swap, bool force_scan) + +{ + bool success = false; + struct lru_gen_struct *lrugen = &lruvec->lrugen; + + VM_WARN_ON_ONCE(max_seq > READ_ONCE(lrugen->max_seq)); + + /* see the comment in iterate_mm_list() */ + if (lruvec->seq_update_progress) + success = false; + else { + spin_lock_irq(&lruvec->lru_lock); + + if (max_seq != lrugen->max_seq) + goto done; + + if (lruvec->seq_update_progress) + goto done; + + success = true; + lruvec->seq_update_progress = true; +done: + spin_unlock_irq(&lruvec->lru_lock); + } + if (!success) { + if (scan_priority <= DEF_PRIORITY - 2) + wait_event_killable(lruvec->seq_update_wait, + max_seq < READ_ONCE(lrugen->max_seq)); + + return max_seq < READ_ONCE(lrugen->max_seq); + } + + /* + * With hardware aging use the counters to update + * lruvec max and min hotness. + */ + restablish_hotness_range(lruvec); + + VM_WARN_ON_ONCE(max_seq != READ_ONCE(lrugen->max_seq)); + inc_max_seq(lruvec, can_swap, force_scan); + /* either this sees any waiters or they will see updated max_seq */ + if (wq_has_sleeper(&lruvec->seq_update_wait)) + wake_up_all(&lruvec->seq_update_wait); + + return success; +} +#endif /* CONFIG_LRU_GEN */ + +static void hca_debugfs_init(void) +{ + int node; + char name[32]; + struct dentry *node_dentry; + + hca_debugfs_root = debugfs_create_dir("hca", arch_debugfs_dir); + + for_each_online_node(node) { + snprintf(name, sizeof(name), "node%u", node); + node_dentry = debugfs_create_dir(name, hca_debugfs_root); + + hca_backend_node_debugfs_init(node, node_dentry); + } + + debugfs_create_ulong("scan-pfn-ratio", 0600, hca_debugfs_root, + &scan_pfn_ratio); + debugfs_create_ulong("scan-skip-msec", 0600, hca_debugfs_root, + &scan_skip_msec); + debugfs_create_bool("hca_lru_age", 0600, hca_debugfs_root, + &hca_lru_age); + + /* Now create backend debugs */ + hca_backend_debugfs_init(hca_debugfs_root); +} + +static int __init hca_init(void) +{ + if (!hca_backend_debugfs_init) { + pr_info("No HCA device registered. 
Disabling hca lru gen\n");
+		hca_lru_age = false;
+	}
+
+	hca_debugfs_init();
+	return 0;
+}
+
+late_initcall(hca_init);
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 0bcc5d88239a..934ad587a558 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -425,6 +425,11 @@ struct lru_gen_struct {
 	atomic_long_t evicted[NR_HIST_GENS][ANON_AND_FILE][MAX_NR_TIERS];
 	atomic_long_t refaulted[NR_HIST_GENS][ANON_AND_FILE][MAX_NR_TIERS];
 	/* whether the multi-gen LRU is enabled */
+#ifndef CONFIG_LRU_TASK_PAGE_AGING
+	unsigned long max_hotness;
+	unsigned long min_hotness;
+	u64 next_span_scan;
+#endif
 	bool enabled;
 };
diff --git a/include/linux/page_aging.h b/include/linux/page_aging.h
index d7c63ce0d824..074c876f17e1 100644
--- a/include/linux/page_aging.h
+++ b/include/linux/page_aging.h
@@ -3,6 +3,10 @@
 #ifndef _LINUX_PAGE_AGING_H
 #define _LINUX_PAGE_AGING_H
 
+#ifdef CONFIG_ARCH_HAS_PAGE_AGING
+#include <asm/page_aging.h>
+#endif
+
 #ifndef arch_supports_page_access_count
 static inline bool arch_supports_page_access_count(void)
 {
@@ -14,6 +18,7 @@ static inline bool arch_supports_page_access_count(void)
 bool __try_to_inc_max_seq(struct lruvec *lruvec,
 			  unsigned long max_seq, int scan_priority,
 			  bool can_swap, bool force_scan);
+void inc_max_seq(struct lruvec *lruvec, bool can_swap, bool force_scan);
 
 #ifndef arch_get_lru_gen_seq
 static inline unsigned long arch_get_lru_gen_seq(struct lruvec *lruvec, struct folio *folio)
diff --git a/mm/Kconfig b/mm/Kconfig
index ff7b209dec05..493709ac758e 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -1181,6 +1181,10 @@ config LRU_GEN_STATS
 	  from evicted generations for debugging purpose.
 
 	  This option has a per-memcg and per-node memory overhead.
+
+config ARCH_HAS_PAGE_AGING
+	bool
+
 # }
 
 source "mm/damon/Kconfig"
diff --git a/mm/vmscan.c b/mm/vmscan.c
index c8b98201f0b0..a5f6238b3926 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4362,7 +4362,7 @@ static bool try_to_inc_min_seq(struct lruvec *lruvec, bool can_swap)
 	return success;
 }
 
-static void inc_max_seq(struct lruvec *lruvec, bool can_swap, bool force_scan)
+void inc_max_seq(struct lruvec *lruvec, bool can_swap, bool force_scan)
 {
 	int prev, next;
 	int type, zone;
@@ -4420,6 +4420,7 @@ static void inc_max_seq(struct lruvec *lruvec, bool can_swap, bool force_scan)
 #endif
 	spin_unlock_irq(&lruvec->lru_lock);
 }
+
 #ifdef CONFIG_LRU_TASK_PAGE_AGING
 static bool try_to_inc_max_seq(struct lruvec *lruvec, unsigned long max_seq,
 			       int scan_priority, bool can_swap, bool force_scan)
@@ -5861,7 +5862,7 @@ static int lru_gen_seq_show(struct seq_file *m, void *v)
 		seq_printf(m, "memcg %5hu %s\n", mem_cgroup_id(memcg), path);
 	}
 
-	seq_printf(m, " node %5d\n", nid);
+	seq_printf(m, " node %5d max_hotness %ld min_hotness %ld\n", nid, lrugen->max_hotness, lrugen->min_hotness);
 
 	if (!full)
 		seq = min_seq[LRU_GEN_ANON];

From patchwork Sun Apr 2 10:42:39 2023
X-Patchwork-Submitter: "Aneesh Kumar K.V"
X-Patchwork-Id: 13197368
From: "Aneesh Kumar K.V"
To: linux-mm@kvack.org, akpm@linux-foundation.org
Cc: Dave Hansen, Johannes Weiner, Matthew Wilcox, Mel Gorman, Yu Zhao, Wei Xu, Guru Anbalagane, "Aneesh Kumar K.V"
Subject: [RFC PATCH v1 6/7] powerpc/mm: Clear page access count on allocation
Date: Sun, 2 Apr 2023 16:12:39 +0530
Message-Id: <20230402104240.1734931-7-aneesh.kumar@linux.ibm.com>
In-Reply-To: <20230402104240.1734931-1-aneesh.kumar@linux.ibm.com>
References: <20230402104240.1734931-1-aneesh.kumar@linux.ibm.com>
Clear the HCA access count value on allocation.

Signed-off-by: Aneesh Kumar K.V
---
 arch/powerpc/include/asm/page.h |  5 +++++
 arch/powerpc/mm/hca.c           | 13 +++++++++++++
 2 files changed, 18 insertions(+)

diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index edf1dd1b0ca9..515423744193 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -319,6 +319,11 @@ void arch_free_page(struct page *page, int order);
 #define HAVE_ARCH_FREE_PAGE
 #endif
 
+#ifdef CONFIG_PPC_HCA_HOTNESS
+void arch_alloc_page(struct page *page, int order);
+#define HAVE_ARCH_ALLOC_PAGE
+#endif
+
 struct vm_area_struct;
 
 extern unsigned long kernstart_virt_addr;
diff --git a/arch/powerpc/mm/hca.c b/arch/powerpc/mm/hca.c
index af6de4492ead..1e79ea89df1b 100644
--- a/arch/powerpc/mm/hca.c
+++ b/arch/powerpc/mm/hca.c
@@ -261,6 +261,19 @@ static void hca_debugfs_init(void)
 	hca_backend_debugfs_init(hca_debugfs_root);
 }
 
+void arch_alloc_page(struct page *page, int order)
+{
+	int i;
+
+	if (!hca_clear_entry)
+		return;
+
+	/* zero the counter value when we allocate the page */
+	for (i = 0; i < (1 << order); i++)
+		hca_clear_entry(page_to_pfn(page + i));
+}
+EXPORT_SYMBOL(arch_alloc_page);
+
 static int __init hca_init(void)
 {
 	if (!hca_backend_debugfs_init) {
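For context, a simplified sketch (not part of the patch) of when the new hook runs: with HAVE_ARCH_ALLOC_PAGE defined, the core page allocator is expected to call arch_alloc_page() from its post-allocation hook for every page it hands out, so the HCA counters of a freshly allocated range start from zero instead of carrying over the previous owner's access history. Roughly:

/*
 * Heavily simplified caller-side sketch; the real post-allocation hook in
 * mm/page_alloc.c also handles poisoning, KASAN, init-on-alloc, etc.
 */
static void post_alloc_hook_sketch(struct page *page, unsigned int order)
{
	arch_alloc_page(page, order);	/* no-op unless the arch provides one */
	/* ... remaining post-allocation setup elided ... */
}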
From patchwork Sun Apr 2 10:42:40 2023
X-Patchwork-Submitter: "Aneesh Kumar K.V"
X-Patchwork-Id: 13197369
From: "Aneesh Kumar K.V"
To: linux-mm@kvack.org, akpm@linux-foundation.org
Cc: Dave Hansen, Johannes Weiner, Matthew Wilcox, Mel Gorman, Yu Zhao, Wei Xu, Guru Anbalagane, "Aneesh Kumar K.V"
Subject: [RFC PATCH v1 7/7] mm: multi-gen LRU: Shrink folio list without checking for page table reference
Date: Sun, 2 Apr 2023 16:12:40 +0530
Message-Id: <20230402104240.1734931-8-aneesh.kumar@linux.ibm.com>
In-Reply-To: <20230402104240.1734931-1-aneesh.kumar@linux.ibm.com>
References: <20230402104240.1734931-1-aneesh.kumar@linux.ibm.com>
If the architecture supports page access counts, the LRU has already been
sorted using those counts before reclaimable pages are collected, so
reclaim them unconditionally in shrink_folio_list().

Signed-off-by: Aneesh Kumar K.V
---
 mm/vmscan.c | 25 ++++++++-----------------
 1 file changed, 8 insertions(+), 17 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index a5f6238b3926..d9eb6a4d2975 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -5242,7 +5242,8 @@ static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swap
 	if (list_empty(&list))
 		return scanned;
 retry:
-	reclaimed = shrink_folio_list(&list, pgdat, sc, &stat, false);
+	reclaimed = shrink_folio_list(&list, pgdat, sc,
+				      &stat, arch_supports_page_access_count());
 	sc->nr_reclaimed += reclaimed;
 
 	list_for_each_entry_safe_reverse(folio, next, &list, lru) {
@@ -5477,22 +5478,12 @@ bool lru_gen_add_folio(struct lruvec *lruvec, struct folio *folio, bool reclaimi
 	 */
 	if (folio_test_active(folio))
 		seq = lrugen->max_seq;
-	else {
-		/*
-		 * For a non active folio use the arch based
-		 * aging details to derive the MGLRU generation.
-		 */
-		seq = arch_get_lru_gen_seq(lruvec, folio);
-
-		if (seq == lrugen->min_seq[type]) {
-			if ((type == LRU_GEN_ANON &&
-			     !folio_test_swapcache(folio)) ||
-			    (folio_test_reclaim(folio) &&
-			     (folio_test_dirty(folio) ||
-			      folio_test_writeback(folio))))
-				seq = lrugen->min_seq[type] + 1;
-		}
-	}
+	else if ((type == LRU_GEN_ANON && !folio_test_swapcache(folio)) ||
+		 (folio_test_reclaim(folio) &&
+		  (folio_test_dirty(folio) || folio_test_writeback(folio))))
+		seq = lrugen->min_seq[type] + 1;
+	else
+		seq = lrugen->min_seq[type];
 
 	gen = lru_gen_from_seq(seq);
 	flags = (gen + 1UL) << LRU_GEN_PGOFF;
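The boolean added to the shrink_folio_list() call above is its ignore_references parameter: when it is true, the per-folio rmap references check is skipped and every folio on the list is treated as a reclaim candidate. A condensed sketch of the relevant logic inside shrink_folio_list(), simplified for illustration:

	/* simplified from the per-folio loop in shrink_folio_list() */
	if (ignore_references)
		references = FOLIOREF_RECLAIM;	/* arch already sorted by hotness */
	else
		references = folio_check_references(folio, sc);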