From patchwork Tue Mar 22 21:41:35 2022
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 12789098
Date: Tue, 22 Mar 2022 14:41:35 -0700
From: Andrew Morton
In-Reply-To: <20220322143803.04a5e59a07e48284f196a2f9@linux-foundation.org>
Subject: [patch 060/227] mm: list_lru: rename list_lru_per_memcg to list_lru_memcg
Message-Id: <20220322214135.90FF6C340EC@smtp.kernel.org>
From: Muchun Song
Subject: mm: list_lru: rename list_lru_per_memcg to list_lru_memcg

The name list_lru_memcg was already taken, but the previous commit freed
it up.  Rename list_lru_per_memcg to list_lru_memcg, since the shorter
name is sufficient.

Link: https://lkml.kernel.org/r/20220228122126.37293-16-songmuchun@bytedance.com
Signed-off-by: Muchun Song
Cc: Alex Shi
Cc: Anna Schumaker
Cc: Chao Yu
Cc: Dave Chinner
Cc: Fam Zheng
Cc: Jaegeuk Kim
Cc: Johannes Weiner
Cc: Kari Argillander
Cc: Matthew Wilcox (Oracle)
Cc: Michal Hocko
Cc: Qi Zheng
Cc: Roman Gushchin
Cc: Shakeel Butt
Cc: Theodore Ts'o
Cc: Trond Myklebust
Cc: Vladimir Davydov
Cc: Vlastimil Babka
Cc: Wei Yang
Cc: Xiongchun Duan
Cc: Yang Shi
Signed-off-by: Andrew Morton
---

 include/linux/list_lru.h |    2 +-
 mm/list_lru.c            |   18 +++++++++---------
 2 files changed, 10 insertions(+), 10 deletions(-)

--- a/include/linux/list_lru.h~mm-list_lru-rename-list_lru_per_memcg-to-list_lru_memcg
+++ a/include/linux/list_lru.h
@@ -32,7 +32,7 @@ struct list_lru_one {
 	long nr_items;
 };
 
-struct list_lru_per_memcg {
+struct list_lru_memcg {
 	struct rcu_head rcu;
 	/* array of per cgroup per node lists, indexed by node id */
 	struct list_lru_one node[];
--- a/mm/list_lru.c~mm-list_lru-rename-list_lru_per_memcg-to-list_lru_memcg
+++ a/mm/list_lru.c
@@ -53,7 +53,7 @@ static inline struct list_lru_one *
 list_lru_from_memcg_idx(struct list_lru *lru, int nid, int idx)
 {
 	if (list_lru_memcg_aware(lru) && idx >= 0) {
-		struct list_lru_per_memcg *mlru = xa_load(&lru->xa, idx);
+		struct list_lru_memcg *mlru = xa_load(&lru->xa, idx);
 
 		return mlru ? &mlru->node[nid] : NULL;
 	}
@@ -306,7 +306,7 @@ unsigned long list_lru_walk_node(struct
 
 #ifdef CONFIG_MEMCG_KMEM
 	if (*nr_to_walk > 0 && list_lru_memcg_aware(lru)) {
-		struct list_lru_per_memcg *mlru;
+		struct list_lru_memcg *mlru;
 		unsigned long index;
 
 		xa_for_each(&lru->xa, index, mlru) {
@@ -335,10 +335,10 @@ static void init_one_lru(struct list_lru
 }
 
 #ifdef CONFIG_MEMCG_KMEM
-static struct list_lru_per_memcg *memcg_init_list_lru_one(gfp_t gfp)
+static struct list_lru_memcg *memcg_init_list_lru_one(gfp_t gfp)
 {
 	int nid;
-	struct list_lru_per_memcg *mlru;
+	struct list_lru_memcg *mlru;
 
 	mlru = kmalloc(struct_size(mlru, node, nr_node_ids), gfp);
 	if (!mlru)
@@ -352,7 +352,7 @@ static struct list_lru_per_memcg *memcg_
 
 static void memcg_list_lru_free(struct list_lru *lru, int src_idx)
 {
-	struct list_lru_per_memcg *mlru = xa_erase_irq(&lru->xa, src_idx);
+	struct list_lru_memcg *mlru = xa_erase_irq(&lru->xa, src_idx);
 
 	/*
 	 * The __list_lru_walk_one() can walk the list of this node.
@@ -374,7 +374,7 @@ static inline void memcg_init_list_lru(s
 static void memcg_destroy_list_lru(struct list_lru *lru)
 {
 	XA_STATE(xas, &lru->xa, 0);
-	struct list_lru_per_memcg *mlru;
+	struct list_lru_memcg *mlru;
 
 	if (!list_lru_memcg_aware(lru))
 		return;
@@ -475,7 +475,7 @@ int memcg_list_lru_alloc(struct mem_cgro
 	int i;
 	unsigned long flags;
 	struct list_lru_memcg_table {
-		struct list_lru_per_memcg *mlru;
+		struct list_lru_memcg *mlru;
 		struct mem_cgroup *memcg;
 	} *table;
 	XA_STATE(xas, &lru->xa, 0);
@@ -491,7 +491,7 @@ int memcg_list_lru_alloc(struct mem_cgro
 	/*
 	 * Because the list_lru can be reparented to the parent cgroup's
 	 * list_lru, we should make sure that this cgroup and all its
-	 * ancestors have allocated list_lru_per_memcg.
+	 * ancestors have allocated list_lru_memcg.
 	 */
 	for (i = 0; memcg; memcg = parent_mem_cgroup(memcg), i++) {
 		if (memcg_list_lru_allocated(memcg, lru))
@@ -510,7 +510,7 @@ int memcg_list_lru_alloc(struct mem_cgro
 	xas_lock_irqsave(&xas, flags);
 	while (i--) {
 		int index = READ_ONCE(table[i].memcg->kmemcg_id);
-		struct list_lru_per_memcg *mlru = table[i].mlru;
+		struct list_lru_memcg *mlru = table[i].mlru;
 
 		xas_set(&xas, index);
 retry:
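
The structure being renamed is a flexible-array container: one struct
list_lru_one per NUMA node, carved out of a single allocation sized with
struct_size().  For readers unfamiliar with that pattern, below is a
minimal userspace sketch of what memcg_init_list_lru_one() does.  It is
illustrative only, not kernel code: malloc() stands in for kmalloc(), the
open-coded sizing stands in for struct_size(), and the rcu_placeholder
member, nr_node_ids parameter, and init_list_lru_one() name are
hypothetical stand-ins for their kernel counterparts.

#include <stdio.h>
#include <stdlib.h>

/* Simplified stand-ins for the kernel structures this patch renames. */
struct list_lru_one {
	long nr_items;
};

struct list_lru_memcg {
	void *rcu_placeholder;	/* stands in for struct rcu_head rcu */
	/* array of per cgroup per node lists, indexed by node id */
	struct list_lru_one node[];
};

/*
 * Userspace analogue of memcg_init_list_lru_one(): allocate the header
 * plus nr_node_ids trailing array elements in one block, the same sizing
 * that struct_size(mlru, node, nr_node_ids) performs in the kernel.
 */
static struct list_lru_memcg *init_list_lru_one(int nr_node_ids)
{
	struct list_lru_memcg *mlru;
	int nid;

	mlru = malloc(sizeof(*mlru) +
		      (size_t)nr_node_ids * sizeof(mlru->node[0]));
	if (!mlru)
		return NULL;

	for (nid = 0; nid < nr_node_ids; nid++)
		mlru->node[nid].nr_items = 0;
	return mlru;
}

int main(void)
{
	struct list_lru_memcg *mlru = init_list_lru_one(4);

	if (!mlru)
		return 1;
	/* Per-node list for node 3, as list_lru_from_memcg_idx() returns it. */
	printf("node[3].nr_items = %ld\n", mlru->node[3].nr_items);
	free(mlru);
	return 0;
}

Keeping the per-node lists in one block means the whole per-memcg array
can be found with a single xa_load() and released (RCU-deferred, in the
kernel) as a single object.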