From patchwork Mon Feb 28 12:21:11 2022
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v6 01/16] mm: list_lru: transpose the array of per-node per-memcg lru lists
Date: Mon, 28 Feb 2022 20:21:11 +0800
Message-Id: <20220228122126.37293-2-songmuchun@bytedance.com>
In-Reply-To: <20220228122126.37293-1-songmuchun@bytedance.com>
References: <20220228122126.37293-1-songmuchun@bytedance.com>

The current scheme of maintaining per-node per-memcg lru lists looks like:

    struct list_lru {
        struct list_lru_node *node;            (for each node)
            struct list_lru_memcg *memcg_lrus;
                struct list_lru_one *lru[];    (for each memcg)
    }

By effectively transposing the two-dimensional array of list_lru_one structures (per-node per-memcg => per-memcg per-node), it is possible to save some memory and simplify the alloc/dealloc paths. The new scheme looks like:

    struct list_lru {
        struct list_lru_memcg *mlrus;
            struct list_lru_per_memcg *mlru[];  (for each memcg)
                struct list_lru_one node[0];    (for each node)
    }

The memory savings come not only from dropping the per-node 'struct rcu_head' but also from the pointer arrays used to store the pointers to 'struct list_lru_one'. There is one such array per node, and its size is 8 (a pointer) * num_memcgs, so the total size of the arrays is 8 * num_nodes * memcg_nr_cache_ids. After this patch, the size becomes 8 * memcg_nr_cache_ids.
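To make the last paragraph concrete, here is a standalone userspace sketch (not part of the patch; the node count and memcg id-space size are assumed example values) of the pointer-array arithmetic:

    /* Sketch of the pointer-array overhead before and after the transposition. */
    #include <stdio.h>

    int main(void)
    {
            long num_nodes = 4;              /* assumed NUMA node count */
            long memcg_nr_cache_ids = 2048;  /* assumed memcg id-space size */
            long ptr_size = 8;               /* sizeof(void *) on 64-bit */

            /* Old layout: each node carries its own array of per-memcg pointers. */
            long before = ptr_size * num_nodes * memcg_nr_cache_ids;
            /* New layout: one array of per-memcg pointers shared by all nodes. */
            long after = ptr_size * memcg_nr_cache_ids;

            printf("before: %ld bytes, after: %ld bytes, saved: %ld bytes\n",
                   before, after, before - after);
            return 0;
    }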
Signed-off-by: Muchun Song Acked-by: Johannes Weiner --- include/linux/list_lru.h | 17 ++-- mm/list_lru.c | 206 +++++++++++++++++------------------------------ 2 files changed, 86 insertions(+), 137 deletions(-) diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h index 1b5fceb565df..729a27b6ff53 100644 --- a/include/linux/list_lru.h +++ b/include/linux/list_lru.h @@ -31,10 +31,15 @@ struct list_lru_one { long nr_items; }; +struct list_lru_per_memcg { + /* array of per cgroup per node lists, indexed by node id */ + struct list_lru_one node[0]; +}; + struct list_lru_memcg { - struct rcu_head rcu; + struct rcu_head rcu; /* array of per cgroup lists, indexed by memcg_cache_id */ - struct list_lru_one *lru[]; + struct list_lru_per_memcg *mlru[]; }; struct list_lru_node { @@ -42,11 +47,7 @@ struct list_lru_node { spinlock_t lock; /* global list, used for the root cgroup in cgroup aware lrus */ struct list_lru_one lru; -#ifdef CONFIG_MEMCG_KMEM - /* for cgroup aware lrus points to per cgroup lists, otherwise NULL */ - struct list_lru_memcg __rcu *memcg_lrus; -#endif - long nr_items; + long nr_items; } ____cacheline_aligned_in_smp; struct list_lru { @@ -55,6 +56,8 @@ struct list_lru { struct list_head list; int shrinker_id; bool memcg_aware; + /* for cgroup aware lrus points to per cgroup lists, otherwise NULL */ + struct list_lru_memcg __rcu *mlrus; #endif }; diff --git a/mm/list_lru.c b/mm/list_lru.c index 0cd5e89ca063..7d1356241aa8 100644 --- a/mm/list_lru.c +++ b/mm/list_lru.c @@ -49,35 +49,37 @@ static int lru_shrinker_id(struct list_lru *lru) } static inline struct list_lru_one * -list_lru_from_memcg_idx(struct list_lru_node *nlru, int idx) +list_lru_from_memcg_idx(struct list_lru *lru, int nid, int idx) { - struct list_lru_memcg *memcg_lrus; + struct list_lru_memcg *mlrus; + struct list_lru_node *nlru = &lru->node[nid]; + /* * Either lock or RCU protects the array of per cgroup lists - * from relocation (see memcg_update_list_lru_node). + * from relocation (see memcg_update_list_lru). 
*/ - memcg_lrus = rcu_dereference_check(nlru->memcg_lrus, - lockdep_is_held(&nlru->lock)); - if (memcg_lrus && idx >= 0) - return memcg_lrus->lru[idx]; + mlrus = rcu_dereference_check(lru->mlrus, lockdep_is_held(&nlru->lock)); + if (mlrus && idx >= 0) + return &mlrus->mlru[idx]->node[nid]; return &nlru->lru; } static inline struct list_lru_one * -list_lru_from_kmem(struct list_lru_node *nlru, void *ptr, +list_lru_from_kmem(struct list_lru *lru, int nid, void *ptr, struct mem_cgroup **memcg_ptr) { + struct list_lru_node *nlru = &lru->node[nid]; struct list_lru_one *l = &nlru->lru; struct mem_cgroup *memcg = NULL; - if (!nlru->memcg_lrus) + if (!lru->mlrus) goto out; memcg = mem_cgroup_from_obj(ptr); if (!memcg) goto out; - l = list_lru_from_memcg_idx(nlru, memcg_cache_id(memcg)); + l = list_lru_from_memcg_idx(lru, nid, memcg_cache_id(memcg)); out: if (memcg_ptr) *memcg_ptr = memcg; @@ -103,18 +105,18 @@ static inline bool list_lru_memcg_aware(struct list_lru *lru) } static inline struct list_lru_one * -list_lru_from_memcg_idx(struct list_lru_node *nlru, int idx) +list_lru_from_memcg_idx(struct list_lru *lru, int nid, int idx) { - return &nlru->lru; + return &lru->node[nid].lru; } static inline struct list_lru_one * -list_lru_from_kmem(struct list_lru_node *nlru, void *ptr, +list_lru_from_kmem(struct list_lru *lru, int nid, void *ptr, struct mem_cgroup **memcg_ptr) { if (memcg_ptr) *memcg_ptr = NULL; - return &nlru->lru; + return &lru->node[nid].lru; } #endif /* CONFIG_MEMCG_KMEM */ @@ -127,7 +129,7 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item) spin_lock(&nlru->lock); if (list_empty(item)) { - l = list_lru_from_kmem(nlru, item, &memcg); + l = list_lru_from_kmem(lru, nid, item, &memcg); list_add_tail(item, &l->list); /* Set shrinker bit if the first element was added */ if (!l->nr_items++) @@ -150,7 +152,7 @@ bool list_lru_del(struct list_lru *lru, struct list_head *item) spin_lock(&nlru->lock); if (!list_empty(item)) { - l = list_lru_from_kmem(nlru, item, NULL); + l = list_lru_from_kmem(lru, nid, item, NULL); list_del_init(item); l->nr_items--; nlru->nr_items--; @@ -180,12 +182,11 @@ EXPORT_SYMBOL_GPL(list_lru_isolate_move); unsigned long list_lru_count_one(struct list_lru *lru, int nid, struct mem_cgroup *memcg) { - struct list_lru_node *nlru = &lru->node[nid]; struct list_lru_one *l; long count; rcu_read_lock(); - l = list_lru_from_memcg_idx(nlru, memcg_cache_id(memcg)); + l = list_lru_from_memcg_idx(lru, nid, memcg_cache_id(memcg)); count = READ_ONCE(l->nr_items); rcu_read_unlock(); @@ -206,16 +207,16 @@ unsigned long list_lru_count_node(struct list_lru *lru, int nid) EXPORT_SYMBOL_GPL(list_lru_count_node); static unsigned long -__list_lru_walk_one(struct list_lru_node *nlru, int memcg_idx, +__list_lru_walk_one(struct list_lru *lru, int nid, int memcg_idx, list_lru_walk_cb isolate, void *cb_arg, unsigned long *nr_to_walk) { - + struct list_lru_node *nlru = &lru->node[nid]; struct list_lru_one *l; struct list_head *item, *n; unsigned long isolated = 0; - l = list_lru_from_memcg_idx(nlru, memcg_idx); + l = list_lru_from_memcg_idx(lru, nid, memcg_idx); restart: list_for_each_safe(item, n, &l->list) { enum lru_status ret; @@ -272,8 +273,8 @@ list_lru_walk_one(struct list_lru *lru, int nid, struct mem_cgroup *memcg, unsigned long ret; spin_lock(&nlru->lock); - ret = __list_lru_walk_one(nlru, memcg_cache_id(memcg), isolate, cb_arg, - nr_to_walk); + ret = __list_lru_walk_one(lru, nid, memcg_cache_id(memcg), isolate, + cb_arg, nr_to_walk); spin_unlock(&nlru->lock); return 
ret; } @@ -288,8 +289,8 @@ list_lru_walk_one_irq(struct list_lru *lru, int nid, struct mem_cgroup *memcg, unsigned long ret; spin_lock_irq(&nlru->lock); - ret = __list_lru_walk_one(nlru, memcg_cache_id(memcg), isolate, cb_arg, - nr_to_walk); + ret = __list_lru_walk_one(lru, nid, memcg_cache_id(memcg), isolate, + cb_arg, nr_to_walk); spin_unlock_irq(&nlru->lock); return ret; } @@ -308,7 +309,7 @@ unsigned long list_lru_walk_node(struct list_lru *lru, int nid, struct list_lru_node *nlru = &lru->node[nid]; spin_lock(&nlru->lock); - isolated += __list_lru_walk_one(nlru, memcg_idx, + isolated += __list_lru_walk_one(lru, nid, memcg_idx, isolate, cb_arg, nr_to_walk); spin_unlock(&nlru->lock); @@ -328,166 +329,111 @@ static void init_one_lru(struct list_lru_one *l) } #ifdef CONFIG_MEMCG_KMEM -static void __memcg_destroy_list_lru_node(struct list_lru_memcg *memcg_lrus, - int begin, int end) +static void memcg_destroy_list_lru_range(struct list_lru_memcg *mlrus, + int begin, int end) { int i; for (i = begin; i < end; i++) - kfree(memcg_lrus->lru[i]); + kfree(mlrus->mlru[i]); } -static int __memcg_init_list_lru_node(struct list_lru_memcg *memcg_lrus, - int begin, int end) +static int memcg_init_list_lru_range(struct list_lru_memcg *mlrus, + int begin, int end) { int i; for (i = begin; i < end; i++) { - struct list_lru_one *l; + int nid; + struct list_lru_per_memcg *mlru; - l = kmalloc(sizeof(struct list_lru_one), GFP_KERNEL); - if (!l) + mlru = kmalloc(struct_size(mlru, node, nr_node_ids), GFP_KERNEL); + if (!mlru) goto fail; - init_one_lru(l); - memcg_lrus->lru[i] = l; + for_each_node(nid) + init_one_lru(&mlru->node[nid]); + mlrus->mlru[i] = mlru; } return 0; fail: - __memcg_destroy_list_lru_node(memcg_lrus, begin, i); + memcg_destroy_list_lru_range(mlrus, begin, i); return -ENOMEM; } -static int memcg_init_list_lru_node(struct list_lru_node *nlru) +static int memcg_init_list_lru(struct list_lru *lru, bool memcg_aware) { - struct list_lru_memcg *memcg_lrus; + struct list_lru_memcg *mlrus; int size = memcg_nr_cache_ids; - memcg_lrus = kvmalloc(struct_size(memcg_lrus, lru, size), GFP_KERNEL); - if (!memcg_lrus) + lru->memcg_aware = memcg_aware; + if (!memcg_aware) + return 0; + + mlrus = kvmalloc(struct_size(mlrus, mlru, size), GFP_KERNEL); + if (!mlrus) return -ENOMEM; - if (__memcg_init_list_lru_node(memcg_lrus, 0, size)) { - kvfree(memcg_lrus); + if (memcg_init_list_lru_range(mlrus, 0, size)) { + kvfree(mlrus); return -ENOMEM; } - RCU_INIT_POINTER(nlru->memcg_lrus, memcg_lrus); + RCU_INIT_POINTER(lru->mlrus, mlrus); return 0; } -static void memcg_destroy_list_lru_node(struct list_lru_node *nlru) +static void memcg_destroy_list_lru(struct list_lru *lru) { - struct list_lru_memcg *memcg_lrus; + struct list_lru_memcg *mlrus; + + if (!list_lru_memcg_aware(lru)) + return; + /* * This is called when shrinker has already been unregistered, * and nobody can use it. So, there is no need to use kvfree_rcu(). 
*/ - memcg_lrus = rcu_dereference_protected(nlru->memcg_lrus, true); - __memcg_destroy_list_lru_node(memcg_lrus, 0, memcg_nr_cache_ids); - kvfree(memcg_lrus); + mlrus = rcu_dereference_protected(lru->mlrus, true); + memcg_destroy_list_lru_range(mlrus, 0, memcg_nr_cache_ids); + kvfree(mlrus); } -static int memcg_update_list_lru_node(struct list_lru_node *nlru, - int old_size, int new_size) +static int memcg_update_list_lru(struct list_lru *lru, int old_size, int new_size) { struct list_lru_memcg *old, *new; BUG_ON(old_size > new_size); - old = rcu_dereference_protected(nlru->memcg_lrus, + old = rcu_dereference_protected(lru->mlrus, lockdep_is_held(&list_lrus_mutex)); - new = kvmalloc(struct_size(new, lru, new_size), GFP_KERNEL); + new = kvmalloc(struct_size(new, mlru, new_size), GFP_KERNEL); if (!new) return -ENOMEM; - if (__memcg_init_list_lru_node(new, old_size, new_size)) { + if (memcg_init_list_lru_range(new, old_size, new_size)) { kvfree(new); return -ENOMEM; } - memcpy(&new->lru, &old->lru, flex_array_size(new, lru, old_size)); - rcu_assign_pointer(nlru->memcg_lrus, new); + memcpy(&new->mlru, &old->mlru, flex_array_size(new, mlru, old_size)); + rcu_assign_pointer(lru->mlrus, new); kvfree_rcu(old, rcu); return 0; } -static void memcg_cancel_update_list_lru_node(struct list_lru_node *nlru, - int old_size, int new_size) -{ - struct list_lru_memcg *memcg_lrus; - - memcg_lrus = rcu_dereference_protected(nlru->memcg_lrus, - lockdep_is_held(&list_lrus_mutex)); - /* do not bother shrinking the array back to the old size, because we - * cannot handle allocation failures here */ - __memcg_destroy_list_lru_node(memcg_lrus, old_size, new_size); -} - -static int memcg_init_list_lru(struct list_lru *lru, bool memcg_aware) -{ - int i; - - lru->memcg_aware = memcg_aware; - - if (!memcg_aware) - return 0; - - for_each_node(i) { - if (memcg_init_list_lru_node(&lru->node[i])) - goto fail; - } - return 0; -fail: - for (i = i - 1; i >= 0; i--) { - if (!lru->node[i].memcg_lrus) - continue; - memcg_destroy_list_lru_node(&lru->node[i]); - } - return -ENOMEM; -} - -static void memcg_destroy_list_lru(struct list_lru *lru) -{ - int i; - - if (!list_lru_memcg_aware(lru)) - return; - - for_each_node(i) - memcg_destroy_list_lru_node(&lru->node[i]); -} - -static int memcg_update_list_lru(struct list_lru *lru, - int old_size, int new_size) -{ - int i; - - for_each_node(i) { - if (memcg_update_list_lru_node(&lru->node[i], - old_size, new_size)) - goto fail; - } - return 0; -fail: - for (i = i - 1; i >= 0; i--) { - if (!lru->node[i].memcg_lrus) - continue; - - memcg_cancel_update_list_lru_node(&lru->node[i], - old_size, new_size); - } - return -ENOMEM; -} - static void memcg_cancel_update_list_lru(struct list_lru *lru, int old_size, int new_size) { - int i; + struct list_lru_memcg *mlrus; - for_each_node(i) - memcg_cancel_update_list_lru_node(&lru->node[i], - old_size, new_size); + mlrus = rcu_dereference_protected(lru->mlrus, + lockdep_is_held(&list_lrus_mutex)); + /* + * Do not bother shrinking the array back to the old size, because we + * cannot handle allocation failures here. 
+ */ + memcg_destroy_list_lru_range(mlrus, old_size, new_size); } int memcg_update_all_list_lrus(int new_size) @@ -524,8 +470,8 @@ static void memcg_drain_list_lru_node(struct list_lru *lru, int nid, */ spin_lock_irq(&nlru->lock); - src = list_lru_from_memcg_idx(nlru, src_idx); - dst = list_lru_from_memcg_idx(nlru, dst_idx); + src = list_lru_from_memcg_idx(lru, nid, src_idx); + dst = list_lru_from_memcg_idx(lru, nid, dst_idx); list_splice_init(&src->list, &dst->list);

From patchwork Mon Feb 28 12:21:12 2022
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v6 02/16] mm: introduce kmem_cache_alloc_lru
Date: Mon, 28 Feb 2022 20:21:12 +0800
Message-Id: <20220228122126.37293-3-songmuchun@bytedance.com>
In-Reply-To: <20220228122126.37293-1-songmuchun@bytedance.com>
References: <20220228122126.37293-1-songmuchun@bytedance.com>

We currently allocate scope for every memcg to be tracked on every superblock instantiated in the system, regardless of whether that superblock is even accessible to that memcg. These huge memcg counts come from container hosts where memcgs are confined to just a small subset of the total number of superblocks instantiated at any given point in time. For these systems with huge container counts, list_lru does not need the capability of tracking every memcg on every superblock. What it comes down to is that the memcg can be added to the list_lru at the first insertion. So introduce kmem_cache_alloc_lru to allocate an object together with its list_lru structures. In a later patch, we will convert all inode and dentry allocations from kmem_cache_alloc to kmem_cache_alloc_lru.
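A minimal usage sketch of the new API (struct kmem_cache foo_cachep and list_lru foo_lru are hypothetical names; the real conversions of inode and dentry allocation follow in later patches):

    /* Hypothetical cache/LRU pair; only kmem_cache_alloc_lru() comes from this patch. */
    static struct kmem_cache *foo_cachep;
    static struct list_lru foo_lru;

    static void *foo_alloc(void)
    {
            /*
             * Besides allocating the object, this makes sure the current
             * memcg (and any ancestors that lack one) has a
             * list_lru_per_memcg in foo_lru, so that a later
             * list_lru_add() finds its list.
             */
            return kmem_cache_alloc_lru(foo_cachep, &foo_lru, GFP_KERNEL);
    }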
Signed-off-by: Muchun Song --- include/linux/list_lru.h | 4 ++ include/linux/memcontrol.h | 14 ++++++ include/linux/slab.h | 3 ++ mm/list_lru.c | 104 +++++++++++++++++++++++++++++++++++++++++---- mm/memcontrol.c | 14 ------ mm/slab.c | 39 +++++++++++------ mm/slab.h | 25 +++++++++-- mm/slob.c | 6 +++ mm/slub.c | 42 ++++++++++++------ 9 files changed, 198 insertions(+), 53 deletions(-) diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h index 729a27b6ff53..ab912c49334f 100644 --- a/include/linux/list_lru.h +++ b/include/linux/list_lru.h @@ -56,6 +56,8 @@ struct list_lru { struct list_head list; int shrinker_id; bool memcg_aware; + /* protects ->mlrus->mlru[i] */ + spinlock_t lock; /* for cgroup aware lrus points to per cgroup lists, otherwise NULL */ struct list_lru_memcg __rcu *mlrus; #endif @@ -72,6 +74,8 @@ int __list_lru_init(struct list_lru *lru, bool memcg_aware, #define list_lru_init_memcg(lru, shrinker) \ __list_lru_init((lru), true, NULL, shrinker) +int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru, + gfp_t gfp); int memcg_update_all_list_lrus(int num_memcgs); void memcg_drain_all_list_lrus(int src_idx, struct mem_cgroup *dst_memcg); diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 0abbd685703b..0d6d838cadfd 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -523,6 +523,20 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page) return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); } +static inline struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg) +{ + struct mem_cgroup *memcg; + + rcu_read_lock(); +retry: + memcg = obj_cgroup_memcg(objcg); + if (unlikely(!css_tryget(&memcg->css))) + goto retry; + rcu_read_unlock(); + + return memcg; +} + #ifdef CONFIG_MEMCG_KMEM /* * folio_memcg_kmem - Check if the folio has the memcg_kmem flag set. 
diff --git a/include/linux/slab.h b/include/linux/slab.h index 37bde99b74af..7ef2e14ccca1 100644 --- a/include/linux/slab.h +++ b/include/linux/slab.h @@ -135,6 +135,7 @@ #include +struct list_lru; struct mem_cgroup; /* * struct kmem_cache related prototypes @@ -416,6 +417,8 @@ static __always_inline unsigned int __kmalloc_index(size_t size, void *__kmalloc(size_t size, gfp_t flags) __assume_kmalloc_alignment __alloc_size(1); void *kmem_cache_alloc(struct kmem_cache *s, gfp_t flags) __assume_slab_alignment __malloc; +void *kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *lru, + gfp_t gfpflags) __assume_slab_alignment __malloc; void kmem_cache_free(struct kmem_cache *s, void *objp); /* diff --git a/mm/list_lru.c b/mm/list_lru.c index 7d1356241aa8..bffa80527723 100644 --- a/mm/list_lru.c +++ b/mm/list_lru.c @@ -13,6 +13,7 @@ #include #include #include "slab.h" +#include "internal.h" #ifdef CONFIG_MEMCG_KMEM static LIST_HEAD(memcg_list_lrus); @@ -338,22 +339,30 @@ static void memcg_destroy_list_lru_range(struct list_lru_memcg *mlrus, kfree(mlrus->mlru[i]); } +static struct list_lru_per_memcg *memcg_init_list_lru_one(gfp_t gfp) +{ + int nid; + struct list_lru_per_memcg *mlru; + + mlru = kmalloc(struct_size(mlru, node, nr_node_ids), gfp); + if (!mlru) + return NULL; + + for_each_node(nid) + init_one_lru(&mlru->node[nid]); + + return mlru; +} + static int memcg_init_list_lru_range(struct list_lru_memcg *mlrus, int begin, int end) { int i; for (i = begin; i < end; i++) { - int nid; - struct list_lru_per_memcg *mlru; - - mlru = kmalloc(struct_size(mlru, node, nr_node_ids), GFP_KERNEL); - if (!mlru) + mlrus->mlru[i] = memcg_init_list_lru_one(GFP_KERNEL); + if (!mlrus->mlru[i]) goto fail; - - for_each_node(nid) - init_one_lru(&mlru->node[nid]); - mlrus->mlru[i] = mlru; } return 0; fail: @@ -370,6 +379,8 @@ static int memcg_init_list_lru(struct list_lru *lru, bool memcg_aware) if (!memcg_aware) return 0; + spin_lock_init(&lru->lock); + mlrus = kvmalloc(struct_size(mlrus, mlru, size), GFP_KERNEL); if (!mlrus) return -ENOMEM; @@ -416,8 +427,11 @@ static int memcg_update_list_lru(struct list_lru *lru, int old_size, int new_siz return -ENOMEM; } + spin_lock_irq(&lru->lock); memcpy(&new->mlru, &old->mlru, flex_array_size(new, mlru, old_size)); rcu_assign_pointer(lru->mlrus, new); + spin_unlock_irq(&lru->lock); + kvfree_rcu(old, rcu); return 0; } @@ -502,6 +516,78 @@ void memcg_drain_all_list_lrus(int src_idx, struct mem_cgroup *dst_memcg) memcg_drain_list_lru(lru, src_idx, dst_memcg); mutex_unlock(&list_lrus_mutex); } + +static bool memcg_list_lru_allocated(struct mem_cgroup *memcg, + struct list_lru *lru) +{ + bool allocated; + int idx; + + idx = memcg->kmemcg_id; + if (unlikely(idx < 0)) + return true; + + rcu_read_lock(); + allocated = !!rcu_dereference(lru->mlrus)->mlru[idx]; + rcu_read_unlock(); + + return allocated; +} + +int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru, + gfp_t gfp) +{ + int i; + unsigned long flags; + struct list_lru_memcg *mlrus; + struct list_lru_memcg_table { + struct list_lru_per_memcg *mlru; + struct mem_cgroup *memcg; + } *table; + + if (!list_lru_memcg_aware(lru) || memcg_list_lru_allocated(memcg, lru)) + return 0; + + gfp &= GFP_RECLAIM_MASK; + table = kmalloc_array(memcg->css.cgroup->level, sizeof(*table), gfp); + if (!table) + return -ENOMEM; + + /* + * Because the list_lru can be reparented to the parent cgroup's + * list_lru, we should make sure that this cgroup and all its + * ancestors have allocated list_lru_per_memcg. 
+ */ + for (i = 0; memcg; memcg = parent_mem_cgroup(memcg), i++) { + if (memcg_list_lru_allocated(memcg, lru)) + break; + + table[i].memcg = memcg; + table[i].mlru = memcg_init_list_lru_one(gfp); + if (!table[i].mlru) { + while (i--) + kfree(table[i].mlru); + kfree(table); + return -ENOMEM; + } + } + + spin_lock_irqsave(&lru->lock, flags); + mlrus = rcu_dereference_protected(lru->mlrus, true); + while (i--) { + int index = table[i].memcg->kmemcg_id; + + if (mlrus->mlru[index]) + kfree(table[i].mlru); + else + mlrus->mlru[index] = table[i].mlru; + } + spin_unlock_irqrestore(&lru->lock, flags); + + kfree(table); + + return 0; +} #else static int memcg_init_list_lru(struct list_lru *lru, bool memcg_aware) { diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 36e9f38c919d..ffc5f0798de1 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -2748,20 +2748,6 @@ static void commit_charge(struct folio *folio, struct mem_cgroup *memcg) folio->memcg_data = (unsigned long)memcg; } -static struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg) -{ - struct mem_cgroup *memcg; - - rcu_read_lock(); -retry: - memcg = obj_cgroup_memcg(objcg); - if (unlikely(!css_tryget(&memcg->css))) - goto retry; - rcu_read_unlock(); - - return memcg; -} - #ifdef CONFIG_MEMCG_KMEM /* * The allocated objcg pointers array is not accounted directly. diff --git a/mm/slab.c b/mm/slab.c index ddf5737c63d9..d9dec7a8fd79 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -3211,7 +3211,7 @@ slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, size_t orig_ bool init = false; flags &= gfp_allowed_mask; - cachep = slab_pre_alloc_hook(cachep, &objcg, 1, flags); + cachep = slab_pre_alloc_hook(cachep, NULL, &objcg, 1, flags); if (unlikely(!cachep)) return NULL; @@ -3287,7 +3287,8 @@ __do_cache_alloc(struct kmem_cache *cachep, gfp_t flags) #endif /* CONFIG_NUMA */ static __always_inline void * -slab_alloc(struct kmem_cache *cachep, gfp_t flags, size_t orig_size, unsigned long caller) +slab_alloc(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags, + size_t orig_size, unsigned long caller) { unsigned long save_flags; void *objp; @@ -3295,7 +3296,7 @@ slab_alloc(struct kmem_cache *cachep, gfp_t flags, size_t orig_size, unsigned lo bool init = false; flags &= gfp_allowed_mask; - cachep = slab_pre_alloc_hook(cachep, &objcg, 1, flags); + cachep = slab_pre_alloc_hook(cachep, lru, &objcg, 1, flags); if (unlikely(!cachep)) return NULL; @@ -3484,6 +3485,18 @@ void ___cache_free(struct kmem_cache *cachep, void *objp, __free_one(ac, objp); } +static __always_inline +void *__kmem_cache_alloc_lru(struct kmem_cache *cachep, struct list_lru *lru, + gfp_t flags) +{ + void *ret = slab_alloc(cachep, lru, flags, cachep->object_size, _RET_IP_); + + trace_kmem_cache_alloc(_RET_IP_, ret, + cachep->object_size, cachep->size, flags); + + return ret; +} + /** * kmem_cache_alloc - Allocate an object * @cachep: The cache to allocate from. 
@@ -3496,15 +3509,17 @@ void ___cache_free(struct kmem_cache *cachep, void *objp, */ void *kmem_cache_alloc(struct kmem_cache *cachep, gfp_t flags) { - void *ret = slab_alloc(cachep, flags, cachep->object_size, _RET_IP_); - - trace_kmem_cache_alloc(_RET_IP_, ret, - cachep->object_size, cachep->size, flags); - - return ret; + return __kmem_cache_alloc_lru(cachep, NULL, flags); } EXPORT_SYMBOL(kmem_cache_alloc); +void *kmem_cache_alloc_lru(struct kmem_cache *cachep, struct list_lru *lru, + gfp_t flags) +{ + return __kmem_cache_alloc_lru(cachep, lru, flags); +} +EXPORT_SYMBOL(kmem_cache_alloc_lru); + static __always_inline void cache_alloc_debugcheck_after_bulk(struct kmem_cache *s, gfp_t flags, size_t size, void **p, unsigned long caller) @@ -3521,7 +3536,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size, size_t i; struct obj_cgroup *objcg = NULL; - s = slab_pre_alloc_hook(s, &objcg, size, flags); + s = slab_pre_alloc_hook(s, NULL, &objcg, size, flags); if (!s) return 0; @@ -3562,7 +3577,7 @@ kmem_cache_alloc_trace(struct kmem_cache *cachep, gfp_t flags, size_t size) { void *ret; - ret = slab_alloc(cachep, flags, size, _RET_IP_); + ret = slab_alloc(cachep, NULL, flags, size, _RET_IP_); ret = kasan_kmalloc(cachep, ret, size, flags); trace_kmalloc(_RET_IP_, ret, @@ -3689,7 +3704,7 @@ static __always_inline void *__do_kmalloc(size_t size, gfp_t flags, cachep = kmalloc_slab(size, flags); if (unlikely(ZERO_OR_NULL_PTR(cachep))) return cachep; - ret = slab_alloc(cachep, flags, size, caller); + ret = slab_alloc(cachep, NULL, flags, size, caller); ret = kasan_kmalloc(cachep, ret, size, flags); trace_kmalloc(caller, ret, diff --git a/mm/slab.h b/mm/slab.h index c7f2abc2b154..fd7ae2024897 100644 --- a/mm/slab.h +++ b/mm/slab.h @@ -231,6 +231,7 @@ struct kmem_cache { #include #include #include +#include /* * State of the slab allocator. @@ -472,6 +473,7 @@ static inline size_t obj_full_size(struct kmem_cache *s) * Returns false if the allocation should fail. 
*/ static inline bool memcg_slab_pre_alloc_hook(struct kmem_cache *s, + struct list_lru *lru, struct obj_cgroup **objcgp, size_t objects, gfp_t flags) { @@ -487,13 +489,26 @@ static inline bool memcg_slab_pre_alloc_hook(struct kmem_cache *s, if (!objcg) return true; - if (obj_cgroup_charge(objcg, flags, objects * obj_full_size(s))) { - obj_cgroup_put(objcg); - return false; + if (lru) { + int ret; + struct mem_cgroup *memcg; + + memcg = get_mem_cgroup_from_objcg(objcg); + ret = memcg_list_lru_alloc(memcg, lru, flags); + css_put(&memcg->css); + + if (ret) + goto out; } + if (obj_cgroup_charge(objcg, flags, objects * obj_full_size(s))) + goto out; + *objcgp = objcg; return true; +out: + obj_cgroup_put(objcg); + return false; } static inline void memcg_slab_post_alloc_hook(struct kmem_cache *s, @@ -598,6 +613,7 @@ static inline void memcg_free_slab_cgroups(struct slab *slab) } static inline bool memcg_slab_pre_alloc_hook(struct kmem_cache *s, + struct list_lru *lru, struct obj_cgroup **objcgp, size_t objects, gfp_t flags) { @@ -697,6 +713,7 @@ static inline size_t slab_ksize(const struct kmem_cache *s) } static inline struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s, + struct list_lru *lru, struct obj_cgroup **objcgp, size_t size, gfp_t flags) { @@ -707,7 +724,7 @@ static inline struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s, if (should_failslab(s, flags)) return NULL; - if (!memcg_slab_pre_alloc_hook(s, objcgp, size, flags)) + if (!memcg_slab_pre_alloc_hook(s, lru, objcgp, size, flags)) return NULL; return s; diff --git a/mm/slob.c b/mm/slob.c index 60c5842215f1..8a8795520361 100644 --- a/mm/slob.c +++ b/mm/slob.c @@ -635,6 +635,12 @@ void *kmem_cache_alloc(struct kmem_cache *cachep, gfp_t flags) } EXPORT_SYMBOL(kmem_cache_alloc); + +void *kmem_cache_alloc_lru(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags) +{ + return slob_alloc_node(cachep, flags, NUMA_NO_NODE); +} +EXPORT_SYMBOL(kmem_cache_alloc_lru); #ifdef CONFIG_NUMA void *__kmalloc_node(size_t size, gfp_t gfp, int node) { diff --git a/mm/slub.c b/mm/slub.c index 261474092e43..07cdd999c3fe 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -3131,7 +3131,7 @@ static __always_inline void maybe_wipe_obj_freeptr(struct kmem_cache *s, * * Otherwise we can simply pick the next object from the lockless free list. 
*/ -static __always_inline void *slab_alloc_node(struct kmem_cache *s, +static __always_inline void *slab_alloc_node(struct kmem_cache *s, struct list_lru *lru, gfp_t gfpflags, int node, unsigned long addr, size_t orig_size) { void *object; @@ -3141,7 +3141,7 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s, struct obj_cgroup *objcg = NULL; bool init = false; - s = slab_pre_alloc_hook(s, &objcg, 1, gfpflags); + s = slab_pre_alloc_hook(s, lru, &objcg, 1, gfpflags); if (!s) return NULL; @@ -3232,27 +3232,41 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s, return object; } -static __always_inline void *slab_alloc(struct kmem_cache *s, +static __always_inline void *slab_alloc(struct kmem_cache *s, struct list_lru *lru, gfp_t gfpflags, unsigned long addr, size_t orig_size) { - return slab_alloc_node(s, gfpflags, NUMA_NO_NODE, addr, orig_size); + return slab_alloc_node(s, lru, gfpflags, NUMA_NO_NODE, addr, orig_size); } -void *kmem_cache_alloc(struct kmem_cache *s, gfp_t gfpflags) +static __always_inline +void *__kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *lru, + gfp_t gfpflags) { - void *ret = slab_alloc(s, gfpflags, _RET_IP_, s->object_size); + void *ret = slab_alloc(s, lru, gfpflags, _RET_IP_, s->object_size); trace_kmem_cache_alloc(_RET_IP_, ret, s->object_size, s->size, gfpflags); return ret; } + +void *kmem_cache_alloc(struct kmem_cache *s, gfp_t gfpflags) +{ + return __kmem_cache_alloc_lru(s, NULL, gfpflags); +} EXPORT_SYMBOL(kmem_cache_alloc); +void *kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *lru, + gfp_t gfpflags) +{ + return __kmem_cache_alloc_lru(s, lru, gfpflags); +} +EXPORT_SYMBOL(kmem_cache_alloc_lru); + #ifdef CONFIG_TRACING void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size) { - void *ret = slab_alloc(s, gfpflags, _RET_IP_, size); + void *ret = slab_alloc(s, NULL, gfpflags, _RET_IP_, size); trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags); ret = kasan_kmalloc(s, ret, size, gfpflags); return ret; @@ -3263,7 +3277,7 @@ EXPORT_SYMBOL(kmem_cache_alloc_trace); #ifdef CONFIG_NUMA void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, int node) { - void *ret = slab_alloc_node(s, gfpflags, node, _RET_IP_, s->object_size); + void *ret = slab_alloc_node(s, NULL, gfpflags, node, _RET_IP_, s->object_size); trace_kmem_cache_alloc_node(_RET_IP_, ret, s->object_size, s->size, gfpflags, node); @@ -3277,7 +3291,7 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s, gfp_t gfpflags, int node, size_t size) { - void *ret = slab_alloc_node(s, gfpflags, node, _RET_IP_, size); + void *ret = slab_alloc_node(s, NULL, gfpflags, node, _RET_IP_, size); trace_kmalloc_node(_RET_IP_, ret, size, s->size, gfpflags, node); @@ -3667,7 +3681,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size, struct obj_cgroup *objcg = NULL; /* memcg and kmem_cache debug support */ - s = slab_pre_alloc_hook(s, &objcg, size, flags); + s = slab_pre_alloc_hook(s, NULL, &objcg, size, flags); if (unlikely(!s)) return false; /* @@ -4417,7 +4431,7 @@ void *__kmalloc(size_t size, gfp_t flags) if (unlikely(ZERO_OR_NULL_PTR(s))) return s; - ret = slab_alloc(s, flags, _RET_IP_, size); + ret = slab_alloc(s, NULL, flags, _RET_IP_, size); trace_kmalloc(_RET_IP_, ret, size, s->size, flags); @@ -4465,7 +4479,7 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node) if (unlikely(ZERO_OR_NULL_PTR(s))) return s; - ret = slab_alloc_node(s, flags, node, _RET_IP_, size); + ret = slab_alloc_node(s, 
NULL, flags, node, _RET_IP_, size); trace_kmalloc_node(_RET_IP_, ret, size, s->size, flags, node); @@ -4923,7 +4937,7 @@ void *__kmalloc_track_caller(size_t size, gfp_t gfpflags, unsigned long caller) if (unlikely(ZERO_OR_NULL_PTR(s))) return s; - ret = slab_alloc(s, gfpflags, caller, size); + ret = slab_alloc(s, NULL, gfpflags, caller, size); /* Honor the call site pointer we received. */ trace_kmalloc(caller, ret, size, s->size, gfpflags); @@ -4954,7 +4968,7 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags, if (unlikely(ZERO_OR_NULL_PTR(s))) return s; - ret = slab_alloc_node(s, gfpflags, node, caller, size); + ret = slab_alloc_node(s, NULL, gfpflags, node, caller, size); /* Honor the call site pointer we received. */ trace_kmalloc_node(caller, ret, size, s->size, gfpflags, node);

From patchwork Mon Feb 28 12:21:13 2022
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v6 03/16] fs: introduce alloc_inode_sb() to allocate filesystems specific inode
Date: Mon, 28 Feb 2022 20:21:13 +0800
Message-Id: <20220228122126.37293-4-songmuchun@bytedance.com>
In-Reply-To: <20220228122126.37293-1-songmuchun@bytedance.com>
References: <20220228122126.37293-1-songmuchun@bytedance.com>

The allocated inode cache is supposed to be added to its memcg list_lru, which should be allocated in advance as well. That can be done by kmem_cache_alloc_lru(), which allocates the object and its list_lru. File systems are the main users of it, so introduce alloc_inode_sb() to allocate filesystem-specific inodes and set up the inode reclaim context properly. File systems are supposed to use alloc_inode_sb() to allocate inodes. In later patches, we will convert all users to the new API.
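A sketch of the intended call pattern in a filesystem's ->alloc_inode (struct foo_inode_info and foo_inode_cachep are stand-in names):

    static struct inode *foo_alloc_inode(struct super_block *sb)
    {
            struct foo_inode_info *ei;

            /*
             * Equivalent to kmem_cache_alloc_lru(foo_inode_cachep,
             * &sb->s_inode_lru, GFP_KERNEL), per the hunk below: the inode
             * is later added to sb->s_inode_lru, so that list_lru's
             * per-memcg structures must exist by allocation time.
             */
            ei = alloc_inode_sb(sb, foo_inode_cachep, GFP_KERNEL);
            if (!ei)
                    return NULL;
            return &ei->vfs_inode;
    }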
Signed-off-by: Muchun Song
Reviewed-by: Roman Gushchin
--- Documentation/filesystems/porting.rst | 6 ++++++ fs/inode.c | 2 +- include/linux/fs.h | 11 +++++++++++ 3 files changed, 18 insertions(+), 1 deletion(-) diff --git a/Documentation/filesystems/porting.rst b/Documentation/filesystems/porting.rst index bf19fd6b86e7..7c1583dbeb59 100644 --- a/Documentation/filesystems/porting.rst +++ b/Documentation/filesystems/porting.rst @@ -45,6 +45,12 @@ typically between calling iget_locked() and unlocking the inode. At some point that will become mandatory. +**mandatory** + +The foo_inode_info should always be allocated through alloc_inode_sb() rather +than kmem_cache_alloc() or kmalloc() related to set up the inode reclaim context +correctly. + --- **mandatory** diff --git a/fs/inode.c b/fs/inode.c index 63324df6fa27..9d9b422504d1 100644 --- a/fs/inode.c +++ b/fs/inode.c @@ -259,7 +259,7 @@ static struct inode *alloc_inode(struct super_block *sb) if (ops->alloc_inode) inode = ops->alloc_inode(sb); else - inode = kmem_cache_alloc(inode_cachep, GFP_KERNEL); + inode = alloc_inode_sb(sb, inode_cachep, GFP_KERNEL); if (!inode) return NULL; diff --git a/include/linux/fs.h b/include/linux/fs.h index e2d892b201b0..3d56b02667f4 100644 --- a/include/linux/fs.h +++ b/include/linux/fs.h @@ -42,6 +42,7 @@ #include #include #include +#include #include #include @@ -3108,6 +3109,16 @@ extern void free_inode_nonrcu(struct inode *inode); extern int should_remove_suid(struct dentry *); extern int file_remove_privs(struct file *); +/* + * This must be used for allocating filesystems specific inodes to set + * up the inode reclaim context correctly. + */ +static inline void * +alloc_inode_sb(struct super_block *sb, struct kmem_cache *cache, gfp_t gfp) +{ + return kmem_cache_alloc_lru(cache, &sb->s_inode_lru, gfp); +} + extern void __insert_inode_hash(struct inode *, unsigned long hashval); static inline void insert_inode_hash(struct inode *inode) {

From patchwork Mon Feb 28 12:21:14 2022
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v6 04/16] fs: allocate inode by using alloc_inode_sb()
Date: Mon, 28 Feb 2022 20:21:14 +0800
Message-Id: <20220228122126.37293-5-songmuchun@bytedance.com>
In-Reply-To: <20220228122126.37293-1-songmuchun@bytedance.com>
References: <20220228122126.37293-1-songmuchun@bytedance.com>

The inode allocation is supposed to use alloc_inode_sb(), so convert the kmem_cache_alloc() calls of all filesystems to alloc_inode_sb().
Signed-off-by: Muchun Song Acked-by: Theodore Ts'o [ext4] Acked-by: Roman Gushchin --- block/bdev.c | 2 +- drivers/dax/super.c | 2 +- fs/9p/vfs_inode.c | 2 +- fs/adfs/super.c | 2 +- fs/affs/super.c | 2 +- fs/afs/super.c | 2 +- fs/befs/linuxvfs.c | 2 +- fs/bfs/inode.c | 2 +- fs/btrfs/inode.c | 2 +- fs/ceph/inode.c | 2 +- fs/cifs/cifsfs.c | 2 +- fs/coda/inode.c | 2 +- fs/ecryptfs/super.c | 2 +- fs/efs/super.c | 2 +- fs/erofs/super.c | 2 +- fs/exfat/super.c | 2 +- fs/ext2/super.c | 2 +- fs/ext4/super.c | 2 +- fs/fat/inode.c | 2 +- fs/freevxfs/vxfs_super.c | 2 +- fs/fuse/inode.c | 2 +- fs/gfs2/super.c | 2 +- fs/hfs/super.c | 2 +- fs/hfsplus/super.c | 2 +- fs/hostfs/hostfs_kern.c | 2 +- fs/hpfs/super.c | 2 +- fs/hugetlbfs/inode.c | 2 +- fs/isofs/inode.c | 2 +- fs/jffs2/super.c | 2 +- fs/jfs/super.c | 2 +- fs/minix/inode.c | 2 +- fs/nfs/inode.c | 2 +- fs/nilfs2/super.c | 2 +- fs/ntfs/inode.c | 2 +- fs/ntfs3/super.c | 2 +- fs/ocfs2/dlmfs/dlmfs.c | 2 +- fs/ocfs2/super.c | 2 +- fs/openpromfs/inode.c | 2 +- fs/orangefs/super.c | 2 +- fs/overlayfs/super.c | 2 +- fs/proc/inode.c | 2 +- fs/qnx4/inode.c | 2 +- fs/qnx6/inode.c | 2 +- fs/reiserfs/super.c | 2 +- fs/romfs/super.c | 2 +- fs/squashfs/super.c | 2 +- fs/sysv/inode.c | 2 +- fs/ubifs/super.c | 2 +- fs/udf/super.c | 2 +- fs/ufs/super.c | 2 +- fs/vboxsf/super.c | 2 +- fs/xfs/xfs_icache.c | 2 +- fs/zonefs/super.c | 2 +- ipc/mqueue.c | 2 +- mm/shmem.c | 2 +- net/socket.c | 2 +- net/sunrpc/rpc_pipe.c | 2 +- 57 files changed, 57 insertions(+), 57 deletions(-) diff --git a/block/bdev.c b/block/bdev.c index 102837a37051..ac05343d410b 100644 --- a/block/bdev.c +++ b/block/bdev.c @@ -385,7 +385,7 @@ static struct kmem_cache * bdev_cachep __read_mostly; static struct inode *bdev_alloc_inode(struct super_block *sb) { - struct bdev_inode *ei = kmem_cache_alloc(bdev_cachep, GFP_KERNEL); + struct bdev_inode *ei = alloc_inode_sb(sb, bdev_cachep, GFP_KERNEL); if (!ei) return NULL; diff --git a/drivers/dax/super.c b/drivers/dax/super.c index e3029389d809..aedc15eabeac 100644 --- a/drivers/dax/super.c +++ b/drivers/dax/super.c @@ -282,7 +282,7 @@ static struct inode *dax_alloc_inode(struct super_block *sb) struct dax_device *dax_dev; struct inode *inode; - dax_dev = kmem_cache_alloc(dax_cache, GFP_KERNEL); + dax_dev = alloc_inode_sb(sb, dax_cache, GFP_KERNEL); if (!dax_dev) return NULL; diff --git a/fs/9p/vfs_inode.c b/fs/9p/vfs_inode.c index 2a10242c79c7..84c3cf7dffa5 100644 --- a/fs/9p/vfs_inode.c +++ b/fs/9p/vfs_inode.c @@ -228,7 +228,7 @@ struct inode *v9fs_alloc_inode(struct super_block *sb) { struct v9fs_inode *v9inode; - v9inode = kmem_cache_alloc(v9fs_inode_cache, GFP_KERNEL); + v9inode = alloc_inode_sb(sb, v9fs_inode_cache, GFP_KERNEL); if (!v9inode) return NULL; #ifdef CONFIG_9P_FSCACHE diff --git a/fs/adfs/super.c b/fs/adfs/super.c index bdbd26e571ed..e8bfc38239cd 100644 --- a/fs/adfs/super.c +++ b/fs/adfs/super.c @@ -220,7 +220,7 @@ static struct kmem_cache *adfs_inode_cachep; static struct inode *adfs_alloc_inode(struct super_block *sb) { struct adfs_inode_info *ei; - ei = kmem_cache_alloc(adfs_inode_cachep, GFP_KERNEL); + ei = alloc_inode_sb(sb, adfs_inode_cachep, GFP_KERNEL); if (!ei) return NULL; return &ei->vfs_inode; diff --git a/fs/affs/super.c b/fs/affs/super.c index c609005a9eaa..4c5f30a83336 100644 --- a/fs/affs/super.c +++ b/fs/affs/super.c @@ -100,7 +100,7 @@ static struct inode *affs_alloc_inode(struct super_block *sb) { struct affs_inode_info *i; - i = kmem_cache_alloc(affs_inode_cachep, GFP_KERNEL); + i = alloc_inode_sb(sb, 
affs_inode_cachep, GFP_KERNEL); if (!i) return NULL; diff --git a/fs/afs/super.c b/fs/afs/super.c index 5ec9fd97eccc..7592c0f469f1 100644 --- a/fs/afs/super.c +++ b/fs/afs/super.c @@ -679,7 +679,7 @@ static struct inode *afs_alloc_inode(struct super_block *sb) { struct afs_vnode *vnode; - vnode = kmem_cache_alloc(afs_inode_cachep, GFP_KERNEL); + vnode = alloc_inode_sb(sb, afs_inode_cachep, GFP_KERNEL); if (!vnode) return NULL; diff --git a/fs/befs/linuxvfs.c b/fs/befs/linuxvfs.c index c1ba13d19024..b4b3567ac655 100644 --- a/fs/befs/linuxvfs.c +++ b/fs/befs/linuxvfs.c @@ -277,7 +277,7 @@ befs_alloc_inode(struct super_block *sb) { struct befs_inode_info *bi; - bi = kmem_cache_alloc(befs_inode_cachep, GFP_KERNEL); + bi = alloc_inode_sb(sb, befs_inode_cachep, GFP_KERNEL); if (!bi) return NULL; return &bi->vfs_inode; diff --git a/fs/bfs/inode.c b/fs/bfs/inode.c index fd691e4815c5..1926bec2c850 100644 --- a/fs/bfs/inode.c +++ b/fs/bfs/inode.c @@ -239,7 +239,7 @@ static struct kmem_cache *bfs_inode_cachep; static struct inode *bfs_alloc_inode(struct super_block *sb) { struct bfs_inode_info *bi; - bi = kmem_cache_alloc(bfs_inode_cachep, GFP_KERNEL); + bi = alloc_inode_sb(sb, bfs_inode_cachep, GFP_KERNEL); if (!bi) return NULL; return &bi->vfs_inode; diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index 3b2403b6127f..3c30e1030fbb 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -8759,7 +8759,7 @@ struct inode *btrfs_alloc_inode(struct super_block *sb) struct btrfs_inode *ei; struct inode *inode; - ei = kmem_cache_alloc(btrfs_inode_cachep, GFP_KERNEL); + ei = alloc_inode_sb(sb, btrfs_inode_cachep, GFP_KERNEL); if (!ei) return NULL; diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c index ef4a980a7bf3..9cfa6c730519 100644 --- a/fs/ceph/inode.c +++ b/fs/ceph/inode.c @@ -447,7 +447,7 @@ struct inode *ceph_alloc_inode(struct super_block *sb) struct ceph_inode_info *ci; int i; - ci = kmem_cache_alloc(ceph_inode_cachep, GFP_NOFS); + ci = alloc_inode_sb(sb, ceph_inode_cachep, GFP_NOFS); if (!ci) return NULL; diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c index 082c21478686..74ddece5b533 100644 --- a/fs/cifs/cifsfs.c +++ b/fs/cifs/cifsfs.c @@ -354,7 +354,7 @@ static struct inode * cifs_alloc_inode(struct super_block *sb) { struct cifsInodeInfo *cifs_inode; - cifs_inode = kmem_cache_alloc(cifs_inode_cachep, GFP_KERNEL); + cifs_inode = alloc_inode_sb(sb, cifs_inode_cachep, GFP_KERNEL); if (!cifs_inode) return NULL; cifs_inode->cifsAttrs = 0x20; /* default */ diff --git a/fs/coda/inode.c b/fs/coda/inode.c index d9f1bd7153df..2185328b65c7 100644 --- a/fs/coda/inode.c +++ b/fs/coda/inode.c @@ -43,7 +43,7 @@ static struct kmem_cache * coda_inode_cachep; static struct inode *coda_alloc_inode(struct super_block *sb) { struct coda_inode_info *ei; - ei = kmem_cache_alloc(coda_inode_cachep, GFP_KERNEL); + ei = alloc_inode_sb(sb, coda_inode_cachep, GFP_KERNEL); if (!ei) return NULL; memset(&ei->c_fid, 0, sizeof(struct CodaFid)); diff --git a/fs/ecryptfs/super.c b/fs/ecryptfs/super.c index 39116af0390f..0b1c878317ab 100644 --- a/fs/ecryptfs/super.c +++ b/fs/ecryptfs/super.c @@ -38,7 +38,7 @@ static struct inode *ecryptfs_alloc_inode(struct super_block *sb) struct ecryptfs_inode_info *inode_info; struct inode *inode = NULL; - inode_info = kmem_cache_alloc(ecryptfs_inode_info_cache, GFP_KERNEL); + inode_info = alloc_inode_sb(sb, ecryptfs_inode_info_cache, GFP_KERNEL); if (unlikely(!inode_info)) goto out; if (ecryptfs_init_crypt_stat(&inode_info->crypt_stat)) { diff --git a/fs/efs/super.c b/fs/efs/super.c index 
62b155b9366b..b287f47c165b 100644 --- a/fs/efs/super.c +++ b/fs/efs/super.c @@ -69,7 +69,7 @@ static struct kmem_cache * efs_inode_cachep; static struct inode *efs_alloc_inode(struct super_block *sb) { struct efs_inode_info *ei; - ei = kmem_cache_alloc(efs_inode_cachep, GFP_KERNEL); + ei = alloc_inode_sb(sb, efs_inode_cachep, GFP_KERNEL); if (!ei) return NULL; return &ei->vfs_inode; diff --git a/fs/erofs/super.c b/fs/erofs/super.c index 915eefe0d7e2..a9ffffc62c25 100644 --- a/fs/erofs/super.c +++ b/fs/erofs/super.c @@ -84,7 +84,7 @@ static void erofs_inode_init_once(void *ptr) static struct inode *erofs_alloc_inode(struct super_block *sb) { struct erofs_inode *vi = - kmem_cache_alloc(erofs_inode_cachep, GFP_KERNEL); + alloc_inode_sb(sb, erofs_inode_cachep, GFP_KERNEL); if (!vi) return NULL; diff --git a/fs/exfat/super.c b/fs/exfat/super.c index 8c9fb7dcec16..9f892903419a 100644 --- a/fs/exfat/super.c +++ b/fs/exfat/super.c @@ -183,7 +183,7 @@ static struct inode *exfat_alloc_inode(struct super_block *sb) { struct exfat_inode_info *ei; - ei = kmem_cache_alloc(exfat_inode_cachep, GFP_NOFS); + ei = alloc_inode_sb(sb, exfat_inode_cachep, GFP_NOFS); if (!ei) return NULL; diff --git a/fs/ext2/super.c b/fs/ext2/super.c index 94f1fbd7d3ac..e48aaf52ce93 100644 --- a/fs/ext2/super.c +++ b/fs/ext2/super.c @@ -180,7 +180,7 @@ static struct kmem_cache * ext2_inode_cachep; static struct inode *ext2_alloc_inode(struct super_block *sb) { struct ext2_inode_info *ei; - ei = kmem_cache_alloc(ext2_inode_cachep, GFP_KERNEL); + ei = alloc_inode_sb(sb, ext2_inode_cachep, GFP_KERNEL); if (!ei) return NULL; ei->i_block_alloc_info = NULL; diff --git a/fs/ext4/super.c b/fs/ext4/super.c index c5021ca0a28a..774eef9f7913 100644 --- a/fs/ext4/super.c +++ b/fs/ext4/super.c @@ -1316,7 +1316,7 @@ static struct inode *ext4_alloc_inode(struct super_block *sb) { struct ext4_inode_info *ei; - ei = kmem_cache_alloc(ext4_inode_cachep, GFP_NOFS); + ei = alloc_inode_sb(sb, ext4_inode_cachep, GFP_NOFS); if (!ei) return NULL; diff --git a/fs/fat/inode.c b/fs/fat/inode.c index a6f1c6d426d1..38796681cc86 100644 --- a/fs/fat/inode.c +++ b/fs/fat/inode.c @@ -745,7 +745,7 @@ static struct kmem_cache *fat_inode_cachep; static struct inode *fat_alloc_inode(struct super_block *sb) { struct msdos_inode_info *ei; - ei = kmem_cache_alloc(fat_inode_cachep, GFP_NOFS); + ei = alloc_inode_sb(sb, fat_inode_cachep, GFP_NOFS); if (!ei) return NULL; diff --git a/fs/freevxfs/vxfs_super.c b/fs/freevxfs/vxfs_super.c index 578a5062706e..22eed5a73ac2 100644 --- a/fs/freevxfs/vxfs_super.c +++ b/fs/freevxfs/vxfs_super.c @@ -124,7 +124,7 @@ static struct inode *vxfs_alloc_inode(struct super_block *sb) { struct vxfs_inode_info *vi; - vi = kmem_cache_alloc(vxfs_inode_cachep, GFP_KERNEL); + vi = alloc_inode_sb(sb, vxfs_inode_cachep, GFP_KERNEL); if (!vi) return NULL; inode_init_once(&vi->vfs_inode); diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c index ee846ce371d8..2bfbd0557028 100644 --- a/fs/fuse/inode.c +++ b/fs/fuse/inode.c @@ -73,7 +73,7 @@ static struct inode *fuse_alloc_inode(struct super_block *sb) { struct fuse_inode *fi; - fi = kmem_cache_alloc(fuse_inode_cachep, GFP_KERNEL); + fi = alloc_inode_sb(sb, fuse_inode_cachep, GFP_KERNEL); if (!fi) return NULL; diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c index 64c67090f503..cf9cf66522b3 100644 --- a/fs/gfs2/super.c +++ b/fs/gfs2/super.c @@ -1425,7 +1425,7 @@ static struct inode *gfs2_alloc_inode(struct super_block *sb) { struct gfs2_inode *ip; - ip = kmem_cache_alloc(gfs2_inode_cachep, GFP_KERNEL); + ip = 
alloc_inode_sb(sb, gfs2_inode_cachep, GFP_KERNEL); if (!ip) return NULL; ip->i_flags = 0; diff --git a/fs/hfs/super.c b/fs/hfs/super.c index 12d9bae39363..6764afa98a6f 100644 --- a/fs/hfs/super.c +++ b/fs/hfs/super.c @@ -162,7 +162,7 @@ static struct inode *hfs_alloc_inode(struct super_block *sb) { struct hfs_inode_info *i; - i = kmem_cache_alloc(hfs_inode_cachep, GFP_KERNEL); + i = alloc_inode_sb(sb, hfs_inode_cachep, GFP_KERNEL); return i ? &i->vfs_inode : NULL; } diff --git a/fs/hfsplus/super.c b/fs/hfsplus/super.c index b9e3db3f855f..8479add998b5 100644 --- a/fs/hfsplus/super.c +++ b/fs/hfsplus/super.c @@ -624,7 +624,7 @@ static struct inode *hfsplus_alloc_inode(struct super_block *sb) { struct hfsplus_inode_info *i; - i = kmem_cache_alloc(hfsplus_inode_cachep, GFP_KERNEL); + i = alloc_inode_sb(sb, hfsplus_inode_cachep, GFP_KERNEL); return i ? &i->vfs_inode : NULL; } diff --git a/fs/hostfs/hostfs_kern.c b/fs/hostfs/hostfs_kern.c index ef481c3d9019..458300390c51 100644 --- a/fs/hostfs/hostfs_kern.c +++ b/fs/hostfs/hostfs_kern.c @@ -222,7 +222,7 @@ static struct inode *hostfs_alloc_inode(struct super_block *sb) { struct hostfs_inode_info *hi; - hi = kmem_cache_alloc(hostfs_inode_cache, GFP_KERNEL_ACCOUNT); + hi = alloc_inode_sb(sb, hostfs_inode_cache, GFP_KERNEL_ACCOUNT); if (hi == NULL) return NULL; hi->fd = -1; diff --git a/fs/hpfs/super.c b/fs/hpfs/super.c index a7dbfc892022..1cb89595b875 100644 --- a/fs/hpfs/super.c +++ b/fs/hpfs/super.c @@ -232,7 +232,7 @@ static struct kmem_cache * hpfs_inode_cachep; static struct inode *hpfs_alloc_inode(struct super_block *sb) { struct hpfs_inode_info *ei; - ei = kmem_cache_alloc(hpfs_inode_cachep, GFP_NOFS); + ei = alloc_inode_sb(sb, hpfs_inode_cachep, GFP_NOFS); if (!ei) return NULL; return &ei->vfs_inode; diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c index a7c6c7498be0..171212bdaae6 100644 --- a/fs/hugetlbfs/inode.c +++ b/fs/hugetlbfs/inode.c @@ -1110,7 +1110,7 @@ static struct inode *hugetlbfs_alloc_inode(struct super_block *sb) if (unlikely(!hugetlbfs_dec_free_inodes(sbinfo))) return NULL; - p = kmem_cache_alloc(hugetlbfs_inode_cachep, GFP_KERNEL); + p = alloc_inode_sb(sb, hugetlbfs_inode_cachep, GFP_KERNEL); if (unlikely(!p)) { hugetlbfs_inc_free_inodes(sbinfo); return NULL; diff --git a/fs/isofs/inode.c b/fs/isofs/inode.c index 0c6eacfcbeef..d7491692aea3 100644 --- a/fs/isofs/inode.c +++ b/fs/isofs/inode.c @@ -70,7 +70,7 @@ static struct kmem_cache *isofs_inode_cachep; static struct inode *isofs_alloc_inode(struct super_block *sb) { struct iso_inode_info *ei; - ei = kmem_cache_alloc(isofs_inode_cachep, GFP_KERNEL); + ei = alloc_inode_sb(sb, isofs_inode_cachep, GFP_KERNEL); if (!ei) return NULL; return &ei->vfs_inode; diff --git a/fs/jffs2/super.c b/fs/jffs2/super.c index 81ca58c10b72..7ea37f49f1e1 100644 --- a/fs/jffs2/super.c +++ b/fs/jffs2/super.c @@ -39,7 +39,7 @@ static struct inode *jffs2_alloc_inode(struct super_block *sb) { struct jffs2_inode_info *f; - f = kmem_cache_alloc(jffs2_inode_cachep, GFP_KERNEL); + f = alloc_inode_sb(sb, jffs2_inode_cachep, GFP_KERNEL); if (!f) return NULL; return &f->vfs_inode; diff --git a/fs/jfs/super.c b/fs/jfs/super.c index 24cbc9946e01..f1a13a74cddf 100644 --- a/fs/jfs/super.c +++ b/fs/jfs/super.c @@ -102,7 +102,7 @@ static struct inode *jfs_alloc_inode(struct super_block *sb) { struct jfs_inode_info *jfs_inode; - jfs_inode = kmem_cache_alloc(jfs_inode_cachep, GFP_NOFS); + jfs_inode = alloc_inode_sb(sb, jfs_inode_cachep, GFP_NOFS); if (!jfs_inode) return NULL; #ifdef CONFIG_QUOTA diff --git 
a/fs/minix/inode.c b/fs/minix/inode.c index a71f1cf894b9..8a0af80741b5 100644 --- a/fs/minix/inode.c +++ b/fs/minix/inode.c @@ -63,7 +63,7 @@ static struct kmem_cache * minix_inode_cachep; static struct inode *minix_alloc_inode(struct super_block *sb) { struct minix_inode_info *ei; - ei = kmem_cache_alloc(minix_inode_cachep, GFP_KERNEL); + ei = alloc_inode_sb(sb, minix_inode_cachep, GFP_KERNEL); if (!ei) return NULL; return &ei->vfs_inode; diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c index d96baa4450e3..3351c2de3e08 100644 --- a/fs/nfs/inode.c +++ b/fs/nfs/inode.c @@ -2238,7 +2238,7 @@ static int nfs_update_inode(struct inode *inode, struct nfs_fattr *fattr) struct inode *nfs_alloc_inode(struct super_block *sb) { struct nfs_inode *nfsi; - nfsi = kmem_cache_alloc(nfs_inode_cachep, GFP_KERNEL); + nfsi = alloc_inode_sb(sb, nfs_inode_cachep, GFP_KERNEL); if (!nfsi) return NULL; nfsi->flags = 0UL; diff --git a/fs/nilfs2/super.c b/fs/nilfs2/super.c index 63e5fa74016c..3e05c98631ec 100644 --- a/fs/nilfs2/super.c +++ b/fs/nilfs2/super.c @@ -151,7 +151,7 @@ struct inode *nilfs_alloc_inode(struct super_block *sb) { struct nilfs_inode_info *ii; - ii = kmem_cache_alloc(nilfs_inode_cachep, GFP_NOFS); + ii = alloc_inode_sb(sb, nilfs_inode_cachep, GFP_NOFS); if (!ii) return NULL; ii->i_bh = NULL; diff --git a/fs/ntfs/inode.c b/fs/ntfs/inode.c index 4474adb393ca..fca18ac72b4f 100644 --- a/fs/ntfs/inode.c +++ b/fs/ntfs/inode.c @@ -310,7 +310,7 @@ struct inode *ntfs_alloc_big_inode(struct super_block *sb) ntfs_inode *ni; ntfs_debug("Entering."); - ni = kmem_cache_alloc(ntfs_big_inode_cache, GFP_NOFS); + ni = alloc_inode_sb(sb, ntfs_big_inode_cache, GFP_NOFS); if (likely(ni != NULL)) { ni->state = 0; return VFS_I(ni); diff --git a/fs/ntfs3/super.c b/fs/ntfs3/super.c index 29813200c7af..278dcf502410 100644 --- a/fs/ntfs3/super.c +++ b/fs/ntfs3/super.c @@ -399,7 +399,7 @@ static struct kmem_cache *ntfs_inode_cachep; static struct inode *ntfs_alloc_inode(struct super_block *sb) { - struct ntfs_inode *ni = kmem_cache_alloc(ntfs_inode_cachep, GFP_NOFS); + struct ntfs_inode *ni = alloc_inode_sb(sb, ntfs_inode_cachep, GFP_NOFS); if (!ni) return NULL; diff --git a/fs/ocfs2/dlmfs/dlmfs.c b/fs/ocfs2/dlmfs/dlmfs.c index fa0a14f199eb..e360543ad7e7 100644 --- a/fs/ocfs2/dlmfs/dlmfs.c +++ b/fs/ocfs2/dlmfs/dlmfs.c @@ -280,7 +280,7 @@ static struct inode *dlmfs_alloc_inode(struct super_block *sb) { struct dlmfs_inode_private *ip; - ip = kmem_cache_alloc(dlmfs_inode_cache, GFP_NOFS); + ip = alloc_inode_sb(sb, dlmfs_inode_cache, GFP_NOFS); if (!ip) return NULL; diff --git a/fs/ocfs2/super.c b/fs/ocfs2/super.c index 2772dec9dcea..754633754cc9 100644 --- a/fs/ocfs2/super.c +++ b/fs/ocfs2/super.c @@ -548,7 +548,7 @@ static struct inode *ocfs2_alloc_inode(struct super_block *sb) { struct ocfs2_inode_info *oi; - oi = kmem_cache_alloc(ocfs2_inode_cachep, GFP_NOFS); + oi = alloc_inode_sb(sb, ocfs2_inode_cachep, GFP_NOFS); if (!oi) return NULL; diff --git a/fs/openpromfs/inode.c b/fs/openpromfs/inode.c index f825176ff4ed..f0b7f4d51a17 100644 --- a/fs/openpromfs/inode.c +++ b/fs/openpromfs/inode.c @@ -335,7 +335,7 @@ static struct inode *openprom_alloc_inode(struct super_block *sb) { struct op_inode_info *oi; - oi = kmem_cache_alloc(op_inode_cachep, GFP_KERNEL); + oi = alloc_inode_sb(sb, op_inode_cachep, GFP_KERNEL); if (!oi) return NULL; diff --git a/fs/orangefs/super.c b/fs/orangefs/super.c index d90d8addbfc2..5254256a224d 100644 --- a/fs/orangefs/super.c +++ b/fs/orangefs/super.c @@ -107,7 +107,7 @@ static struct inode 
*orangefs_alloc_inode(struct super_block *sb) { struct orangefs_inode_s *orangefs_inode; - orangefs_inode = kmem_cache_alloc(orangefs_inode_cache, GFP_KERNEL); + orangefs_inode = alloc_inode_sb(sb, orangefs_inode_cache, GFP_KERNEL); if (!orangefs_inode) return NULL; diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c index 7bb0a47cb615..001cdbb8f015 100644 --- a/fs/overlayfs/super.c +++ b/fs/overlayfs/super.c @@ -174,7 +174,7 @@ static struct kmem_cache *ovl_inode_cachep; static struct inode *ovl_alloc_inode(struct super_block *sb) { - struct ovl_inode *oi = kmem_cache_alloc(ovl_inode_cachep, GFP_KERNEL); + struct ovl_inode *oi = alloc_inode_sb(sb, ovl_inode_cachep, GFP_KERNEL); if (!oi) return NULL; diff --git a/fs/proc/inode.c b/fs/proc/inode.c index f84355c5a36d..73aeb4e6d32e 100644 --- a/fs/proc/inode.c +++ b/fs/proc/inode.c @@ -66,7 +66,7 @@ static struct inode *proc_alloc_inode(struct super_block *sb) { struct proc_inode *ei; - ei = kmem_cache_alloc(proc_inode_cachep, GFP_KERNEL); + ei = alloc_inode_sb(sb, proc_inode_cachep, GFP_KERNEL); if (!ei) return NULL; ei->pid = NULL; diff --git a/fs/qnx4/inode.c b/fs/qnx4/inode.c index 3fb7fc819b4f..a635bb6615e9 100644 --- a/fs/qnx4/inode.c +++ b/fs/qnx4/inode.c @@ -338,7 +338,7 @@ static struct kmem_cache *qnx4_inode_cachep; static struct inode *qnx4_alloc_inode(struct super_block *sb) { struct qnx4_inode_info *ei; - ei = kmem_cache_alloc(qnx4_inode_cachep, GFP_KERNEL); + ei = alloc_inode_sb(sb, qnx4_inode_cachep, GFP_KERNEL); if (!ei) return NULL; return &ei->vfs_inode; diff --git a/fs/qnx6/inode.c b/fs/qnx6/inode.c index 61191f7bdf62..9d8e7e9788a1 100644 --- a/fs/qnx6/inode.c +++ b/fs/qnx6/inode.c @@ -597,7 +597,7 @@ static struct kmem_cache *qnx6_inode_cachep; static struct inode *qnx6_alloc_inode(struct super_block *sb) { struct qnx6_inode_info *ei; - ei = kmem_cache_alloc(qnx6_inode_cachep, GFP_KERNEL); + ei = alloc_inode_sb(sb, qnx6_inode_cachep, GFP_KERNEL); if (!ei) return NULL; return &ei->vfs_inode; diff --git a/fs/reiserfs/super.c b/fs/reiserfs/super.c index 82e09901462e..756c4916c7ae 100644 --- a/fs/reiserfs/super.c +++ b/fs/reiserfs/super.c @@ -639,7 +639,7 @@ static struct kmem_cache *reiserfs_inode_cachep; static struct inode *reiserfs_alloc_inode(struct super_block *sb) { struct reiserfs_inode_info *ei; - ei = kmem_cache_alloc(reiserfs_inode_cachep, GFP_KERNEL); + ei = alloc_inode_sb(sb, reiserfs_inode_cachep, GFP_KERNEL); if (!ei) return NULL; atomic_set(&ei->openers, 0); diff --git a/fs/romfs/super.c b/fs/romfs/super.c index 259f684d9236..9e6bbb4219de 100644 --- a/fs/romfs/super.c +++ b/fs/romfs/super.c @@ -375,7 +375,7 @@ static struct inode *romfs_alloc_inode(struct super_block *sb) { struct romfs_inode_info *inode; - inode = kmem_cache_alloc(romfs_inode_cachep, GFP_KERNEL); + inode = alloc_inode_sb(sb, romfs_inode_cachep, GFP_KERNEL); return inode ? &inode->vfs_inode : NULL; } diff --git a/fs/squashfs/super.c b/fs/squashfs/super.c index b1b556dbce12..4f74abbc1a54 100644 --- a/fs/squashfs/super.c +++ b/fs/squashfs/super.c @@ -584,7 +584,7 @@ static void __exit exit_squashfs_fs(void) static struct inode *squashfs_alloc_inode(struct super_block *sb) { struct squashfs_inode_info *ei = - kmem_cache_alloc(squashfs_inode_cachep, GFP_KERNEL); + alloc_inode_sb(sb, squashfs_inode_cachep, GFP_KERNEL); return ei ? 
&ei->vfs_inode : NULL; } diff --git a/fs/sysv/inode.c b/fs/sysv/inode.c index be47263b8605..9e8d4a6fb2f3 100644 --- a/fs/sysv/inode.c +++ b/fs/sysv/inode.c @@ -306,7 +306,7 @@ static struct inode *sysv_alloc_inode(struct super_block *sb) { struct sysv_inode_info *si; - si = kmem_cache_alloc(sysv_inode_cachep, GFP_KERNEL); + si = alloc_inode_sb(sb, sysv_inode_cachep, GFP_KERNEL); if (!si) return NULL; return &si->vfs_inode; diff --git a/fs/ubifs/super.c b/fs/ubifs/super.c index aa7a1381c457..bad67455215f 100644 --- a/fs/ubifs/super.c +++ b/fs/ubifs/super.c @@ -268,7 +268,7 @@ static struct inode *ubifs_alloc_inode(struct super_block *sb) { struct ubifs_inode *ui; - ui = kmem_cache_alloc(ubifs_inode_slab, GFP_NOFS); + ui = alloc_inode_sb(sb, ubifs_inode_slab, GFP_NOFS); if (!ui) return NULL; diff --git a/fs/udf/super.c b/fs/udf/super.c index f26b5e0b84b6..48871615e489 100644 --- a/fs/udf/super.c +++ b/fs/udf/super.c @@ -136,7 +136,7 @@ static struct kmem_cache *udf_inode_cachep; static struct inode *udf_alloc_inode(struct super_block *sb) { struct udf_inode_info *ei; - ei = kmem_cache_alloc(udf_inode_cachep, GFP_KERNEL); + ei = alloc_inode_sb(sb, udf_inode_cachep, GFP_KERNEL); if (!ei) return NULL; diff --git a/fs/ufs/super.c b/fs/ufs/super.c index 00a01471ea05..23377c1baed9 100644 --- a/fs/ufs/super.c +++ b/fs/ufs/super.c @@ -1443,7 +1443,7 @@ static struct inode *ufs_alloc_inode(struct super_block *sb) { struct ufs_inode_info *ei; - ei = kmem_cache_alloc(ufs_inode_cachep, GFP_NOFS); + ei = alloc_inode_sb(sb, ufs_inode_cachep, GFP_NOFS); if (!ei) return NULL; diff --git a/fs/vboxsf/super.c b/fs/vboxsf/super.c index 37dd3fe5b1e9..d2f6df69f611 100644 --- a/fs/vboxsf/super.c +++ b/fs/vboxsf/super.c @@ -241,7 +241,7 @@ static struct inode *vboxsf_alloc_inode(struct super_block *sb) { struct vboxsf_inode *sf_i; - sf_i = kmem_cache_alloc(vboxsf_inode_cachep, GFP_NOFS); + sf_i = alloc_inode_sb(sb, vboxsf_inode_cachep, GFP_NOFS); if (!sf_i) return NULL; diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c index 9644f938990c..9e434cb41e48 100644 --- a/fs/xfs/xfs_icache.c +++ b/fs/xfs/xfs_icache.c @@ -77,7 +77,7 @@ xfs_inode_alloc( * XXX: If this didn't occur in transactions, we could drop GFP_NOFAIL * and return NULL here on ENOMEM. 
*/ - ip = kmem_cache_alloc(xfs_inode_cache, GFP_KERNEL | __GFP_NOFAIL); + ip = alloc_inode_sb(mp->m_super, xfs_inode_cache, GFP_KERNEL | __GFP_NOFAIL); if (inode_init_always(mp->m_super, VFS_I(ip))) { kmem_cache_free(xfs_inode_cache, ip); diff --git a/fs/zonefs/super.c b/fs/zonefs/super.c index b76dfb310ab6..11682cd7045d 100644 --- a/fs/zonefs/super.c +++ b/fs/zonefs/super.c @@ -1137,7 +1137,7 @@ static struct inode *zonefs_alloc_inode(struct super_block *sb) { struct zonefs_inode_info *zi; - zi = kmem_cache_alloc(zonefs_inode_cachep, GFP_KERNEL); + zi = alloc_inode_sb(sb, zonefs_inode_cachep, GFP_KERNEL); if (!zi) return NULL; diff --git a/ipc/mqueue.c b/ipc/mqueue.c index 5becca9be867..7c08eb3c258d 100644 --- a/ipc/mqueue.c +++ b/ipc/mqueue.c @@ -486,7 +486,7 @@ static struct inode *mqueue_alloc_inode(struct super_block *sb) { struct mqueue_inode_info *ei; - ei = kmem_cache_alloc(mqueue_inode_cachep, GFP_KERNEL); + ei = alloc_inode_sb(sb, mqueue_inode_cachep, GFP_KERNEL); if (!ei) return NULL; return &ei->vfs_inode; diff --git a/mm/shmem.c b/mm/shmem.c index a09b29ec2b45..33aa848af84b 100644 --- a/mm/shmem.c +++ b/mm/shmem.c @@ -3707,7 +3707,7 @@ static struct kmem_cache *shmem_inode_cachep; static struct inode *shmem_alloc_inode(struct super_block *sb) { struct shmem_inode_info *info; - info = kmem_cache_alloc(shmem_inode_cachep, GFP_KERNEL); + info = alloc_inode_sb(sb, shmem_inode_cachep, GFP_KERNEL); if (!info) return NULL; return &info->vfs_inode; diff --git a/net/socket.c b/net/socket.c index 982eecad464c..6887840682bb 100644 --- a/net/socket.c +++ b/net/socket.c @@ -301,7 +301,7 @@ static struct inode *sock_alloc_inode(struct super_block *sb) { struct socket_alloc *ei; - ei = kmem_cache_alloc(sock_inode_cachep, GFP_KERNEL); + ei = alloc_inode_sb(sb, sock_inode_cachep, GFP_KERNEL); if (!ei) return NULL; init_waitqueue_head(&ei->socket.wq.wait); diff --git a/net/sunrpc/rpc_pipe.c b/net/sunrpc/rpc_pipe.c index 35588f0afa86..0b6034fab9ab 100644 --- a/net/sunrpc/rpc_pipe.c +++ b/net/sunrpc/rpc_pipe.c @@ -197,7 +197,7 @@ static struct inode * rpc_alloc_inode(struct super_block *sb) { struct rpc_inode *rpci; - rpci = kmem_cache_alloc(rpc_inode_cachep, GFP_KERNEL); + rpci = alloc_inode_sb(sb, rpc_inode_cachep, GFP_KERNEL); if (!rpci) return NULL; return &rpci->vfs_inode;

From patchwork Mon Feb 28 12:21:15 2022
From: Muchun Song
Subject: [PATCH v6 05/16] f2fs: allocate inode by using alloc_inode_sb()
Date: Mon, 28 Feb 2022 20:21:15 +0800
Message-Id: <20220228122126.37293-6-songmuchun@bytedance.com>
In-Reply-To: <20220228122126.37293-1-songmuchun@bytedance.com>

The inode allocation is supposed to use alloc_inode_sb(), so convert kmem_cache_alloc() to alloc_inode_sb(). Signed-off-by: Muchun Song Acked-by: Roman Gushchin --- fs/f2fs/super.c | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c index baefd398ec1a..9cc753a3944d 100644 --- a/fs/f2fs/super.c +++ b/fs/f2fs/super.c @@ -1345,8 +1345,12 @@ static struct inode *f2fs_alloc_inode(struct super_block *sb) { struct f2fs_inode_info *fi; - fi = f2fs_kmem_cache_alloc(f2fs_inode_cachep, - GFP_F2FS_ZERO, false, F2FS_SB(sb)); + if (time_to_inject(F2FS_SB(sb), FAULT_SLAB_ALLOC)) { + f2fs_show_injection_info(F2FS_SB(sb), FAULT_SLAB_ALLOC); + return NULL; + } + + fi = alloc_inode_sb(sb, f2fs_inode_cachep, GFP_F2FS_ZERO); if (!fi) return NULL;

From patchwork Mon Feb 28 12:21:16 2022
From: Muchun Song
Subject: [PATCH v6 06/16] nfs42: use a specific kmem_cache to allocate nfs4_xattr_entry
Date: Mon, 28 Feb 2022 20:21:16 +0800
Message-Id: <20220228122126.37293-7-songmuchun@bytedance.com>
In-Reply-To: <20220228122126.37293-1-songmuchun@bytedance.com>

If we want to add the allocated objects to their list_lru, we should use kmem_cache_alloc_lru() to allocate them. So introduce nfs4_xattr_entry_cachep, which is used to allocate nfs4_xattr_entry objects.
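[Editor's note: a minimal sketch of the allocation pattern this patch switches to; it uses the names introduced by the patch, and the list_lru_add() step is illustrative of where the entry eventually ends up, not a literal quote of the nfs code:

struct nfs4_xattr_entry *entry;

/* Objects destined for a list_lru are allocated against that lru. */
entry = kmem_cache_alloc_lru(nfs4_xattr_entry_cachep,
			     &nfs4_xattr_entry_lru, gfp);
if (entry)
	list_lru_add(&nfs4_xattr_entry_lru, &entry->lru);

Unlike a plain kmalloc()ed buffer, kmem_cache_alloc_lru() guarantees that the lru's per-memcg array covers the memcg the object is charged to, which is why a dedicated kmem_cache is needed here.]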
Signed-off-by: Muchun Song --- fs/nfs/nfs42xattr.c | 95 ++++++++++++++++++++++++++--------------------------- 1 file changed, 47 insertions(+), 48 deletions(-) diff --git a/fs/nfs/nfs42xattr.c b/fs/nfs/nfs42xattr.c index 1c4d2a05b401..5b7af9080db0 100644 --- a/fs/nfs/nfs42xattr.c +++ b/fs/nfs/nfs42xattr.c @@ -81,7 +81,7 @@ struct nfs4_xattr_entry { struct hlist_node hnode; struct list_head lru; struct list_head dispose; - char *xattr_name; + const char *xattr_name; void *xattr_value; size_t xattr_size; struct nfs4_xattr_bucket *bucket; @@ -98,6 +98,7 @@ static struct list_lru nfs4_xattr_entry_lru; static struct list_lru nfs4_xattr_large_entry_lru; static struct kmem_cache *nfs4_xattr_cache_cachep; +static struct kmem_cache *nfs4_xattr_entry_cachep; /* * Hashing helper functions. @@ -177,49 +178,27 @@ nfs4_xattr_alloc_entry(const char *name, const void *value, { struct nfs4_xattr_entry *entry; void *valp; - char *namep; - size_t alloclen, slen; - char *buf; - uint32_t flags; + const char *namep; + uint32_t flags = len > PAGE_SIZE ? NFS4_XATTR_ENTRY_EXTVAL : 0; + gfp_t gfp = GFP_KERNEL_ACCOUNT | GFP_NOFS; + struct list_lru *lru; BUILD_BUG_ON(sizeof(struct nfs4_xattr_entry) + XATTR_NAME_MAX + 1 > PAGE_SIZE); - alloclen = sizeof(struct nfs4_xattr_entry); - if (name != NULL) { - slen = strlen(name) + 1; - alloclen += slen; - } else - slen = 0; - - if (alloclen + len <= PAGE_SIZE) { - alloclen += len; - flags = 0; - } else { - flags = NFS4_XATTR_ENTRY_EXTVAL; - } - - buf = kmalloc(alloclen, GFP_KERNEL_ACCOUNT | GFP_NOFS); - if (buf == NULL) + lru = flags & NFS4_XATTR_ENTRY_EXTVAL ? &nfs4_xattr_large_entry_lru : + &nfs4_xattr_entry_lru; + entry = kmem_cache_alloc_lru(nfs4_xattr_entry_cachep, lru, gfp); + if (!entry) return NULL; - entry = (struct nfs4_xattr_entry *)buf; - - if (name != NULL) { - namep = buf + sizeof(struct nfs4_xattr_entry); - memcpy(namep, name, slen); - } else { - namep = NULL; - } - - - if (flags & NFS4_XATTR_ENTRY_EXTVAL) { - valp = kvmalloc(len, GFP_KERNEL_ACCOUNT | GFP_NOFS); - if (valp == NULL) { - kfree(buf); - return NULL; - } - } else if (len != 0) { - valp = buf + sizeof(struct nfs4_xattr_entry) + slen; + namep = kstrdup_const(name, gfp); + if (!namep && name) + goto free_buf; + + if (len != 0) { + valp = kvmalloc(len, gfp); + if (!valp) + goto free_name; } else valp = NULL; @@ -232,23 +211,23 @@ nfs4_xattr_alloc_entry(const char *name, const void *value, entry->flags = flags; entry->xattr_value = valp; - kref_init(&entry->ref); entry->xattr_name = namep; entry->xattr_size = len; - entry->bucket = NULL; - INIT_LIST_HEAD(&entry->lru); - INIT_LIST_HEAD(&entry->dispose); - INIT_HLIST_NODE(&entry->hnode); return entry; +free_name: + kfree_const(namep); +free_buf: + kmem_cache_free(nfs4_xattr_entry_cachep, entry); + return NULL; } static void nfs4_xattr_free_entry(struct nfs4_xattr_entry *entry) { - if (entry->flags & NFS4_XATTR_ENTRY_EXTVAL) - kvfree(entry->xattr_value); - kfree(entry); + kvfree(entry->xattr_value); + kfree_const(entry->xattr_name); + kmem_cache_free(nfs4_xattr_entry_cachep, entry); } static void @@ -289,7 +268,7 @@ nfs4_xattr_alloc_cache(void) { struct nfs4_xattr_cache *cache; - cache = kmem_cache_alloc(nfs4_xattr_cache_cachep, + cache = kmem_cache_alloc_lru(nfs4_xattr_cache_cachep, &nfs4_xattr_cache_lru, GFP_KERNEL_ACCOUNT | GFP_NOFS); if (cache == NULL) return NULL; @@ -992,6 +971,17 @@ static void nfs4_xattr_cache_init_once(void *p) INIT_LIST_HEAD(&cache->dispose); } +static void nfs4_xattr_entry_init_once(void *p) +{ + struct nfs4_xattr_entry *entry = 
p; + + kref_init(&entry->ref); + entry->bucket = NULL; + INIT_LIST_HEAD(&entry->lru); + INIT_LIST_HEAD(&entry->dispose); + INIT_HLIST_NODE(&entry->hnode); +} + int __init nfs4_xattr_cache_init(void) { int ret = 0; @@ -1003,6 +993,13 @@ int __init nfs4_xattr_cache_init(void) if (nfs4_xattr_cache_cachep == NULL) return -ENOMEM; + nfs4_xattr_entry_cachep = kmem_cache_create("nfs4_xattr_entry", + sizeof(struct nfs4_xattr_entry), 0, + (SLAB_RECLAIM_ACCOUNT | SLAB_MEM_SPREAD | SLAB_ACCOUNT), + nfs4_xattr_entry_init_once); + if (!nfs4_xattr_entry_cachep) + goto out5; + ret = list_lru_init_memcg(&nfs4_xattr_large_entry_lru, &nfs4_xattr_large_entry_shrinker); if (ret) @@ -1040,6 +1037,8 @@ out3: list_lru_destroy(&nfs4_xattr_large_entry_lru); out4: + kmem_cache_destroy(nfs4_xattr_entry_cachep); +out5: kmem_cache_destroy(nfs4_xattr_cache_cachep); return ret;

From patchwork Mon Feb 28 12:21:17 2022
From: Muchun Song
Subject: [PATCH v6 07/16] mm: dcache: use kmem_cache_alloc_lru() to allocate dentry
Date: Mon, 28 Feb 2022 20:21:17 +0800
Message-Id: <20220228122126.37293-8-songmuchun@bytedance.com>
In-Reply-To: <20220228122126.37293-1-songmuchun@bytedance.com>

Like the inode cache, the dentry will also be added to its memcg list_lru. So replace kmem_cache_alloc() with kmem_cache_alloc_lru() to allocate the dentry.
Signed-off-by: Muchun Song Acked-by: Roman Gushchin --- fs/dcache.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/fs/dcache.c b/fs/dcache.c index c84269c6e8bf..93f4f5ee07bf 100644 --- a/fs/dcache.c +++ b/fs/dcache.c @@ -1766,7 +1766,8 @@ static struct dentry *__d_alloc(struct super_block *sb, const struct qstr *name) char *dname; int err; - dentry = kmem_cache_alloc(dentry_cache, GFP_KERNEL); + dentry = kmem_cache_alloc_lru(dentry_cache, &sb->s_dentry_lru, + GFP_KERNEL); if (!dentry) return NULL;

From patchwork Mon Feb 28 12:21:18 2022
From: Muchun Song
Subject: [PATCH v6 08/16] xarray: use kmem_cache_alloc_lru to allocate xa_node
Date: Mon, 28 Feb 2022 20:21:18 +0800
Message-Id: <20220228122126.37293-9-songmuchun@bytedance.com>
In-Reply-To: <20220228122126.37293-1-songmuchun@bytedance.com>

The workingset code adds xa_node objects to the shadow_nodes list_lru, so the allocation of xa_node should be done by kmem_cache_alloc_lru(). Use xas_set_lru() to pass the list_lru that the xa_node will be inserted into, so that the xa_node reclaim context is set up correctly.
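[Editor's note: a short sketch of how a page-cache operation picks up the shadow_nodes lru through the mapping_set_update() macro extended below; mapping and index stand for any address_space and page index:

XA_STATE(xas, &mapping->i_pages, index);
/* Sets xas.xa_update and, with this patch, xas.xa_lru = &shadow_nodes. */
mapping_set_update(&xas, mapping);

Any xa_node allocated while this xas is in use then comes from kmem_cache_alloc_lru(radix_tree_node_cachep, xas.xa_lru, gfp) rather than a bare kmem_cache_alloc().]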
Signed-off-by: Muchun Song Acked-by: Johannes Weiner Acked-by: Roman Gushchin --- include/linux/swap.h | 5 ++++- include/linux/xarray.h | 9 ++++++++- lib/xarray.c | 10 +++++----- mm/workingset.c | 2 +- 4 files changed, 18 insertions(+), 8 deletions(-) diff --git a/include/linux/swap.h b/include/linux/swap.h index 1d38d9475c4d..3db431276d82 100644 --- a/include/linux/swap.h +++ b/include/linux/swap.h @@ -334,9 +334,12 @@ void workingset_activation(struct folio *folio); /* Only track the nodes of mappings with shadow entries */ void workingset_update_node(struct xa_node *node); +extern struct list_lru shadow_nodes; #define mapping_set_update(xas, mapping) do { \ - if (!dax_mapping(mapping) && !shmem_mapping(mapping)) \ + if (!dax_mapping(mapping) && !shmem_mapping(mapping)) { \ xas_set_update(xas, workingset_update_node); \ + xas_set_lru(xas, &shadow_nodes); \ + } \ } while (0) /* linux/mm/page_alloc.c */ diff --git a/include/linux/xarray.h b/include/linux/xarray.h index d6d5da6ed735..bb52b786be1b 100644 --- a/include/linux/xarray.h +++ b/include/linux/xarray.h @@ -1317,6 +1317,7 @@ struct xa_state { struct xa_node *xa_node; struct xa_node *xa_alloc; xa_update_node_t xa_update; + struct list_lru *xa_lru; }; /* @@ -1336,7 +1337,8 @@ struct xa_state { .xa_pad = 0, \ .xa_node = XAS_RESTART, \ .xa_alloc = NULL, \ - .xa_update = NULL \ + .xa_update = NULL, \ + .xa_lru = NULL, \ } /** @@ -1631,6 +1633,11 @@ static inline void xas_set_update(struct xa_state *xas, xa_update_node_t update) xas->xa_update = update; } +static inline void xas_set_lru(struct xa_state *xas, struct list_lru *lru) +{ + xas->xa_lru = lru; +} + /** * xas_next_entry() - Advance iterator to next present entry. * @xas: XArray operation state. diff --git a/lib/xarray.c b/lib/xarray.c index 6f47f6375808..b95e92598b9c 100644 --- a/lib/xarray.c +++ b/lib/xarray.c @@ -302,7 +302,7 @@ bool xas_nomem(struct xa_state *xas, gfp_t gfp) } if (xas->xa->xa_flags & XA_FLAGS_ACCOUNT) gfp |= __GFP_ACCOUNT; - xas->xa_alloc = kmem_cache_alloc(radix_tree_node_cachep, gfp); + xas->xa_alloc = kmem_cache_alloc_lru(radix_tree_node_cachep, xas->xa_lru, gfp); if (!xas->xa_alloc) return false; xas->xa_alloc->parent = NULL; @@ -334,10 +334,10 @@ static bool __xas_nomem(struct xa_state *xas, gfp_t gfp) gfp |= __GFP_ACCOUNT; if (gfpflags_allow_blocking(gfp)) { xas_unlock_type(xas, lock_type); - xas->xa_alloc = kmem_cache_alloc(radix_tree_node_cachep, gfp); + xas->xa_alloc = kmem_cache_alloc_lru(radix_tree_node_cachep, xas->xa_lru, gfp); xas_lock_type(xas, lock_type); } else { - xas->xa_alloc = kmem_cache_alloc(radix_tree_node_cachep, gfp); + xas->xa_alloc = kmem_cache_alloc_lru(radix_tree_node_cachep, xas->xa_lru, gfp); } if (!xas->xa_alloc) return false; @@ -371,7 +371,7 @@ static void *xas_alloc(struct xa_state *xas, unsigned int shift) if (xas->xa->xa_flags & XA_FLAGS_ACCOUNT) gfp |= __GFP_ACCOUNT; - node = kmem_cache_alloc(radix_tree_node_cachep, gfp); + node = kmem_cache_alloc_lru(radix_tree_node_cachep, xas->xa_lru, gfp); if (!node) { xas_set_err(xas, -ENOMEM); return NULL; @@ -1014,7 +1014,7 @@ void xas_split_alloc(struct xa_state *xas, void *entry, unsigned int order, void *sibling = NULL; struct xa_node *node; - node = kmem_cache_alloc(radix_tree_node_cachep, gfp); + node = kmem_cache_alloc_lru(radix_tree_node_cachep, xas->xa_lru, gfp); if (!node) goto nomem; node->array = xas->xa; diff --git a/mm/workingset.c b/mm/workingset.c index 8c03afe1d67c..979c7130c266 100644 --- a/mm/workingset.c +++ b/mm/workingset.c @@ -429,7 +429,7 @@ void 
workingset_activation(struct folio *folio) * point where they would still be useful. */ -static struct list_lru shadow_nodes; +struct list_lru shadow_nodes; void workingset_update_node(struct xa_node *node) {

From patchwork Mon Feb 28 12:21:19 2022
From: Muchun Song
Subject: [PATCH v6 09/16] mm: memcontrol: move memcg_online_kmem() to mem_cgroup_css_online()
Date: Mon, 28 Feb 2022 20:21:19 +0800
Message-Id: <20220228122126.37293-10-songmuchun@bytedance.com>
In-Reply-To: <20220228122126.37293-1-songmuchun@bytedance.com>

Moving memcg_online_kmem() to mem_cgroup_css_online() simplifies the code, and we then no longer need to set ->kmemcg_id to -1 to indicate that the memcg is offline. In the next patch, ->kmemcg_id will be used to synchronize list_lru reparenting, which requires that ->kmemcg_id not be changed.
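[Editor's note: a condensed sketch of mem_cgroup_css_online() after this patch, reconstructed from the diff below, to make the new unwind order easier to see:

static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
{
	struct mem_cgroup *memcg = mem_cgroup_from_css(css);

	if (memcg_online_kmem(memcg))
		goto remove_id;
	if (alloc_shrinker_info(memcg))
		goto offline_kmem;
	/* ... online state pins the memcg ID, the ID pins the CSS ... */
	return 0;
offline_kmem:
	memcg_offline_kmem(memcg);
remove_id:
	mem_cgroup_id_remove(memcg);
	return -ENOMEM;
}

]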
Signed-off-by: Muchun Song Acked-by: Roman Gushchin --- mm/memcontrol.c | 37 ++++++++++++++++--------------------- 1 file changed, 16 insertions(+), 21 deletions(-) diff --git a/mm/memcontrol.c b/mm/memcontrol.c index ffc5f0798de1..5bdf5184681c 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -3616,7 +3616,8 @@ static int memcg_online_kmem(struct mem_cgroup *memcg) if (cgroup_memory_nokmem) return 0; - BUG_ON(memcg->kmemcg_id >= 0); + if (unlikely(mem_cgroup_is_root(memcg))) + return 0; memcg_id = memcg_alloc_cache_id(); if (memcg_id < 0) @@ -3642,7 +3643,10 @@ static void memcg_offline_kmem(struct mem_cgroup *memcg) struct mem_cgroup *parent; int kmemcg_id; - if (memcg->kmemcg_id == -1) + if (cgroup_memory_nokmem) + return; + + if (unlikely(mem_cgroup_is_root(memcg))) return; parent = parent_mem_cgroup(memcg); @@ -3652,7 +3656,6 @@ static void memcg_offline_kmem(struct mem_cgroup *memcg) memcg_reparent_objcgs(memcg, parent); kmemcg_id = memcg->kmemcg_id; - BUG_ON(kmemcg_id < 0); /* * After we have finished memcg_reparent_objcgs(), all list_lrus @@ -3663,7 +3666,6 @@ static void memcg_offline_kmem(struct mem_cgroup *memcg) memcg_drain_all_list_lrus(kmemcg_id, parent); memcg_free_cache_id(kmemcg_id); - memcg->kmemcg_id = -1; } #else static int memcg_online_kmem(struct mem_cgroup *memcg) @@ -5178,7 +5180,6 @@ mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css) { struct mem_cgroup *parent = mem_cgroup_from_css(parent_css); struct mem_cgroup *memcg, *old_memcg; - long error = -ENOMEM; old_memcg = set_active_memcg(parent); memcg = mem_cgroup_alloc(); @@ -5207,34 +5208,26 @@ mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css) return &memcg->css; } - /* The following stuff does not apply to the root */ - error = memcg_online_kmem(memcg); - if (error) - goto fail; - if (cgroup_subsys_on_dfl(memory_cgrp_subsys) && !cgroup_memory_nosocket) static_branch_inc(&memcg_sockets_enabled_key); return &memcg->css; -fail: - mem_cgroup_id_remove(memcg); - mem_cgroup_free(memcg); - return ERR_PTR(error); } static int mem_cgroup_css_online(struct cgroup_subsys_state *css) { struct mem_cgroup *memcg = mem_cgroup_from_css(css); + if (memcg_online_kmem(memcg)) + goto remove_id; + /* * A memcg must be visible for expand_shrinker_info() * by the time the maps are allocated. So, we allocate maps * here, when for_each_mem_cgroup() can't skip it. 
*/ - if (alloc_shrinker_info(memcg)) { - mem_cgroup_id_remove(memcg); - return -ENOMEM; - } + if (alloc_shrinker_info(memcg)) + goto offline_kmem; /* Online state pins memcg ID, memcg ID pins CSS */ refcount_set(&memcg->id.ref, 1); @@ -5244,6 +5237,11 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css) queue_delayed_work(system_unbound_wq, &stats_flush_dwork, 2UL*HZ); return 0; +offline_kmem: + memcg_offline_kmem(memcg); +remove_id: + mem_cgroup_id_remove(memcg); + return -ENOMEM; } static void mem_cgroup_css_offline(struct cgroup_subsys_state *css) @@ -5301,9 +5299,6 @@ static void mem_cgroup_css_free(struct cgroup_subsys_state *css) cancel_work_sync(&memcg->high_work); mem_cgroup_remove_from_trees(memcg); free_shrinker_info(memcg); - - /* Need to offline kmem if online_css() fails */ - memcg_offline_kmem(memcg); mem_cgroup_free(memcg); }
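For readers piecing the hunks back together, here is a condensed view of the error unwinding this patch leaves behind in mem_cgroup_css_online(). It only restates the diff above in one place; elided parts are marked, so treat it as a sketch rather than the verbatim function.

	static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
	{
		struct mem_cgroup *memcg = mem_cgroup_from_css(css);

		if (memcg_online_kmem(memcg))
			goto remove_id;

		if (alloc_shrinker_info(memcg))
			goto offline_kmem;

		/* Online state pins memcg ID, memcg ID pins CSS */
		refcount_set(&memcg->id.ref, 1);
		/* ... stats flush work setup elided ... */
		return 0;

	offline_kmem:
		memcg_offline_kmem(memcg);	/* undo memcg_online_kmem() */
	remove_id:
		mem_cgroup_id_remove(memcg);
		return -ENOMEM;
	}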
From patchwork Mon Feb 28 12:21:20 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12763167
From: Muchun Song
Subject: [PATCH v6 10/16] mm: list_lru: allocate list_lru_one only when needed
Date: Mon, 28 Feb 2022 20:21:20 +0800
Message-Id: <20220228122126.37293-11-songmuchun@bytedance.com>
In-Reply-To: <20220228122126.37293-1-songmuchun@bytedance.com>

On one of our servers we found a suspected memory leak: the kmalloc-32 slab cache consumes more than 6GB of memory, while every other kmem_cache consumes less than 2GB. In-depth analysis showed that the kmalloc-32 consumption comes from list_lru_one allocations.

crash> p memcg_nr_cache_ids
memcg_nr_cache_ids = $2 = 24574

memcg_nr_cache_ids is very large, and the memory consumed by each list_lru can be calculated with the following formula:

  num_numa_node * memcg_nr_cache_ids * 32 (kmalloc-32)

There are 4 NUMA nodes in our system, so each list_lru consumes ~3MB.

crash> list super_blocks | wc -l
952

Every mount registers 2 list_lrus, one for the inodes and one for the dentries, and there are 952 super_blocks, so the total comes to 952 * 2 * 3MB (~5.6GB). Yet the number of memory cgroups is below 500, which suggests that more than 12286 containers have been deployed on this machine at some point (we do not know why there were so many; it may be a user bug, or the user may really have wanted that), and memcg_nr_cache_ids was never reduced to a suitable value afterwards. This can waste a lot of memory.
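To make the arithmetic above concrete, here is a back-of-the-envelope check as a small userspace C program (plain C, not kernel code; every number is taken from the crash(8) output quoted in this message):

	#include <stdio.h>

	int main(void)
	{
		double nodes = 4, ids = 24574, objsize = 32;	/* kmalloc-32 */
		double per_lru = nodes * ids * objsize;		/* bytes per list_lru */
		double sbs = 952, lrus_per_sb = 2;		/* inode + dentry lru */

		/* prints ~3.0 MB */
		printf("per list_lru: %.1f MB\n", per_lru / (1 << 20));
		/* prints ~5.6 GB */
		printf("total: %.1f GB\n",
		       sbs * lrus_per_sb * per_lru / (1UL << 30));
		return 0;
	}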
Now that the infrastructure for dynamic list_lru_one allocation is ready, remove the static allocation code to save memory.

Signed-off-by: Muchun Song
---
include/linux/list_lru.h | 7 +-- mm/list_lru.c | 121 ++++++++++++++++++++++++++--------------------- mm/memcontrol.c | 6 ++- 3 files changed, 77 insertions(+), 57 deletions(-) diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h index ab912c49334f..c36db6dc2a65 100644 --- a/include/linux/list_lru.h +++ b/include/linux/list_lru.h @@ -32,14 +32,15 @@ struct list_lru_one { }; struct list_lru_per_memcg { + struct rcu_head rcu; /* array of per cgroup per node lists, indexed by node id */ - struct list_lru_one node[0]; + struct list_lru_one node[]; }; struct list_lru_memcg { struct rcu_head rcu; /* array of per cgroup lists, indexed by memcg_cache_id */ - struct list_lru_per_memcg *mlru[]; + struct list_lru_per_memcg __rcu *mlru[]; }; struct list_lru_node { @@ -77,7 +78,7 @@ int __list_lru_init(struct list_lru *lru, bool memcg_aware, int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru, gfp_t gfp); int memcg_update_all_list_lrus(int num_memcgs); -void memcg_drain_all_list_lrus(int src_idx, struct mem_cgroup *dst_memcg); +void memcg_drain_all_list_lrus(struct mem_cgroup *src, struct mem_cgroup *dst); /** * list_lru_add: add an element to the lru list's tail diff --git a/mm/list_lru.c b/mm/list_lru.c index bffa80527723..fc938d8ff48f 100644 --- a/mm/list_lru.c +++ b/mm/list_lru.c @@ -60,8 +60,12 @@ list_lru_from_memcg_idx(struct list_lru *lru, int nid, int idx) * from relocation (see memcg_update_list_lru). */ mlrus = rcu_dereference_check(lru->mlrus, lockdep_is_held(&nlru->lock)); - if (mlrus && idx >= 0) - return &mlrus->mlru[idx]->node[nid]; + if (mlrus && idx >= 0) { + struct list_lru_per_memcg *mlru; + + mlru = rcu_dereference_check(mlrus->mlru[idx], true); + return mlru ? &mlru->node[nid] : NULL; + } return &nlru->lru; } @@ -188,7 +192,7 @@ unsigned long list_lru_count_one(struct list_lru *lru, rcu_read_lock(); l = list_lru_from_memcg_idx(lru, nid, memcg_cache_id(memcg)); - count = READ_ONCE(l->nr_items); + count = l ?
READ_ONCE(l->nr_items) : 0; rcu_read_unlock(); if (unlikely(count < 0)) @@ -217,8 +221,11 @@ __list_lru_walk_one(struct list_lru *lru, int nid, int memcg_idx, struct list_head *item, *n; unsigned long isolated = 0; - l = list_lru_from_memcg_idx(lru, nid, memcg_idx); restart: + l = list_lru_from_memcg_idx(lru, nid, memcg_idx); + if (!l) + goto out; + list_for_each_safe(item, n, &l->list) { enum lru_status ret; @@ -262,6 +269,7 @@ __list_lru_walk_one(struct list_lru *lru, int nid, int memcg_idx, BUG(); } } +out: return isolated; } @@ -354,20 +362,25 @@ static struct list_lru_per_memcg *memcg_init_list_lru_one(gfp_t gfp) return mlru; } -static int memcg_init_list_lru_range(struct list_lru_memcg *mlrus, - int begin, int end) +static void memcg_list_lru_free(struct list_lru *lru, int src_idx) { - int i; + struct list_lru_memcg *mlrus; + struct list_lru_per_memcg *mlru; - for (i = begin; i < end; i++) { - mlrus->mlru[i] = memcg_init_list_lru_one(GFP_KERNEL); - if (!mlrus->mlru[i]) - goto fail; - } - return 0; -fail: - memcg_destroy_list_lru_range(mlrus, begin, i); - return -ENOMEM; + spin_lock_irq(&lru->lock); + mlrus = rcu_dereference_protected(lru->mlrus, true); + mlru = rcu_dereference_protected(mlrus->mlru[src_idx], true); + rcu_assign_pointer(mlrus->mlru[src_idx], NULL); + spin_unlock_irq(&lru->lock); + + /* + * The __list_lru_walk_one() can walk the list of this node. + * We need kvfree_rcu() here. And the walking of the list + * is under lru->node[nid]->lock, which can serve as a RCU + * read-side critical section. + */ + if (mlru) + kvfree_rcu(mlru, rcu); } static int memcg_init_list_lru(struct list_lru *lru, bool memcg_aware) @@ -381,14 +394,10 @@ static int memcg_init_list_lru(struct list_lru *lru, bool memcg_aware) spin_lock_init(&lru->lock); - mlrus = kvmalloc(struct_size(mlrus, mlru, size), GFP_KERNEL); + mlrus = kvzalloc(struct_size(mlrus, mlru, size), GFP_KERNEL); if (!mlrus) return -ENOMEM; - if (memcg_init_list_lru_range(mlrus, 0, size)) { - kvfree(mlrus); - return -ENOMEM; - } RCU_INIT_POINTER(lru->mlrus, mlrus); return 0; @@ -422,13 +431,9 @@ static int memcg_update_list_lru(struct list_lru *lru, int old_size, int new_siz if (!new) return -ENOMEM; - if (memcg_init_list_lru_range(new, old_size, new_size)) { - kvfree(new); - return -ENOMEM; - } - spin_lock_irq(&lru->lock); memcpy(&new->mlru, &old->mlru, flex_array_size(new, mlru, old_size)); + memset(&new->mlru[old_size], 0, flex_array_size(new, mlru, new_size - old_size)); rcu_assign_pointer(lru->mlrus, new); spin_unlock_irq(&lru->lock); @@ -436,20 +441,6 @@ static int memcg_update_list_lru(struct list_lru *lru, int old_size, int new_siz return 0; } -static void memcg_cancel_update_list_lru(struct list_lru *lru, - int old_size, int new_size) -{ - struct list_lru_memcg *mlrus; - - mlrus = rcu_dereference_protected(lru->mlrus, - lockdep_is_held(&list_lrus_mutex)); - /* - * Do not bother shrinking the array back to the old size, because we - * cannot handle allocation failures here. 
- */ - memcg_destroy_list_lru_range(mlrus, old_size, new_size); -} - int memcg_update_all_list_lrus(int new_size) { int ret = 0; @@ -460,15 +451,10 @@ int memcg_update_all_list_lrus(int new_size) list_for_each_entry(lru, &memcg_list_lrus, list) { ret = memcg_update_list_lru(lru, old_size, new_size); if (ret) - goto fail; + break; } -out: mutex_unlock(&list_lrus_mutex); return ret; -fail: - list_for_each_entry_continue_reverse(lru, &memcg_list_lrus, list) - memcg_cancel_update_list_lru(lru, old_size, new_size); - goto out; } static void memcg_drain_list_lru_node(struct list_lru *lru, int nid, @@ -485,6 +471,8 @@ static void memcg_drain_list_lru_node(struct list_lru *lru, int nid, spin_lock_irq(&nlru->lock); src = list_lru_from_memcg_idx(lru, nid, src_idx); + if (!src) + goto out; dst = list_lru_from_memcg_idx(lru, nid, dst_idx); list_splice_init(&src->list, &dst->list); @@ -494,7 +482,7 @@ static void memcg_drain_list_lru_node(struct list_lru *lru, int nid, set_shrinker_bit(dst_memcg, nid, lru_shrinker_id(lru)); src->nr_items = 0; } - +out: spin_unlock_irq(&nlru->lock); } @@ -505,15 +493,41 @@ static void memcg_drain_list_lru(struct list_lru *lru, for_each_node(i) memcg_drain_list_lru_node(lru, i, src_idx, dst_memcg); + + memcg_list_lru_free(lru, src_idx); } -void memcg_drain_all_list_lrus(int src_idx, struct mem_cgroup *dst_memcg) +void memcg_drain_all_list_lrus(struct mem_cgroup *src, struct mem_cgroup *dst) { + struct cgroup_subsys_state *css; struct list_lru *lru; + int src_idx = src->kmemcg_id; + + /* + * Change kmemcg_id of this cgroup and all its descendants to the + * parent's id, and then move all entries from this cgroup's list_lrus + * to ones of the parent. + * + * After we have finished, all list_lrus corresponding to this cgroup + * are guaranteed to remain empty. So we can safely free this cgroup's + * list lrus in memcg_list_lru_free(). + * + * Changing ->kmemcg_id to the parent can prevent memcg_list_lru_alloc() + * from allocating list lrus for this cgroup after memcg_list_lru_free() + * call. 
+ */ + rcu_read_lock(); + css_for_each_descendant_pre(css, &src->css) { + struct mem_cgroup *memcg; + + memcg = mem_cgroup_from_css(css); + memcg->kmemcg_id = dst->kmemcg_id; + } + rcu_read_unlock(); mutex_lock(&list_lrus_mutex); list_for_each_entry(lru, &memcg_list_lrus, list) - memcg_drain_list_lru(lru, src_idx, dst_memcg); + memcg_drain_list_lru(lru, src_idx, dst); mutex_unlock(&list_lrus_mutex); } @@ -528,7 +542,7 @@ static bool memcg_list_lru_allocated(struct mem_cgroup *memcg, return true; rcu_read_lock(); - allocated = !!rcu_dereference(lru->mlrus)->mlru[idx]; + allocated = !!rcu_access_pointer(rcu_dereference(lru->mlrus)->mlru[idx]); rcu_read_unlock(); return allocated; @@ -576,11 +590,12 @@ int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru, mlrus = rcu_dereference_protected(lru->mlrus, true); while (i--) { int index = table[i].memcg->kmemcg_id; + struct list_lru_per_memcg *mlru = table[i].mlru; - if (mlrus->mlru[index]) - kfree(table[i].mlru); + if (index < 0 || rcu_dereference_protected(mlrus->mlru[index], true)) + kfree(mlru); else - mlrus->mlru[index] = table[i].mlru; + rcu_assign_pointer(mlrus->mlru[index], mlru); } spin_unlock_irqrestore(&lru->lock, flags); diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 5bdf5184681c..c8d39d7fc2e5 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -3655,6 +3655,10 @@ static void memcg_offline_kmem(struct mem_cgroup *memcg) memcg_reparent_objcgs(memcg, parent); + /* + * memcg_drain_all_list_lrus() can change memcg->kmemcg_id. + * Cache it to local @kmemcg_id. + */ kmemcg_id = memcg->kmemcg_id; /* @@ -3663,7 +3667,7 @@ * After we have finished memcg_reparent_objcgs(), all list_lrus * corresponding to this cgroup are guaranteed to remain empty. * The ordering is imposed by list_lru_node->lock taken by * memcg_drain_all_list_lrus(). */ - memcg_drain_all_list_lrus(kmemcg_id, parent); + memcg_drain_all_list_lrus(memcg, parent); memcg_free_cache_id(kmemcg_id); }
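One detail worth calling out from the hunks above: memcg_list_lru_free() relies on the classic RCU pattern of unpublishing a pointer under a lock and deferring the free until concurrent readers are done. In isolation (field and lock names as in the diff; a sketch, not the whole function):

	spin_lock_irq(&lru->lock);
	mlru = rcu_dereference_protected(mlrus->mlru[src_idx], true);
	rcu_assign_pointer(mlrus->mlru[src_idx], NULL);	/* new lookups see NULL */
	spin_unlock_irq(&lru->lock);

	/*
	 * __list_lru_walk_one() may still be walking this node's list under
	 * lru->node[nid]->lock, which serves as the RCU read-side critical
	 * section here, so the actual free must be deferred.
	 */
	if (mlru)
		kvfree_rcu(mlru, rcu);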
From patchwork Mon Feb 28 12:21:21 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12763169
From: Muchun Song
Subject: [PATCH v6 11/16] mm: list_lru: rename memcg_drain_all_list_lrus to memcg_reparent_list_lrus
Date: Mon, 28 Feb 2022 20:21:21 +0800
Message-Id: <20220228122126.37293-12-songmuchun@bytedance.com>
In-Reply-To: <20220228122126.37293-1-songmuchun@bytedance.com>
The purpose of memcg_drain_all_list_lrus() is to reparent list_lrus. It is very similar to memcg_reparent_objcgs(). Rename it to memcg_reparent_list_lrus() so that the name is more consistent with memcg_reparent_objcgs().

Signed-off-by: Muchun Song
---
include/linux/list_lru.h | 2 +- mm/list_lru.c | 24 ++++++++++++------------ mm/memcontrol.c | 6 +++--- 3 files changed, 16 insertions(+), 16 deletions(-) diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h index c36db6dc2a65..4b00fd8cb373 100644 --- a/include/linux/list_lru.h +++ b/include/linux/list_lru.h @@ -78,7 +78,7 @@ int __list_lru_init(struct list_lru *lru, bool memcg_aware, int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru, gfp_t gfp); int memcg_update_all_list_lrus(int num_memcgs); -void memcg_drain_all_list_lrus(struct mem_cgroup *src, struct mem_cgroup *dst); +void memcg_reparent_list_lrus(struct mem_cgroup *memcg, struct mem_cgroup *parent); /** * list_lru_add: add an element to the lru list's tail diff --git a/mm/list_lru.c b/mm/list_lru.c index fc938d8ff48f..488dacd1f8ff 100644 --- a/mm/list_lru.c +++ b/mm/list_lru.c @@ -457,8 +457,8 @@ int memcg_update_all_list_lrus(int new_size) return ret; } -static void memcg_drain_list_lru_node(struct list_lru *lru, int nid, - int src_idx, struct mem_cgroup *dst_memcg) +static void memcg_reparent_list_lru_node(struct list_lru *lru, int nid, + int src_idx, struct mem_cgroup *dst_memcg) { struct list_lru_node *nlru = &lru->node[nid]; int dst_idx = dst_memcg->kmemcg_id; @@ -486,22 +486,22 @@ static void memcg_drain_list_lru_node(struct list_lru *lru, int nid, spin_unlock_irq(&nlru->lock); } -static void memcg_drain_list_lru(struct list_lru *lru, - int src_idx, struct mem_cgroup *dst_memcg) +static void memcg_reparent_list_lru(struct list_lru *lru, + int src_idx, struct mem_cgroup *dst_memcg) { int i; for_each_node(i) - memcg_drain_list_lru_node(lru, i, src_idx, dst_memcg); + memcg_reparent_list_lru_node(lru, i, src_idx, dst_memcg); memcg_list_lru_free(lru, src_idx); } -void memcg_drain_all_list_lrus(struct mem_cgroup *src, struct mem_cgroup *dst) +void memcg_reparent_list_lrus(struct mem_cgroup *memcg, struct mem_cgroup *parent) { struct cgroup_subsys_state *css; struct list_lru *lru; - int src_idx = src->kmemcg_id; + int src_idx = memcg->kmemcg_id; /* * Change kmemcg_id of this cgroup and all its descendants to the @@ -517,17 +517,17 @@ void memcg_drain_all_list_lrus(struct mem_cgroup *src, struct mem_cgroup *dst) * call. */ rcu_read_lock(); - css_for_each_descendant_pre(css, &src->css) { - struct mem_cgroup *memcg; + css_for_each_descendant_pre(css, &memcg->css) { + struct mem_cgroup *child; - memcg = mem_cgroup_from_css(css); - memcg->kmemcg_id = dst->kmemcg_id; + child = mem_cgroup_from_css(css); + child->kmemcg_id = parent->kmemcg_id; } rcu_read_unlock(); mutex_lock(&list_lrus_mutex); list_for_each_entry(lru, &memcg_list_lrus, list) - memcg_drain_list_lru(lru, src_idx, dst); + memcg_reparent_list_lru(lru, src_idx, parent); mutex_unlock(&list_lrus_mutex); } diff --git a/mm/memcontrol.c b/mm/memcontrol.c index c8d39d7fc2e5..87ee5b431c05 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -3656,7 +3656,7 @@ static void memcg_offline_kmem(struct mem_cgroup *memcg) memcg_reparent_objcgs(memcg, parent); /* - * memcg_drain_all_list_lrus() can change memcg->kmemcg_id. + * memcg_reparent_list_lrus() can change memcg->kmemcg_id. * Cache it to local @kmemcg_id.
*/ kmemcg_id = memcg->kmemcg_id; @@ -3665,9 +3665,9 @@ * After we have finished memcg_reparent_objcgs(), all list_lrus * corresponding to this cgroup are guaranteed to remain empty. * The ordering is imposed by list_lru_node->lock taken by - * memcg_drain_all_list_lrus(). + * memcg_reparent_list_lrus(). */ - memcg_drain_all_list_lrus(memcg, parent); + memcg_reparent_list_lrus(memcg, parent); memcg_free_cache_id(kmemcg_id); }
From patchwork Mon Feb 28 12:21:22 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12763170
From: Muchun Song
Subject: [PATCH v6 12/16] mm: list_lru: replace linear array with xarray
Date: Mon, 28 Feb 2022 20:21:22 +0800
Message-Id: <20220228122126.37293-13-songmuchun@bytedance.com>
In-Reply-To: <20220228122126.37293-1-songmuchun@bytedance.com>

If we run 10k containers in the system, the size of the list_lru_memcg->lrus array can be ~96KB per list_lru. When the number of containers decreases, the array is never shrunk; that does not scale. An xarray is a good choice for this case: it can save a lot of memory when there are tens of thousands of containers in the system, and it also lets us delete the array-resizing logic, which simplifies the code.
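Before the diff, a minimal sketch of the xarray calls this patch builds on: lockless lookup with xa_load(), IRQ-safe erase with xa_erase_irq(). The example_* names are hypothetical; the real code is in the hunks below.

	#include <linux/xarray.h>

	/* One entry per kmemcg_id; IRQ-safe internal lock, as in the patch. */
	static DEFINE_XARRAY_FLAGS(example_xa, XA_FLAGS_LOCK_IRQ);

	static struct list_lru_per_memcg *example_lookup(int idx)
	{
		/* Lockless lookup; sparse indices cost no memory. */
		return idx < 0 ? NULL : xa_load(&example_xa, idx);
	}

	static void example_erase(int idx)
	{
		struct list_lru_per_memcg *mlru = xa_erase_irq(&example_xa, idx);

		if (mlru)
			kvfree_rcu(mlru, rcu);	/* defer past concurrent walkers */
	}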
Signed-off-by: Muchun Song Signed-off-by: Muchun Song --- include/linux/list_lru.h | 13 +-- include/linux/memcontrol.h | 23 ------ mm/list_lru.c | 202 +++++++++++++++------------------------------ mm/memcontrol.c | 77 ++--------------- 4 files changed, 73 insertions(+), 242 deletions(-) diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h index 4b00fd8cb373..572c263561ac 100644 --- a/include/linux/list_lru.h +++ b/include/linux/list_lru.h @@ -11,6 +11,7 @@ #include #include #include +#include struct mem_cgroup; @@ -37,12 +38,6 @@ struct list_lru_per_memcg { struct list_lru_one node[]; }; -struct list_lru_memcg { - struct rcu_head rcu; - /* array of per cgroup lists, indexed by memcg_cache_id */ - struct list_lru_per_memcg __rcu *mlru[]; -}; - struct list_lru_node { /* protects all lists on the node, including per cgroup */ spinlock_t lock; @@ -57,10 +52,7 @@ struct list_lru { struct list_head list; int shrinker_id; bool memcg_aware; - /* protects ->mlrus->mlru[i] */ - spinlock_t lock; - /* for cgroup aware lrus points to per cgroup lists, otherwise NULL */ - struct list_lru_memcg __rcu *mlrus; + struct xarray xa; #endif }; @@ -77,7 +69,6 @@ int __list_lru_init(struct list_lru *lru, bool memcg_aware, int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru, gfp_t gfp); -int memcg_update_all_list_lrus(int num_memcgs); void memcg_reparent_list_lrus(struct mem_cgroup *memcg, struct mem_cgroup *parent); /** diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 0d6d838cadfd..1fe44ec5aa03 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -1686,18 +1686,6 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size); extern struct static_key_false memcg_kmem_enabled_key; -extern int memcg_nr_cache_ids; -void memcg_get_cache_ids(void); -void memcg_put_cache_ids(void); - -/* - * Helper macro to loop through all memcg-specific caches. Callers must still - * check if the cache is valid (it is either valid or NULL). - * the slab_mutex must be held when looping through those caches - */ -#define for_each_memcg_cache_index(_idx) \ - for ((_idx) = 0; (_idx) < memcg_nr_cache_ids; (_idx)++) - static inline bool memcg_kmem_enabled(void) { return static_branch_likely(&memcg_kmem_enabled_key); @@ -1754,9 +1742,6 @@ static inline void __memcg_kmem_uncharge_page(struct page *page, int order) { } -#define for_each_memcg_cache_index(_idx) \ - for (; NULL; ) - static inline bool memcg_kmem_enabled(void) { return false; @@ -1767,14 +1752,6 @@ static inline int memcg_cache_id(struct mem_cgroup *memcg) return -1; } -static inline void memcg_get_cache_ids(void) -{ -} - -static inline void memcg_put_cache_ids(void) -{ -} - static inline struct mem_cgroup *mem_cgroup_from_obj(void *p) { return NULL; diff --git a/mm/list_lru.c b/mm/list_lru.c index 488dacd1f8ff..8dc1dabb9f05 100644 --- a/mm/list_lru.c +++ b/mm/list_lru.c @@ -52,21 +52,12 @@ static int lru_shrinker_id(struct list_lru *lru) static inline struct list_lru_one * list_lru_from_memcg_idx(struct list_lru *lru, int nid, int idx) { - struct list_lru_memcg *mlrus; - struct list_lru_node *nlru = &lru->node[nid]; + if (list_lru_memcg_aware(lru) && idx >= 0) { + struct list_lru_per_memcg *mlru = xa_load(&lru->xa, idx); - /* - * Either lock or RCU protects the array of per cgroup lists - * from relocation (see memcg_update_list_lru). 
- */ - mlrus = rcu_dereference_check(lru->mlrus, lockdep_is_held(&nlru->lock)); - if (mlrus && idx >= 0) { - struct list_lru_per_memcg *mlru; - - mlru = rcu_dereference_check(mlrus->mlru[idx], true); return mlru ? &mlru->node[nid] : NULL; } - return &nlru->lru; + return &lru->node[nid].lru; } static inline struct list_lru_one * @@ -77,7 +68,7 @@ list_lru_from_kmem(struct list_lru *lru, int nid, void *ptr, struct list_lru_one *l = &nlru->lru; struct mem_cgroup *memcg = NULL; - if (!lru->mlrus) + if (!list_lru_memcg_aware(lru)) goto out; memcg = mem_cgroup_from_obj(ptr); @@ -309,16 +300,20 @@ unsigned long list_lru_walk_node(struct list_lru *lru, int nid, unsigned long *nr_to_walk) { long isolated = 0; - int memcg_idx; isolated += list_lru_walk_one(lru, nid, NULL, isolate, cb_arg, nr_to_walk); + +#ifdef CONFIG_MEMCG_KMEM if (*nr_to_walk > 0 && list_lru_memcg_aware(lru)) { - for_each_memcg_cache_index(memcg_idx) { + struct list_lru_per_memcg *mlru; + unsigned long index; + + xa_for_each(&lru->xa, index, mlru) { struct list_lru_node *nlru = &lru->node[nid]; spin_lock(&nlru->lock); - isolated += __list_lru_walk_one(lru, nid, memcg_idx, + isolated += __list_lru_walk_one(lru, nid, index, isolate, cb_arg, nr_to_walk); spin_unlock(&nlru->lock); @@ -327,6 +322,8 @@ unsigned long list_lru_walk_node(struct list_lru *lru, int nid, break; } } +#endif + return isolated; } EXPORT_SYMBOL_GPL(list_lru_walk_node); @@ -338,15 +335,6 @@ static void init_one_lru(struct list_lru_one *l) } #ifdef CONFIG_MEMCG_KMEM -static void memcg_destroy_list_lru_range(struct list_lru_memcg *mlrus, - int begin, int end) -{ - int i; - - for (i = begin; i < end; i++) - kfree(mlrus->mlru[i]); -} - static struct list_lru_per_memcg *memcg_init_list_lru_one(gfp_t gfp) { int nid; @@ -364,14 +352,7 @@ static struct list_lru_per_memcg *memcg_init_list_lru_one(gfp_t gfp) static void memcg_list_lru_free(struct list_lru *lru, int src_idx) { - struct list_lru_memcg *mlrus; - struct list_lru_per_memcg *mlru; - - spin_lock_irq(&lru->lock); - mlrus = rcu_dereference_protected(lru->mlrus, true); - mlru = rcu_dereference_protected(mlrus->mlru[src_idx], true); - rcu_assign_pointer(mlrus->mlru[src_idx], NULL); - spin_unlock_irq(&lru->lock); + struct list_lru_per_memcg *mlru = xa_erase_irq(&lru->xa, src_idx); /* * The __list_lru_walk_one() can walk the list of this node. @@ -383,78 +364,27 @@ static void memcg_list_lru_free(struct list_lru *lru, int src_idx) kvfree_rcu(mlru, rcu); } -static int memcg_init_list_lru(struct list_lru *lru, bool memcg_aware) +static inline void memcg_init_list_lru(struct list_lru *lru, bool memcg_aware) { - struct list_lru_memcg *mlrus; - int size = memcg_nr_cache_ids; - + if (memcg_aware) + xa_init_flags(&lru->xa, XA_FLAGS_LOCK_IRQ); lru->memcg_aware = memcg_aware; - if (!memcg_aware) - return 0; - - spin_lock_init(&lru->lock); - - mlrus = kvzalloc(struct_size(mlrus, mlru, size), GFP_KERNEL); - if (!mlrus) - return -ENOMEM; - - RCU_INIT_POINTER(lru->mlrus, mlrus); - - return 0; } static void memcg_destroy_list_lru(struct list_lru *lru) { - struct list_lru_memcg *mlrus; + XA_STATE(xas, &lru->xa, 0); + struct list_lru_per_memcg *mlru; if (!list_lru_memcg_aware(lru)) return; - /* - * This is called when shrinker has already been unregistered, - * and nobody can use it. So, there is no need to use kvfree_rcu(). 
- */ - mlrus = rcu_dereference_protected(lru->mlrus, true); - memcg_destroy_list_lru_range(mlrus, 0, memcg_nr_cache_ids); - kvfree(mlrus); -} - -static int memcg_update_list_lru(struct list_lru *lru, int old_size, int new_size) -{ - struct list_lru_memcg *old, *new; - - BUG_ON(old_size > new_size); - - old = rcu_dereference_protected(lru->mlrus, - lockdep_is_held(&list_lrus_mutex)); - new = kvmalloc(struct_size(new, mlru, new_size), GFP_KERNEL); - if (!new) - return -ENOMEM; - - spin_lock_irq(&lru->lock); - memcpy(&new->mlru, &old->mlru, flex_array_size(new, mlru, old_size)); - memset(&new->mlru[old_size], 0, flex_array_size(new, mlru, new_size - old_size)); - rcu_assign_pointer(lru->mlrus, new); - spin_unlock_irq(&lru->lock); - - kvfree_rcu(old, rcu); - return 0; -} - -int memcg_update_all_list_lrus(int new_size) -{ - int ret = 0; - struct list_lru *lru; - int old_size = memcg_nr_cache_ids; - - mutex_lock(&list_lrus_mutex); - list_for_each_entry(lru, &memcg_list_lrus, list) { - ret = memcg_update_list_lru(lru, old_size, new_size); - if (ret) - break; + xas_lock_irq(&xas); + xas_for_each(&xas, mlru, ULONG_MAX) { + kfree(mlru); + xas_store(&xas, NULL); } - mutex_unlock(&list_lrus_mutex); - return ret; + xas_unlock_irq(&xas); } static void memcg_reparent_list_lru_node(struct list_lru *lru, int nid, @@ -521,7 +451,7 @@ void memcg_reparent_list_lrus(struct mem_cgroup *memcg, struct mem_cgroup *paren struct mem_cgroup *child; child = mem_cgroup_from_css(css); - child->kmemcg_id = parent->kmemcg_id; + WRITE_ONCE(child->kmemcg_id, parent->kmemcg_id); } rcu_read_unlock(); @@ -531,21 +461,12 @@ void memcg_reparent_list_lrus(struct mem_cgroup *memcg, struct mem_cgroup *paren mutex_unlock(&list_lrus_mutex); } -static bool memcg_list_lru_allocated(struct mem_cgroup *memcg, - struct list_lru *lru) +static inline bool memcg_list_lru_allocated(struct mem_cgroup *memcg, + struct list_lru *lru) { - bool allocated; - int idx; + int idx = memcg->kmemcg_id; - idx = memcg->kmemcg_id; - if (unlikely(idx < 0)) - return true; - - rcu_read_lock(); - allocated = !!rcu_access_pointer(rcu_dereference(lru->mlrus)->mlru[idx]); - rcu_read_unlock(); - - return allocated; + return idx < 0 || xa_load(&lru->xa, idx); } int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru, @@ -558,6 +479,7 @@ int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru, struct list_lru_per_memcg *mlru; struct mem_cgroup *memcg; } *table; + XA_STATE(xas, &lru->xa, 0); if (!list_lru_memcg_aware(lru) || memcg_list_lru_allocated(memcg, lru)) return 0; @@ -586,27 +508,48 @@ int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru, } } - spin_lock_irqsave(&lru->lock, flags); - mlrus = rcu_dereference_protected(lru->mlrus, true); + xas_lock_irqsave(&xas, flags); while (i--) { - int index = table[i].memcg->kmemcg_id; + int index = READ_ONCE(table[i].memcg->kmemcg_id); struct list_lru_per_memcg *mlru = table[i].mlru; - if (index < 0 || rcu_dereference_protected(mlrus->mlru[index], true)) + xas_set(&xas, index); +retry: + if (unlikely(index < 0 || xas_error(&xas) || xas_load(&xas))) { kfree(mlru); - else - rcu_assign_pointer(mlrus->mlru[index], mlru); + } else { + xas_store(&xas, mlru); + if (xas_error(&xas) == -ENOMEM) { + xas_unlock_irqrestore(&xas, flags); + if (xas_nomem(&xas, gfp)) + xas_set_err(&xas, 0); + xas_lock_irqsave(&xas, flags); + /* + * The xas lock has been released, this memcg + * can be reparented before us. So reload + * memcg id. 
More details see the comments + * in memcg_reparent_list_lrus(). + */ + index = READ_ONCE(table[i].memcg->kmemcg_id); + if (index < 0) + xas_set_err(&xas, 0); + else if (!xas_error(&xas) && index != xas.xa_index) + xas_set(&xas, index); + goto retry; + } + } } - spin_unlock_irqrestore(&lru->lock, flags); - + /* xas_nomem() is used to free memory instead of memory allocation. */ + if (xas.xa_alloc) + xas_nomem(&xas, gfp); + xas_unlock_irqrestore(&xas, flags); kfree(table); - return 0; + return xas_error(&xas); } #else -static int memcg_init_list_lru(struct list_lru *lru, bool memcg_aware) +static inline void memcg_init_list_lru(struct list_lru *lru, bool memcg_aware) { - return 0; } static void memcg_destroy_list_lru(struct list_lru *lru) @@ -618,7 +561,6 @@ int __list_lru_init(struct list_lru *lru, bool memcg_aware, struct lock_class_key *key, struct shrinker *shrinker) { int i; - int err = -ENOMEM; #ifdef CONFIG_MEMCG_KMEM if (shrinker) @@ -626,11 +568,10 @@ int __list_lru_init(struct list_lru *lru, bool memcg_aware, else lru->shrinker_id = -1; #endif - memcg_get_cache_ids(); lru->node = kcalloc(nr_node_ids, sizeof(*lru->node), GFP_KERNEL); if (!lru->node) - goto out; + return -ENOMEM; for_each_node(i) { spin_lock_init(&lru->node[i].lock); @@ -639,18 +580,10 @@ int __list_lru_init(struct list_lru *lru, bool memcg_aware, init_one_lru(&lru->node[i].lru); } - err = memcg_init_list_lru(lru, memcg_aware); - if (err) { - kfree(lru->node); - /* Do this so a list_lru_destroy() doesn't crash: */ - lru->node = NULL; - goto out; - } - + memcg_init_list_lru(lru, memcg_aware); list_lru_register(lru); -out: - memcg_put_cache_ids(); - return err; + + return 0; } EXPORT_SYMBOL_GPL(__list_lru_init); @@ -660,8 +593,6 @@ void list_lru_destroy(struct list_lru *lru) if (!lru->node) return; - memcg_get_cache_ids(); - list_lru_unregister(lru); memcg_destroy_list_lru(lru); @@ -671,6 +602,5 @@ void list_lru_destroy(struct list_lru *lru) #ifdef CONFIG_MEMCG_KMEM lru->shrinker_id = -1; #endif - memcg_put_cache_ids(); } EXPORT_SYMBOL_GPL(list_lru_destroy); diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 87ee5b431c05..361ac289d8e9 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -351,42 +351,17 @@ static void memcg_reparent_objcgs(struct mem_cgroup *memcg, * This will be used as a shrinker list's index. * The main reason for not using cgroup id for this: * this works better in sparse environments, where we have a lot of memcgs, - * but only a few kmem-limited. Or also, if we have, for instance, 200 - * memcgs, and none but the 200th is kmem-limited, we'd have to have a - * 200 entry array for that. - * - * The current size of the caches array is stored in memcg_nr_cache_ids. It - * will double each time we have to increase it. + * but only a few kmem-limited. */ static DEFINE_IDA(memcg_cache_ida); -int memcg_nr_cache_ids; - -/* Protects memcg_nr_cache_ids */ -static DECLARE_RWSEM(memcg_cache_ids_sem); - -void memcg_get_cache_ids(void) -{ - down_read(&memcg_cache_ids_sem); -} - -void memcg_put_cache_ids(void) -{ - up_read(&memcg_cache_ids_sem); -} /* - * MIN_SIZE is different than 1, because we would like to avoid going through - * the alloc/free process all the time. In a small machine, 4 kmem-limited - * cgroups is a reasonable guess. In the future, it could be a parameter or - * tunable, but that is strictly not necessary. - * * MAX_SIZE should be as large as the number of cgrp_ids. 
Ideally, we could get * this constant directly from cgroup, but it is understandable that this is * better kept as an internal representation in cgroup.c. In any case, the * cgrp_id space is not getting any smaller, and we don't have to necessarily * increase ours as well if it increases. */ -#define MEMCG_CACHES_MIN_SIZE 4 #define MEMCG_CACHES_MAX_SIZE MEM_CGROUP_ID_MAX /* @@ -2922,49 +2897,6 @@ __always_inline struct obj_cgroup *get_obj_cgroup_from_current(void) return objcg; } -static int memcg_alloc_cache_id(void) -{ - int id, size; - int err; - - id = ida_simple_get(&memcg_cache_ida, - 0, MEMCG_CACHES_MAX_SIZE, GFP_KERNEL); - if (id < 0) - return id; - - if (id < memcg_nr_cache_ids) - return id; - - /* - * There's no space for the new id in memcg_caches arrays, - * so we have to grow them. - */ - down_write(&memcg_cache_ids_sem); - - size = 2 * (id + 1); - if (size < MEMCG_CACHES_MIN_SIZE) - size = MEMCG_CACHES_MIN_SIZE; - else if (size > MEMCG_CACHES_MAX_SIZE) - size = MEMCG_CACHES_MAX_SIZE; - - err = memcg_update_all_list_lrus(size); - if (!err) - memcg_nr_cache_ids = size; - - up_write(&memcg_cache_ids_sem); - - if (err) { - ida_simple_remove(&memcg_cache_ida, id); - return err; - } - return id; -} - -static void memcg_free_cache_id(int id) -{ - ida_simple_remove(&memcg_cache_ida, id); -} - /* * obj_cgroup_uncharge_pages: uncharge a number of kernel pages from a objcg * @objcg: object cgroup to uncharge @@ -3619,13 +3551,14 @@ static int memcg_online_kmem(struct mem_cgroup *memcg) if (unlikely(mem_cgroup_is_root(memcg))) return 0; - memcg_id = memcg_alloc_cache_id(); + memcg_id = ida_alloc_max(&memcg_cache_ida, MEMCG_CACHES_MAX_SIZE - 1, + GFP_KERNEL); if (memcg_id < 0) return memcg_id; objcg = obj_cgroup_alloc(); if (!objcg) { - memcg_free_cache_id(memcg_id); + ida_free(&memcg_cache_ida, memcg_id); return -ENOMEM; } objcg->memcg = memcg; @@ -3669,7 +3602,7 @@ static void memcg_offline_kmem(struct mem_cgroup *memcg) */ memcg_reparent_list_lrus(memcg, parent); - memcg_free_cache_id(kmemcg_id); + ida_free(&memcg_cache_ida, kmemcg_id); } #else static int memcg_online_kmem(struct mem_cgroup *memcg)
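A note on the allocator swap in the hunk above: ida_simple_get() takes an exclusive @end, whereas ida_alloc_max() takes an inclusive @max, which is why the new call passes MEMCG_CACHES_MAX_SIZE - 1. A sketch of the equivalence:

	/* These two allocate from the same range [0, MEMCG_CACHES_MAX_SIZE): */
	id = ida_simple_get(&memcg_cache_ida, 0, MEMCG_CACHES_MAX_SIZE,
			    GFP_KERNEL);		/* @end is exclusive */
	id = ida_alloc_max(&memcg_cache_ida, MEMCG_CACHES_MAX_SIZE - 1,
			   GFP_KERNEL);			/* @max is inclusive */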
From patchwork Mon Feb 28 12:21:23 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12763171
From: Muchun Song
Subject: [PATCH v6 13/16] mm: memcontrol: reuse memory cgroup ID for kmem ID
Date: Mon, 28 Feb 2022 20:21:23 +0800
Message-Id: <20220228122126.37293-14-songmuchun@bytedance.com>
In-Reply-To: <20220228122126.37293-1-songmuchun@bytedance.com>
Two ID allocators are currently used by the memory cgroup code: one for the kmem ID and one for the memory cgroup ID. The maximum of both is 64Ki, and either one can limit the total number of memory cgroups. We can reuse the memory cgroup ID for the kmem ID to simplify the code.

Signed-off-by: Muchun Song
---
mm/memcontrol.c | 39 +++------------------------------------ 1 file changed, 3 insertions(+), 36 deletions(-) diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 361ac289d8e9..809dfa4b2abc 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -348,23 +348,6 @@ static void memcg_reparent_objcgs(struct mem_cgroup *memcg, } /* - * This will be used as a shrinker list's index. - * The main reason for not using cgroup id for this: - * this works better in sparse environments, where we have a lot of memcgs, - * but only a few kmem-limited. - */ -static DEFINE_IDA(memcg_cache_ida); - -/* - * MAX_SIZE should be as large as the number of cgrp_ids. Ideally, we could get - * this constant directly from cgroup, but it is understandable that this is - * better kept as an internal representation in cgroup.c. In any case, the - * cgrp_id space is not getting any smaller, and we don't have to necessarily - * increase ours as well if it increases. - */ -#define MEMCG_CACHES_MAX_SIZE MEM_CGROUP_ID_MAX - -/* * A lot of the calls to the cache allocation functions are expected to be * inlined by the compiler. Since the calls to memcg_slab_pre_alloc_hook() are * conditional to this static branch, we'll have to allow modules that does @@ -3543,7 +3526,6 @@ static u64 mem_cgroup_read_u64(struct cgroup_subsys_state *css, static int memcg_online_kmem(struct mem_cgroup *memcg) { struct obj_cgroup *objcg; - int memcg_id; if (cgroup_memory_nokmem) return 0; @@ -3551,22 +3533,16 @@ static int memcg_online_kmem(struct mem_cgroup *memcg) if (unlikely(mem_cgroup_is_root(memcg))) return 0; - memcg_id = ida_alloc_max(&memcg_cache_ida, MEMCG_CACHES_MAX_SIZE - 1, - GFP_KERNEL); - if (memcg_id < 0) - return memcg_id; - objcg = obj_cgroup_alloc(); - if (!objcg) { - ida_free(&memcg_cache_ida, memcg_id); + if (!objcg) return -ENOMEM; - } + objcg->memcg = memcg; rcu_assign_pointer(memcg->objcg, objcg); static_branch_enable(&memcg_kmem_enabled_key); - memcg->kmemcg_id = memcg_id; + memcg->kmemcg_id = memcg->id.id; return 0; } @@ -3574,7 +3550,6 @@ static int memcg_online_kmem(struct mem_cgroup *memcg) static void memcg_offline_kmem(struct mem_cgroup *memcg) { struct mem_cgroup *parent; - int kmemcg_id; if (cgroup_memory_nokmem) return; @@ -3589,20 +3564,12 @@ static void memcg_offline_kmem(struct mem_cgroup *memcg) memcg_reparent_objcgs(memcg, parent); /* - * memcg_reparent_list_lrus() can change memcg->kmemcg_id. - * Cache it to local @kmemcg_id. - */ - kmemcg_id = memcg->kmemcg_id; - - /* * After we have finished memcg_reparent_objcgs(), all list_lrus * corresponding to this cgroup are guaranteed to remain empty. * The ordering is imposed by list_lru_node->lock taken by * memcg_reparent_list_lrus().
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/memcontrol.c | 39 +++------------------------------------
 1 file changed, 3 insertions(+), 36 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 361ac289d8e9..809dfa4b2abc 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -348,23 +348,6 @@ static void memcg_reparent_objcgs(struct mem_cgroup *memcg,
 }
 
 /*
- * This will be used as a shrinker list's index.
- * The main reason for not using cgroup id for this:
- * this works better in sparse environments, where we have a lot of memcgs,
- * but only a few kmem-limited.
- */
-static DEFINE_IDA(memcg_cache_ida);
-
-/*
- * MAX_SIZE should be as large as the number of cgrp_ids. Ideally, we could get
- * this constant directly from cgroup, but it is understandable that this is
- * better kept as an internal representation in cgroup.c. In any case, the
- * cgrp_id space is not getting any smaller, and we don't have to necessarily
- * increase ours as well if it increases.
- */
-#define MEMCG_CACHES_MAX_SIZE MEM_CGROUP_ID_MAX
-
-/*
  * A lot of the calls to the cache allocation functions are expected to be
  * inlined by the compiler. Since the calls to memcg_slab_pre_alloc_hook() are
  * conditional to this static branch, we'll have to allow modules that does
@@ -3543,7 +3526,6 @@ static u64 mem_cgroup_read_u64(struct cgroup_subsys_state *css,
 static int memcg_online_kmem(struct mem_cgroup *memcg)
 {
 	struct obj_cgroup *objcg;
-	int memcg_id;
 
 	if (cgroup_memory_nokmem)
 		return 0;
@@ -3551,22 +3533,16 @@ static int memcg_online_kmem(struct mem_cgroup *memcg)
 	if (unlikely(mem_cgroup_is_root(memcg)))
 		return 0;
 
-	memcg_id = ida_alloc_max(&memcg_cache_ida, MEMCG_CACHES_MAX_SIZE - 1,
-				 GFP_KERNEL);
-	if (memcg_id < 0)
-		return memcg_id;
-
 	objcg = obj_cgroup_alloc();
-	if (!objcg) {
-		ida_free(&memcg_cache_ida, memcg_id);
+	if (!objcg)
 		return -ENOMEM;
-	}
+
 	objcg->memcg = memcg;
 	rcu_assign_pointer(memcg->objcg, objcg);
 
 	static_branch_enable(&memcg_kmem_enabled_key);
 
-	memcg->kmemcg_id = memcg_id;
+	memcg->kmemcg_id = memcg->id.id;
 
 	return 0;
 }
@@ -3574,7 +3550,6 @@ static int memcg_online_kmem(struct mem_cgroup *memcg)
 static void memcg_offline_kmem(struct mem_cgroup *memcg)
 {
 	struct mem_cgroup *parent;
-	int kmemcg_id;
 
 	if (cgroup_memory_nokmem)
 		return;
@@ -3589,20 +3564,12 @@ static void memcg_offline_kmem(struct mem_cgroup *memcg)
 	memcg_reparent_objcgs(memcg, parent);
 
 	/*
-	 * memcg_reparent_list_lrus() can change memcg->kmemcg_id.
-	 * Cache it to local @kmemcg_id.
-	 */
-	kmemcg_id = memcg->kmemcg_id;
-
-	/*
 	 * After we have finished memcg_reparent_objcgs(), all list_lrus
 	 * corresponding to this cgroup are guaranteed to remain empty.
 	 * The ordering is imposed by list_lru_node->lock taken by
 	 * memcg_reparent_list_lrus().
 	 */
 	memcg_reparent_list_lrus(memcg, parent);
-
-	ida_free(&memcg_cache_ida, kmemcg_id);
 }
 #else
 static int memcg_online_kmem(struct mem_cgroup *memcg)

From patchwork Mon Feb 28 12:21:24 2022
X-Patchwork-Id: 12763172
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v6 14/16] mm: memcontrol: fix cannot alloc the maximum memcg ID
Date: Mon, 28 Feb 2022 20:21:24 +0800
Message-Id: <20220228122126.37293-15-songmuchun@bytedance.com>
In-Reply-To: <20220228122126.37293-1-songmuchun@bytedance.com>
References: <20220228122126.37293-1-songmuchun@bytedance.com>
idr_alloc() treats @end as exclusive, so the current code can only
allocate memcg IDs up to 65534 rather than the intended maximum of
65535 (MEM_CGROUP_ID_MAX). Fix this off-by-one by passing
MEM_CGROUP_ID_MAX + 1 as the upper bound.
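A minimal userspace model of idr_alloc()'s half-open range convention
makes the off-by-one concrete. This toy allocator is hypothetical, not
the kernel code; it only mimics the fact that idr_alloc(idr, ptr,
start, end, gfp) hands out IDs from [start, end), never @end itself:

#include <stdio.h>

static int toy_idr_alloc(int *next, int start, int end)
{
	if (*next < start)
		*next = start;
	if (*next >= end)	/* @end is exclusive, as in idr_alloc() */
		return -1;
	return (*next)++;
}

int main(void)
{
	int next = 65534, id;

	/* end == 65535: ID 65535 can never be returned -- the bug. */
	id = toy_idr_alloc(&next, 1, 65535);
	printf("end=65535: got %d, ", id);		/* 65534 */
	id = toy_idr_alloc(&next, 1, 65535);
	printf("then %d (exhausted)\n", id);		/* -1 */

	/* end == 65535 + 1: ID 65535 becomes allocatable -- the fix. */
	id = toy_idr_alloc(&next, 1, 65535 + 1);
	printf("end=65536: got %d\n", id);		/* 65535 */
	return 0;
}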
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/memcontrol.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 809dfa4b2abc..cbe6f9bb37bb 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5029,8 +5029,7 @@ static struct mem_cgroup *mem_cgroup_alloc(void)
 		return ERR_PTR(error);
 
 	memcg->id.id = idr_alloc(&mem_cgroup_idr, NULL,
-				 1, MEM_CGROUP_ID_MAX,
-				 GFP_KERNEL);
+				 1, MEM_CGROUP_ID_MAX + 1, GFP_KERNEL);
 	if (memcg->id.id < 0) {
 		error = memcg->id.id;
 		goto fail;

From patchwork Mon Feb 28 12:21:25 2022
X-Patchwork-Id: 12763173
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v6 15/16] mm: list_lru: rename list_lru_per_memcg to list_lru_memcg
Date: Mon, 28 Feb 2022 20:21:25 +0800
Message-Id: <20220228122126.37293-16-songmuchun@bytedance.com>
In-Reply-To: <20220228122126.37293-1-songmuchun@bytedance.com>
References: <20220228122126.37293-1-songmuchun@bytedance.com>
The name list_lru_memcg was previously taken by another structure, and
a previous commit in this series freed it up. Rename
list_lru_per_memcg to list_lru_memcg, since the shorter name is just
as descriptive.
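One detail worth noting in memcg_init_list_lru_one() below: the renamed
structure ends in a flexible array member with one list_lru_one per
node, so a single kmalloc() covers the header plus all per-node lists,
and struct_size(mlru, node, nr_node_ids) computes that size with
overflow checking. A standalone userspace analogue of the same layout
(toy names, sketch only, not the kernel code):

#include <stdio.h>
#include <stdlib.h>

struct toy_lru_memcg {
	long refs;                          /* stand-in for rcu_head */
	struct { long nr_items; } node[];   /* one entry per node */
};

static struct toy_lru_memcg *toy_init(size_t nr_nodes)
{
	/* one chunk: header plus nr_nodes trailing slots; the kernel
	 * uses struct_size() instead of this open-coded expression */
	return calloc(1, sizeof(struct toy_lru_memcg) +
			 nr_nodes * sizeof(((struct toy_lru_memcg *)0)->node[0]));
}

int main(void)
{
	struct toy_lru_memcg *mlru = toy_init(4);

	if (mlru)
		printf("one allocation: header + 4 per-node lists\n");
	free(mlru);
	return 0;
}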
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/list_lru.h |  2 +-
 mm/list_lru.c            | 18 +++++++++---------
 2 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
index 572c263561ac..b35968ee9fb5 100644
--- a/include/linux/list_lru.h
+++ b/include/linux/list_lru.h
@@ -32,7 +32,7 @@ struct list_lru_one {
 	long nr_items;
 };
 
-struct list_lru_per_memcg {
+struct list_lru_memcg {
 	struct rcu_head rcu;
 	/* array of per cgroup per node lists, indexed by node id */
 	struct list_lru_one node[];
diff --git a/mm/list_lru.c b/mm/list_lru.c
index 8dc1dabb9f05..38f711e9b56e 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -53,7 +53,7 @@ static inline struct list_lru_one *
 list_lru_from_memcg_idx(struct list_lru *lru, int nid, int idx)
 {
 	if (list_lru_memcg_aware(lru) && idx >= 0) {
-		struct list_lru_per_memcg *mlru = xa_load(&lru->xa, idx);
+		struct list_lru_memcg *mlru = xa_load(&lru->xa, idx);
 
 		return mlru ? &mlru->node[nid] : NULL;
 	}
@@ -306,7 +306,7 @@ unsigned long list_lru_walk_node(struct list_lru *lru, int nid,
 
 #ifdef CONFIG_MEMCG_KMEM
 	if (*nr_to_walk > 0 && list_lru_memcg_aware(lru)) {
-		struct list_lru_per_memcg *mlru;
+		struct list_lru_memcg *mlru;
 		unsigned long index;
 
 		xa_for_each(&lru->xa, index, mlru) {
@@ -335,10 +335,10 @@ static void init_one_lru(struct list_lru_one *l)
 }
 
 #ifdef CONFIG_MEMCG_KMEM
-static struct list_lru_per_memcg *memcg_init_list_lru_one(gfp_t gfp)
+static struct list_lru_memcg *memcg_init_list_lru_one(gfp_t gfp)
 {
 	int nid;
-	struct list_lru_per_memcg *mlru;
+	struct list_lru_memcg *mlru;
 
 	mlru = kmalloc(struct_size(mlru, node, nr_node_ids), gfp);
 	if (!mlru)
@@ -352,7 +352,7 @@ static struct list_lru_per_memcg *memcg_init_list_lru_one(gfp_t gfp)
 
 static void memcg_list_lru_free(struct list_lru *lru, int src_idx)
 {
-	struct list_lru_per_memcg *mlru = xa_erase_irq(&lru->xa, src_idx);
+	struct list_lru_memcg *mlru = xa_erase_irq(&lru->xa, src_idx);
 
 	/*
 	 * The __list_lru_walk_one() can walk the list of this node.
@@ -374,7 +374,7 @@ static inline void memcg_init_list_lru(struct list_lru *lru, bool memcg_aware)
 static void memcg_destroy_list_lru(struct list_lru *lru)
 {
 	XA_STATE(xas, &lru->xa, 0);
-	struct list_lru_per_memcg *mlru;
+	struct list_lru_memcg *mlru;
 
 	if (!list_lru_memcg_aware(lru))
 		return;
@@ -476,7 +476,7 @@ int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru,
 	unsigned long flags;
 	struct list_lru_memcg *mlrus;
 	struct list_lru_memcg_table {
-		struct list_lru_per_memcg *mlru;
+		struct list_lru_memcg *mlru;
 		struct mem_cgroup *memcg;
 	} *table;
 	XA_STATE(xas, &lru->xa, 0);
@@ -492,7 +492,7 @@ int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru,
 	/*
 	 * Because the list_lru can be reparented to the parent cgroup's
 	 * list_lru, we should make sure that this cgroup and all its
-	 * ancestors have allocated list_lru_per_memcg.
+	 * ancestors have allocated list_lru_memcg.
 	 */
 	for (i = 0; memcg; memcg = parent_mem_cgroup(memcg), i++) {
 		if (memcg_list_lru_allocated(memcg, lru))
@@ -511,7 +511,7 @@ int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru,
 	xas_lock_irqsave(&xas, flags);
 	while (i--) {
 		int index = READ_ONCE(table[i].memcg->kmemcg_id);
-		struct list_lru_per_memcg *mlru = table[i].mlru;
+		struct list_lru_memcg *mlru = table[i].mlru;
 
 		xas_set(&xas, index);
 retry:
From patchwork Mon Feb 28 12:21:26 2022
X-Patchwork-Id: 12763188
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v6 16/16] mm: memcontrol: rename memcg_cache_id to memcg_kmem_id
Date: Mon, 28 Feb 2022 20:21:26 +0800
Message-Id: <20220228122126.37293-17-songmuchun@bytedance.com>
In-Reply-To: <20220228122126.37293-1-songmuchun@bytedance.com>
References: <20220228122126.37293-1-songmuchun@bytedance.com>
memcg_cache_id() was introduced by commit 2633d7a02823 ("slab/slub:
consider a memcg parameter in kmem_create_cache") to index into the
kmem_cache->memcg_params->memcg_caches array. That array was removed
by commit 9855609bde03 ("mm: memcg/slab: use a single set of
kmem_caches for all accounted allocations"), so the name no longer
needs to refer to caches. Rename it to memcg_kmem_id() to reflect that
it returns the kmem ID.
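For orientation, a hedged sketch of a typical call site after the
rename (shape taken from the list_lru_count_one() hunk below,
surrounding declarations elided; not new code in the patch): the
helper yields the kmem ID, or -1 when there is no memcg, and a
negative index makes list_lru_from_memcg_idx() skip the per-memcg
lookup.

	long count;
	struct list_lru_one *l;

	rcu_read_lock();
	/* memcg_kmem_id() returns memcg->kmemcg_id, or -1 without a memcg */
	l = list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
	count = l ? READ_ONCE(l->nr_items) : 0;
	rcu_read_unlock();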
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/memcontrol.h | 4 ++--
 mm/list_lru.c              | 8 ++++----
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 1fe44ec5aa03..fe44e4d36bd7 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1709,7 +1709,7 @@ static inline void memcg_kmem_uncharge_page(struct page *page, int order)
  * A helper for accessing memcg's kmem_id, used for getting
  * corresponding LRU lists.
  */
-static inline int memcg_cache_id(struct mem_cgroup *memcg)
+static inline int memcg_kmem_id(struct mem_cgroup *memcg)
 {
 	return memcg ? memcg->kmemcg_id : -1;
 }
@@ -1747,7 +1747,7 @@ static inline bool memcg_kmem_enabled(void)
 	return false;
 }
 
-static inline int memcg_cache_id(struct mem_cgroup *memcg)
+static inline int memcg_kmem_id(struct mem_cgroup *memcg)
 {
 	return -1;
 }
diff --git a/mm/list_lru.c b/mm/list_lru.c
index 38f711e9b56e..8b402373e965 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -75,7 +75,7 @@ list_lru_from_kmem(struct list_lru *lru, int nid, void *ptr,
 	if (!memcg)
 		goto out;
 
-	l = list_lru_from_memcg_idx(lru, nid, memcg_cache_id(memcg));
+	l = list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
 out:
 	if (memcg_ptr)
 		*memcg_ptr = memcg;
@@ -182,7 +182,7 @@ unsigned long list_lru_count_one(struct list_lru *lru,
 	long count;
 
 	rcu_read_lock();
-	l = list_lru_from_memcg_idx(lru, nid, memcg_cache_id(memcg));
+	l = list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
 	count = l ? READ_ONCE(l->nr_items) : 0;
 	rcu_read_unlock();
 
@@ -273,7 +273,7 @@ list_lru_walk_one(struct list_lru *lru, int nid, struct mem_cgroup *memcg,
 	unsigned long ret;
 
 	spin_lock(&nlru->lock);
-	ret = __list_lru_walk_one(lru, nid, memcg_cache_id(memcg), isolate,
+	ret = __list_lru_walk_one(lru, nid, memcg_kmem_id(memcg), isolate,
 				  cb_arg, nr_to_walk);
 	spin_unlock(&nlru->lock);
 	return ret;
@@ -289,7 +289,7 @@ list_lru_walk_one_irq(struct list_lru *lru, int nid, struct mem_cgroup *memcg,
 	unsigned long ret;
 
 	spin_lock_irq(&nlru->lock);
-	ret = __list_lru_walk_one(lru, nid, memcg_cache_id(memcg), isolate,
+	ret = __list_lru_walk_one(lru, nid, memcg_kmem_id(memcg), isolate,
 				  cb_arg, nr_to_walk);
 	spin_unlock_irq(&nlru->lock);
 	return ret;