From patchwork Tue Dec 24 07:53:25 2019
X-Patchwork-Submitter: Yafang Shao <laoar.shao@gmail.com>
X-Patchwork-Id: 11309169
From: Yafang Shao <laoar.shao@gmail.com>
To: hannes@cmpxchg.org, david@fromorbit.com, mhocko@kernel.org,
	vdavydov.dev@gmail.com, akpm@linux-foundation.org,
	viro@zeniv.linux.org.uk
Cc: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
	Yafang Shao <laoar.shao@gmail.com>, Dave Chinner <dchinner@redhat.com>
Subject: [PATCH v2 4/5] mm: make memcg visible to lru walker isolation function
Date: Tue, 24 Dec 2019 02:53:25 -0500
Message-Id: <1577174006-13025-5-git-send-email-laoar.shao@gmail.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1577174006-13025-1-git-send-email-laoar.shao@gmail.com>
References: <1577174006-13025-1-git-send-email-laoar.shao@gmail.com>

The lru walker isolation function may use the memcg to do something, e.g.
the inode isolation function will use the memcg to do inode protection in
a follow-up patch. So make the memcg visible to the lru walker isolation
function.

One thing that should be emphasized in this patch is that it replaces
for_each_memcg_cache_index() with for_each_mem_cgroup() in
list_lru_walk_node(). There is a gap between these two macros:
for_each_mem_cgroup() depends on CONFIG_MEMCG, while
for_each_memcg_cache_index() depends on CONFIG_MEMCG_KMEM. But as
list_lru_memcg_aware() returns false when CONFIG_MEMCG_KMEM is not
configured, this replacement is safe.

Cc: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
---
 include/linux/memcontrol.h | 21 +++++++++++++++++++++
 mm/list_lru.c              | 22 ++++++++++++----------
 mm/memcontrol.c            | 15 ---------------
 3 files changed, 33 insertions(+), 25 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 1a315c7..f36ada9 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -449,6 +449,21 @@ struct mem_cgroup *mem_cgroup_iter(struct mem_cgroup *,
 int mem_cgroup_scan_tasks(struct mem_cgroup *,
			  int (*)(struct task_struct *, void *), void *);
 
+/*
+ * Iteration constructs for visiting all cgroups (under a tree). If
+ * loops are exited prematurely (break), mem_cgroup_iter_break() must
+ * be used for reference counting.
+ */
+#define for_each_mem_cgroup_tree(iter, root)		\
+	for (iter = mem_cgroup_iter(root, NULL, NULL);	\
+	     iter != NULL;				\
+	     iter = mem_cgroup_iter(root, iter, NULL))
+
+#define for_each_mem_cgroup(iter)			\
+	for (iter = mem_cgroup_iter(NULL, NULL, NULL);	\
+	     iter != NULL;				\
+	     iter = mem_cgroup_iter(NULL, iter, NULL))
+
 static inline unsigned short mem_cgroup_id(struct mem_cgroup *memcg)
 {
	if (mem_cgroup_disabled())
@@ -949,6 +964,12 @@ static inline int mem_cgroup_scan_tasks(struct mem_cgroup *memcg,
	return 0;
 }
 
+#define for_each_mem_cgroup_tree(iter)		\
+	for (iter = NULL; iter; )
+
+#define for_each_mem_cgroup(iter)		\
+	for (iter = NULL; iter; )
+
 static inline unsigned short mem_cgroup_id(struct mem_cgroup *memcg)
 {
	return 0;
diff --git a/mm/list_lru.c b/mm/list_lru.c
index 0f1f6b0..536830d 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -207,11 +207,11 @@ unsigned long list_lru_count_node(struct list_lru *lru, int nid)
 EXPORT_SYMBOL_GPL(list_lru_count_node);
 
 static unsigned long
-__list_lru_walk_one(struct list_lru_node *nlru, int memcg_idx,
+__list_lru_walk_one(struct list_lru_node *nlru, struct mem_cgroup *memcg,
		    list_lru_walk_cb isolate, void *cb_arg,
		    unsigned long *nr_to_walk)
 {
-
+	int memcg_idx = memcg_cache_id(memcg);
	struct list_lru_one *l;
	struct list_head *item, *n;
	unsigned long isolated = 0;
@@ -273,7 +273,7 @@ unsigned long list_lru_count_node(struct list_lru *lru, int nid)
	unsigned long ret;
 
	spin_lock(&nlru->lock);
-	ret = __list_lru_walk_one(nlru, memcg_cache_id(memcg), isolate, cb_arg,
+	ret = __list_lru_walk_one(nlru, memcg, isolate, cb_arg,
				  nr_to_walk);
	spin_unlock(&nlru->lock);
	return ret;
@@ -289,7 +289,7 @@ unsigned long list_lru_count_node(struct list_lru *lru, int nid)
	unsigned long ret;
 
	spin_lock_irq(&nlru->lock);
-	ret = __list_lru_walk_one(nlru, memcg_cache_id(memcg), isolate, cb_arg,
+	ret = __list_lru_walk_one(nlru, memcg, isolate, cb_arg,
				  nr_to_walk);
	spin_unlock_irq(&nlru->lock);
	return ret;
@@ -299,17 +299,15 @@ unsigned long list_lru_walk_node(struct list_lru *lru, int nid,
				 list_lru_walk_cb isolate, void *cb_arg,
				 unsigned long *nr_to_walk)
 {
+	struct mem_cgroup *memcg;
	long isolated = 0;
-	int memcg_idx;
 
-	isolated += list_lru_walk_one(lru, nid, NULL, isolate, cb_arg,
-				      nr_to_walk);
-	if (*nr_to_walk > 0 && list_lru_memcg_aware(lru)) {
-		for_each_memcg_cache_index(memcg_idx) {
+	if (list_lru_memcg_aware(lru)) {
+		for_each_mem_cgroup(memcg) {
			struct list_lru_node *nlru = &lru->node[nid];
 
			spin_lock(&nlru->lock);
-			isolated += __list_lru_walk_one(nlru, memcg_idx,
+			isolated += __list_lru_walk_one(nlru, memcg,
							isolate, cb_arg,
							nr_to_walk);
			spin_unlock(&nlru->lock);
@@ -317,7 +315,11 @@ unsigned long list_lru_walk_node(struct list_lru *lru, int nid,
			if (*nr_to_walk <= 0)
				break;
		}
+	} else {
+		isolated += list_lru_walk_one(lru, nid, NULL, isolate, cb_arg,
+					      nr_to_walk);
	}
+
	return isolated;
 }
 EXPORT_SYMBOL_GPL(list_lru_walk_node);
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 2e78931..2fc2bf4 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -222,21 +222,6 @@ enum res_type {
 /* Used for OOM nofiier */
 #define OOM_CONTROL		(0)
 
-/*
- * Iteration constructs for visiting all cgroups (under a tree). If
- * loops are exited prematurely (break), mem_cgroup_iter_break() must
- * be used for reference counting.
- */
-#define for_each_mem_cgroup_tree(iter, root)		\
-	for (iter = mem_cgroup_iter(root, NULL, NULL);	\
-	     iter != NULL;				\
-	     iter = mem_cgroup_iter(root, iter, NULL))
-
-#define for_each_mem_cgroup(iter)			\
-	for (iter = mem_cgroup_iter(NULL, NULL, NULL);	\
-	     iter != NULL;				\
-	     iter = mem_cgroup_iter(NULL, iter, NULL))
-
 static inline bool should_force_charge(void)
 {
	return tsk_is_oom_victim(current) || fatal_signal_pending(current) ||
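
[Editorial note, not part of the patch] For readers new to these iterators, below is a minimal sketch of how a caller of the now-exported for_each_mem_cgroup() is expected to honor the reference-counting rule stated in the comment this patch moves into memcontrol.h: if the loop is left early with break, mem_cgroup_iter_break() has to drop the reference the iterator still holds. The function walk_memcgs_example() and its stop_here() predicate are placeholders invented for illustration, not existing kernel code.

/* Illustrative sketch only -- not part of this patch or of the kernel tree. */
#include <linux/memcontrol.h>

/* Placeholder predicate: decide whether to stop walking at this memcg. */
static bool stop_here(struct mem_cgroup *memcg)
{
	return false;
}

static void walk_memcgs_example(void)
{
	struct mem_cgroup *memcg;

	for_each_mem_cgroup(memcg) {
		if (stop_here(memcg)) {
			/*
			 * Leaving the loop with break skips the final
			 * mem_cgroup_iter() call, so the reference held on
			 * 'memcg' must be dropped explicitly.
			 */
			mem_cgroup_iter_break(NULL, memcg);
			break;
		}
	}
}

When the loop runs to completion, mem_cgroup_iter() itself drops the last reference and returns NULL, so no explicit cleanup is needed in that case.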