From patchwork Mon May 30 07:49:09 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12864351
From: Muchun Song To: hannes@cmpxchg.org, mhocko@kernel.org, roman.gushchin@linux.dev,
shakeelb@google.com, akpm@linux-foundation.org
Cc: cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, duanxiongchun@bytedance.com, longman@redhat.com, Muchun Song
Subject: [PATCH v5 01/11] mm: memcontrol: remove dead code and comments
Date: Mon, 30 May 2022 15:49:09 +0800
Message-Id: <20220530074919.46352-2-songmuchun@bytedance.com>
In-Reply-To: <20220530074919.46352-1-songmuchun@bytedance.com>
References: <20220530074919.46352-1-songmuchun@bytedance.com>

Since the no-hierarchy mode was deprecated in commit bef8620cd8e0 ("mm: memcg: deprecate the non-hierarchical mode"), parent_mem_cgroup() can return NULL only for the root memcg, and the root memcg can never be offlined, so it is safe to drop the checks on the return value of parent_mem_cgroup(). Remove this dead code.

The comments in memcg_offline_kmem() above memcg_reparent_list_lrus() have been out of date since commit 5abc1e37afa0 ("mm: list_lru: allocate list_lru_one only when needed"): there is no ordering requirement between memcg_reparent_list_lrus() and memcg_reparent_objcgs(), so remove those outdated comments as well.

Signed-off-by: Muchun Song Acked-by: Roman Gushchin --- include/linux/memcontrol.h | 3 +-- mm/memcontrol.c | 16 ---------------- mm/vmscan.c | 6 +----- 3 files changed, 2 insertions(+), 23 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 89b14729d59f..0833be256134 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -851,8 +851,7 @@ static inline struct mem_cgroup *lruvec_memcg(struct lruvec *lruvec) * parent_mem_cgroup - find the accounting parent of a memcg * @memcg: memcg whose parent to find * - * Returns the parent memcg, or NULL if this is the root or the memory - * controller is in legacy no-hierarchy mode. + * Returns the parent memcg, or NULL if this is the root. */ static inline struct mem_cgroup *parent_mem_cgroup(struct mem_cgroup *memcg) { diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 598fece89e2b..13da256ff2e4 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -3622,17 +3622,7 @@ static void memcg_offline_kmem(struct mem_cgroup *memcg) return; parent = parent_mem_cgroup(memcg); - if (!parent) - parent = root_mem_cgroup; - memcg_reparent_objcgs(memcg, parent); - - /* - * After we have finished memcg_reparent_objcgs(), all list_lrus - * corresponding to this cgroup are guaranteed to remain empty. - * The ordering is imposed by list_lru_node->lock taken by - * memcg_reparent_list_lrus().
- */ memcg_reparent_list_lrus(memcg, parent); } #else @@ -6593,10 +6583,6 @@ void mem_cgroup_calculate_protection(struct mem_cgroup *root, return; parent = parent_mem_cgroup(memcg); - /* No parent means a non-hierarchical mode on v1 memcg */ - if (!parent) - return; - if (parent == root) { memcg->memory.emin = READ_ONCE(memcg->memory.min); memcg->memory.elow = READ_ONCE(memcg->memory.low); @@ -7050,8 +7036,6 @@ static struct mem_cgroup *mem_cgroup_id_get_online(struct mem_cgroup *memcg) break; } memcg = parent_mem_cgroup(memcg); - if (!memcg) - memcg = root_mem_cgroup; } return memcg; } diff --git a/mm/vmscan.c b/mm/vmscan.c index 1678802e03e7..8c6054e06087 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -409,13 +409,9 @@ void reparent_shrinker_deferred(struct mem_cgroup *memcg) { int i, nid; long nr; - struct mem_cgroup *parent; + struct mem_cgroup *parent = parent_mem_cgroup(memcg); struct shrinker_info *child_info, *parent_info; - parent = parent_mem_cgroup(memcg); - if (!parent) - parent = root_mem_cgroup; - /* Prevent from concurrent shrinker_info expand */ down_read(&shrinker_rwsem); for_each_node(nid) { From patchwork Mon May 30 07:49:10 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Muchun Song X-Patchwork-Id: 12864352 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id DB55BC433EF for ; Mon, 30 May 2022 07:50:33 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 5BF718D0006; Mon, 30 May 2022 03:50:33 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 56B9B8D0001; Mon, 30 May 2022 03:50:33 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 4311E8D0006; Mon, 30 May 2022 03:50:33 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id 335548D0001 for ; Mon, 30 May 2022 03:50:33 -0400 (EDT) Received: from smtpin09.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay12.hostedemail.com (Postfix) with ESMTP id 1245E12115C for ; Mon, 30 May 2022 07:50:33 +0000 (UTC) X-FDA: 79521637146.09.2C12263 Received: from mail-pg1-f182.google.com (mail-pg1-f182.google.com [209.85.215.182]) by imf15.hostedemail.com (Postfix) with ESMTP id 70F63A0026 for ; Mon, 30 May 2022 07:50:10 +0000 (UTC) Received: by mail-pg1-f182.google.com with SMTP id s68so9438717pgs.10 for ; Mon, 30 May 2022 00:50:32 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=bytedance-com.20210112.gappssmtp.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=qP3NYqmy/kFDq8fy7T/74NXJ0W9tdNYCsTtwUre3MFo=; b=2lWVMQ4OH/ac/xcyLrM2cbUXGSinjLfD4xqq9FUvko6fE3Ndz9Fwe10KdO/ZMwW3Gk 7MlSYaSGBpbCqkL3KEJHSM1sVt4SWyXoo1O0/amdBG/7ZL3O0i3K1sD6L68KMKfFRLMA uXHJHC++Pv7FxsKTjboIcMFe5xETp4/Q26H3bQXUA9t9IzX1KeMFpjUDU/XeAyN6ZDsa iYr3PRXd+4StN2hEjFWb/eb+zQqvkhzzFZvg9sTQ/ttztnFqkbH/QGoL2nBQaBzi42cF hjMZvtcQadKm1UJTOGWbrWWCd8VwkzaH+Ghc+AqlzZxgGn/9JcFERz9GP4VCwzwjAXSh S5aQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; 
bh=qP3NYqmy/kFDq8fy7T/74NXJ0W9tdNYCsTtwUre3MFo=; b=ur7OHqInn7Scx+Oudm7WIHhILscSWFpr8e0nZq6HVr/C2beOiow4tKdS3d+jYtEGPl 6MNUF05FnEYsDwWj2ec7XH6uW8wyBmLyQ1/yL/6oDRLRhL5fHwOyCSaNtsM3jmoYuxxT aA2p/op1MHdzkG1jd0YTyf76bd7YyjJDqQfstL7zryvQ1+KRpfSghn3k0+0tu1b8uoZ/ iw455xsSRgZyeT2eqEDNRvzkqZVXwHEPyNRGeJwVDeBvO5IvhQiAWe8sivvvXVWf4drn j2InYysQNtqt37OvAGLRbIF44vGrEBCWzo5Bp4FbU/ZK6pI48DvTqZzpw5B0hjr/8SoO ZTlA== X-Gm-Message-State: AOAM533MYobfMWQvl4eAJBz9R2LCqTnK3dEaXb247Ivksm9cRTDLQ7v/ RDj4SmGn+ZoucqOuCgLEEs23/mEj+oVN3w== X-Google-Smtp-Source: ABdhPJz5vQlTwdV9raF7OHl26nMrY/j/IHnExQCWOCI9eOPyN02ThtPAZX7YHxztg+1OGcmlYSK0QA== X-Received: by 2002:a63:2360:0:b0:3fb:ee61:82cf with SMTP id u32-20020a632360000000b003fbee6182cfmr6489036pgm.574.1653897031530; Mon, 30 May 2022 00:50:31 -0700 (PDT) Received: from FVFYT0MHHV2J.bytedance.net ([2408:8207:18da:2310:2071:e13a:8aa:cacf]) by smtp.gmail.com with ESMTPSA id a23-20020a170902b59700b001616c3bd5c2sm8421381pls.162.2022.05.30.00.50.23 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 30 May 2022 00:50:31 -0700 (PDT) From: Muchun Song To: hannes@cmpxchg.org, mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com, akpm@linux-foundation.org Cc: cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, duanxiongchun@bytedance.com, longman@redhat.com, Muchun Song Subject: [PATCH v5 02/11] mm: rename unlock_page_lruvec{_irq, _irqrestore} to lruvec_unlock{_irq, _irqrestore} Date: Mon, 30 May 2022 15:49:10 +0800 Message-Id: <20220530074919.46352-3-songmuchun@bytedance.com> X-Mailer: git-send-email 2.32.1 (Apple Git-133) In-Reply-To: <20220530074919.46352-1-songmuchun@bytedance.com> References: <20220530074919.46352-1-songmuchun@bytedance.com> MIME-Version: 1.0 X-Rspamd-Server: rspam06 X-Rspamd-Queue-Id: 70F63A0026 X-Stat-Signature: xh5jwf1o7ekpttmeooguggfzt68n4795 X-Rspam-User: Authentication-Results: imf15.hostedemail.com; dkim=pass header.d=bytedance-com.20210112.gappssmtp.com header.s=20210112 header.b=2lWVMQ4O; spf=pass (imf15.hostedemail.com: domain of songmuchun@bytedance.com designates 209.85.215.182 as permitted sender) smtp.mailfrom=songmuchun@bytedance.com; dmarc=pass (policy=none) header.from=bytedance.com X-HE-Tag: 1653897010-350195 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: It is weird to use folio_lruvec_lock() variants and unlock_page_lruvec() variants together, e.g. locking folio and unlocking page. So rename unlock_page_lruvec{_irq, _irqrestore} to lruvec_unlock{_irq, _irqrestore}. 
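For illustration only (not part of this patch; the function name below is made up for the example): after the rename, the lock side and the unlock side of the API use matching lruvec-based names, e.g.

/*
 * Hedged sketch: folio_lruvec_lock_irqsave() returns the locked lruvec,
 * and the matching lruvec_unlock_irqrestore() releases it, replacing the
 * old unlock_page_lruvec_irqrestore() name.
 */
static void example_del_folio_from_lru(struct folio *folio)
{
	unsigned long flags;
	struct lruvec *lruvec;

	lruvec = folio_lruvec_lock_irqsave(folio, &flags);
	lruvec_del_folio(lruvec, folio);	/* any work done under lru_lock */
	lruvec_unlock_irqrestore(lruvec, flags);
}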
Signed-off-by: Muchun Song Acked-by: Roman Gushchin --- include/linux/memcontrol.h | 10 +++++----- mm/compaction.c | 12 ++++++------ mm/huge_memory.c | 2 +- mm/mlock.c | 2 +- mm/swap.c | 16 ++++++++-------- mm/vmscan.c | 4 ++-- 6 files changed, 23 insertions(+), 23 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 0833be256134..6d7f97cc3fd4 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -1538,17 +1538,17 @@ static inline struct lruvec *parent_lruvec(struct lruvec *lruvec) return mem_cgroup_lruvec(memcg, lruvec_pgdat(lruvec)); } -static inline void unlock_page_lruvec(struct lruvec *lruvec) +static inline void lruvec_unlock(struct lruvec *lruvec) { spin_unlock(&lruvec->lru_lock); } -static inline void unlock_page_lruvec_irq(struct lruvec *lruvec) +static inline void lruvec_unlock_irq(struct lruvec *lruvec) { spin_unlock_irq(&lruvec->lru_lock); } -static inline void unlock_page_lruvec_irqrestore(struct lruvec *lruvec, +static inline void lruvec_unlock_irqrestore(struct lruvec *lruvec, unsigned long flags) { spin_unlock_irqrestore(&lruvec->lru_lock, flags); @@ -1570,7 +1570,7 @@ static inline struct lruvec *folio_lruvec_relock_irq(struct folio *folio, if (folio_matches_lruvec(folio, locked_lruvec)) return locked_lruvec; - unlock_page_lruvec_irq(locked_lruvec); + lruvec_unlock_irq(locked_lruvec); } return folio_lruvec_lock_irq(folio); @@ -1584,7 +1584,7 @@ static inline struct lruvec *folio_lruvec_relock_irqsave(struct folio *folio, if (folio_matches_lruvec(folio, locked_lruvec)) return locked_lruvec; - unlock_page_lruvec_irqrestore(locked_lruvec, *flags); + lruvec_unlock_irqrestore(locked_lruvec, *flags); } return folio_lruvec_lock_irqsave(folio, flags); diff --git a/mm/compaction.c b/mm/compaction.c index fe915db6149b..4f155df6b39c 100644 --- a/mm/compaction.c +++ b/mm/compaction.c @@ -874,7 +874,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn, */ if (!(low_pfn % SWAP_CLUSTER_MAX)) { if (locked) { - unlock_page_lruvec_irqrestore(locked, flags); + lruvec_unlock_irqrestore(locked, flags); locked = NULL; } @@ -987,7 +987,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn, if (unlikely(__PageMovable(page)) && !PageIsolated(page)) { if (locked) { - unlock_page_lruvec_irqrestore(locked, flags); + lruvec_unlock_irqrestore(locked, flags); locked = NULL; } @@ -1070,7 +1070,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn, /* If we already hold the lock, we can skip some rechecking */ if (lruvec != locked) { if (locked) - unlock_page_lruvec_irqrestore(locked, flags); + lruvec_unlock_irqrestore(locked, flags); compact_lock_irqsave(&lruvec->lru_lock, &flags, cc); locked = lruvec; @@ -1129,7 +1129,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn, isolate_fail_put: /* Avoid potential deadlock in freeing page under lru_lock */ if (locked) { - unlock_page_lruvec_irqrestore(locked, flags); + lruvec_unlock_irqrestore(locked, flags); locked = NULL; } put_page(page); @@ -1145,7 +1145,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn, */ if (nr_isolated) { if (locked) { - unlock_page_lruvec_irqrestore(locked, flags); + lruvec_unlock_irqrestore(locked, flags); locked = NULL; } putback_movable_pages(&cc->migratepages); @@ -1177,7 +1177,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn, isolate_abort: if (locked) - unlock_page_lruvec_irqrestore(locked, 
flags); + lruvec_unlock_irqrestore(locked, flags); if (page) { SetPageLRU(page); put_page(page); diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 910a138e9859..b17b9d25d045 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -2404,7 +2404,7 @@ static void __split_huge_page(struct page *page, struct list_head *list, } ClearPageCompound(head); - unlock_page_lruvec(lruvec); + lruvec_unlock(lruvec); /* Caller disabled irqs, so they are still disabled here */ split_page_owner(head, nr); diff --git a/mm/mlock.c b/mm/mlock.c index 716caf851043..6649f3dda56e 100644 --- a/mm/mlock.c +++ b/mm/mlock.c @@ -205,7 +205,7 @@ static void mlock_pagevec(struct pagevec *pvec) } if (lruvec) - unlock_page_lruvec_irq(lruvec); + lruvec_unlock_irq(lruvec); release_pages(pvec->pages, pvec->nr); pagevec_reinit(pvec); } diff --git a/mm/swap.c b/mm/swap.c index 7e320ec08c6a..0a8ee33116c5 100644 --- a/mm/swap.c +++ b/mm/swap.c @@ -87,7 +87,7 @@ static void __page_cache_release(struct page *page) lruvec = folio_lruvec_lock_irqsave(folio, &flags); del_page_from_lru_list(page, lruvec); __clear_page_lru_flags(page); - unlock_page_lruvec_irqrestore(lruvec, flags); + lruvec_unlock_irqrestore(lruvec, flags); } /* See comment on PageMlocked in release_pages() */ if (unlikely(PageMlocked(page))) { @@ -209,7 +209,7 @@ static void pagevec_lru_move_fn(struct pagevec *pvec, SetPageLRU(page); } if (lruvec) - unlock_page_lruvec_irqrestore(lruvec, flags); + lruvec_unlock_irqrestore(lruvec, flags); release_pages(pvec->pages, pvec->nr); pagevec_reinit(pvec); } @@ -369,7 +369,7 @@ static void folio_activate(struct folio *folio) if (folio_test_clear_lru(folio)) { lruvec = folio_lruvec_lock_irq(folio); __folio_activate(folio, lruvec); - unlock_page_lruvec_irq(lruvec); + lruvec_unlock_irq(lruvec); folio_set_lru(folio); } } @@ -915,7 +915,7 @@ void release_pages(struct page **pages, int nr) * same lruvec. The lock is held only if lruvec != NULL. 
*/ if (lruvec && ++lock_batch == SWAP_CLUSTER_MAX) { - unlock_page_lruvec_irqrestore(lruvec, flags); + lruvec_unlock_irqrestore(lruvec, flags); lruvec = NULL; } @@ -925,7 +925,7 @@ void release_pages(struct page **pages, int nr) if (is_zone_device_page(page)) { if (lruvec) { - unlock_page_lruvec_irqrestore(lruvec, flags); + lruvec_unlock_irqrestore(lruvec, flags); lruvec = NULL; } if (put_devmap_managed_page(page)) @@ -940,7 +940,7 @@ void release_pages(struct page **pages, int nr) if (PageCompound(page)) { if (lruvec) { - unlock_page_lruvec_irqrestore(lruvec, flags); + lruvec_unlock_irqrestore(lruvec, flags); lruvec = NULL; } __put_compound_page(page); @@ -974,7 +974,7 @@ void release_pages(struct page **pages, int nr) list_add(&page->lru, &pages_to_free); } if (lruvec) - unlock_page_lruvec_irqrestore(lruvec, flags); + lruvec_unlock_irqrestore(lruvec, flags); mem_cgroup_uncharge_list(&pages_to_free); free_unref_page_list(&pages_to_free); @@ -1060,7 +1060,7 @@ void __pagevec_lru_add(struct pagevec *pvec) __pagevec_lru_add_fn(folio, lruvec); } if (lruvec) - unlock_page_lruvec_irqrestore(lruvec, flags); + lruvec_unlock_irqrestore(lruvec, flags); release_pages(pvec->pages, pvec->nr); pagevec_reinit(pvec); } diff --git a/mm/vmscan.c b/mm/vmscan.c index 8c6054e06087..a611ccf03c9b 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -2171,7 +2171,7 @@ int folio_isolate_lru(struct folio *folio) folio_get(folio); lruvec = folio_lruvec_lock_irq(folio); lruvec_del_folio(lruvec, folio); - unlock_page_lruvec_irq(lruvec); + lruvec_unlock_irq(lruvec); ret = 0; } @@ -4806,7 +4806,7 @@ void check_move_unevictable_pages(struct pagevec *pvec) if (lruvec) { __count_vm_events(UNEVICTABLE_PGRESCUED, pgrescued); __count_vm_events(UNEVICTABLE_PGSCANNED, pgscanned); - unlock_page_lruvec_irq(lruvec); + lruvec_unlock_irq(lruvec); } else if (pgscanned) { count_vm_events(UNEVICTABLE_PGSCANNED, pgscanned); } From patchwork Mon May 30 07:49:11 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Muchun Song X-Patchwork-Id: 12864353 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 65622C433F5 for ; Mon, 30 May 2022 07:50:42 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 07BB58D0007; Mon, 30 May 2022 03:50:42 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 02B1F8D0001; Mon, 30 May 2022 03:50:41 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id E34188D0007; Mon, 30 May 2022 03:50:41 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0013.hostedemail.com [216.40.44.13]) by kanga.kvack.org (Postfix) with ESMTP id D4B8D8D0001 for ; Mon, 30 May 2022 03:50:41 -0400 (EDT) Received: from smtpin01.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay02.hostedemail.com (Postfix) with ESMTP id A45E734575 for ; Mon, 30 May 2022 07:50:41 +0000 (UTC) X-FDA: 79521637482.01.DD17651 Received: from mail-pf1-f169.google.com (mail-pf1-f169.google.com [209.85.210.169]) by imf31.hostedemail.com (Postfix) with ESMTP id 0F80120063 for ; Mon, 30 May 2022 07:50:01 +0000 (UTC) Received: by mail-pf1-f169.google.com with SMTP id 202so9947998pfu.0 for ; Mon, 30 May 2022 00:50:40 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; 
d=bytedance-com.20210112.gappssmtp.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=+KuhHyD4PJmicRh1rvovvsjVwRIDUdFowoJNCrBTN7Y=; b=ZLgUMfaZ+tA2Ajunkij0CRf9vFf1S0Q7PwqPVEhETJxVNROl04VLPpk0byiKakWPCp zSXMSILlxQjjexSRsmvwcKH2AeyrBT+VOKLyeWRzGc3lf/bUJtHyuMX+A4A0WoWkP95V INrPgOVrmW7S//LKWv/KgnN/1UDC261OwdlqpdWnh6tntEcn7qWrjD2a56zpuDiErFiC 1bO96DreZfqvWZZpRT3L+u77shEQMswa3q4Q53U4IF/TqbaWnY7lFO3DgAIofyDerGod aou4OYxYjrcMyaBBnHgBRa65b7trXpppLtdzw7Hm5ZzswNvScITVQ/sLLu2vPq/Fmgp6 nzTQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=+KuhHyD4PJmicRh1rvovvsjVwRIDUdFowoJNCrBTN7Y=; b=afahjj4nwg4GQHHbwrdXxwCcHVb/yLeF9XfR35zDoph+Ra0D20+e8OG3IEyzBPl1Dl humKgrSFqGhXg7voxsegMPseLdhWI4UhXpW7ajf1mYYh1/Su1I7RzOKPjoJTAxJ6sv1G GeShM6yfX3VzlpfXTU+CnP+UfYuIHTjW3YRYpju9dENY0uAwOoNaRioasgen4k1846I7 Ad7J55y1DbVlNmD6GUjl/WYqa9ZMnR6SxbiDSyfmMM0r5ZqKhnaqLHPs1zweuaNdD1bh ki3Hm/LZEcxJaRMOwzvrX5uVjHmOpBS50pfo04xEFmpcsVb0qDCRCrfctf/B8Gan//r3 1XxA== X-Gm-Message-State: AOAM530WwS6UTkRM5gZHueai5f84EbvN1mjmnlqMZad2G8qpKIiUFc76 VOJSBsJ7JJtOIc7hxbLR9swX2g== X-Google-Smtp-Source: ABdhPJxIW2mIJMC71rXyAx3/OFd2GVPjad4pqowx3iRyXvsDwDpp1vJywSxLS2ZA2oGfnuxO8OXp3A== X-Received: by 2002:a62:8689:0:b0:51b:4143:d1ae with SMTP id x131-20020a628689000000b0051b4143d1aemr6716887pfd.22.1653897040304; Mon, 30 May 2022 00:50:40 -0700 (PDT) Received: from FVFYT0MHHV2J.bytedance.net ([2408:8207:18da:2310:2071:e13a:8aa:cacf]) by smtp.gmail.com with ESMTPSA id a23-20020a170902b59700b001616c3bd5c2sm8421381pls.162.2022.05.30.00.50.32 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 30 May 2022 00:50:40 -0700 (PDT) From: Muchun Song To: hannes@cmpxchg.org, mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com, akpm@linux-foundation.org Cc: cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, duanxiongchun@bytedance.com, longman@redhat.com, Muchun Song Subject: [PATCH v5 03/11] mm: memcontrol: prepare objcg API for non-kmem usage Date: Mon, 30 May 2022 15:49:11 +0800 Message-Id: <20220530074919.46352-4-songmuchun@bytedance.com> X-Mailer: git-send-email 2.32.1 (Apple Git-133) In-Reply-To: <20220530074919.46352-1-songmuchun@bytedance.com> References: <20220530074919.46352-1-songmuchun@bytedance.com> MIME-Version: 1.0 X-Rspamd-Server: rspam08 X-Rspamd-Queue-Id: 0F80120063 X-Rspam-User: X-Stat-Signature: aqk5gt1jrrg4fecf6t8z456fyqt97eku Authentication-Results: imf31.hostedemail.com; dkim=pass header.d=bytedance-com.20210112.gappssmtp.com header.s=20210112 header.b=ZLgUMfaZ; dmarc=pass (policy=none) header.from=bytedance.com; spf=pass (imf31.hostedemail.com: domain of songmuchun@bytedance.com designates 209.85.210.169 as permitted sender) smtp.mailfrom=songmuchun@bytedance.com X-HE-Tag: 1653897001-374001 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Pagecache pages are charged at the allocation time and holding a reference to the original memory cgroup until being reclaimed. Depending on the memory pressure, specific patterns of the page sharing between different cgroups and the cgroup creation and destruction rates, a large number of dying memory cgroups can be pinned by pagecache pages. It makes the page reclaim less efficient and wastes memory. 
We can convert LRU pages and most other raw memcg pins to the objcg direction to fix this problem, and then page->memcg will always point to an object cgroup pointer. The objcg infrastructure therefore no longer serves only CONFIG_MEMCG_KMEM. In this patch, we move the objcg infrastructure out of the scope of CONFIG_MEMCG_KMEM so that LRU pages can reuse it for charging.

LRU pages are not accounted at the root level, but their page->memcg_data still points to root_mem_cgroup, so page->memcg_data of an LRU page always points to a valid pointer. However, root_mem_cgroup does not have an object cgroup. If we use the obj_cgroup APIs to charge LRU pages, we have to set page->memcg_data to a root object cgroup, so we also allocate an object cgroup for root_mem_cgroup.

Signed-off-by: Muchun Song Acked-by: Johannes Weiner Reviewed-by: Michal Koutný Acked-by: Roman Gushchin --- include/linux/memcontrol.h | 2 +- mm/memcontrol.c | 56 +++++++++++++++++++++++++++------------------- 2 files changed, 34 insertions(+), 24 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 6d7f97cc3fd4..27f3171f42a1 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -315,10 +315,10 @@ struct mem_cgroup { #ifdef CONFIG_MEMCG_KMEM int kmemcg_id; +#endif struct obj_cgroup __rcu *objcg; /* list of inherited objcgs, protected by objcg_lock */ struct list_head objcg_list; -#endif MEMCG_PADDING(_pad2_); diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 13da256ff2e4..739a1d58ce97 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -254,9 +254,9 @@ struct mem_cgroup *vmpressure_to_memcg(struct vmpressure *vmpr) return container_of(vmpr, struct mem_cgroup, vmpressure); } -#ifdef CONFIG_MEMCG_KMEM static DEFINE_SPINLOCK(objcg_lock); +#ifdef CONFIG_MEMCG_KMEM bool mem_cgroup_kmem_disabled(void) { return cgroup_memory_nokmem; @@ -265,12 +265,10 @@ bool mem_cgroup_kmem_disabled(void) static void obj_cgroup_uncharge_pages(struct obj_cgroup *objcg, unsigned int nr_pages); -static void obj_cgroup_release(struct percpu_ref *ref) +static void obj_cgroup_release_bytes(struct obj_cgroup *objcg) { - struct obj_cgroup *objcg = container_of(ref, struct obj_cgroup, refcnt); unsigned int nr_bytes; unsigned int nr_pages; - unsigned long flags; /* * At this point all allocated objects are freed, and @@ -284,9 +282,9 @@ static void obj_cgroup_release(struct percpu_ref *ref) * 3) CPU1: a process from another memcg is allocating something, * the stock if flushed, * objcg->nr_charged_bytes = PAGE_SIZE - 92 - * 5) CPU0: we do release this object, + * 4) CPU0: we do release this object, * 92 bytes are added to stock->nr_bytes - * 6) CPU0: stock is flushed, + * 5) CPU0: stock is flushed, * 92 bytes are added to objcg->nr_charged_bytes * * In the result, nr_charged_bytes == PAGE_SIZE.
@@ -298,6 +296,19 @@ static void obj_cgroup_release(struct percpu_ref *ref) if (nr_pages) obj_cgroup_uncharge_pages(objcg, nr_pages); +} +#else +static inline void obj_cgroup_release_bytes(struct obj_cgroup *objcg) +{ +} +#endif + +static void obj_cgroup_release(struct percpu_ref *ref) +{ + struct obj_cgroup *objcg = container_of(ref, struct obj_cgroup, refcnt); + unsigned long flags; + + obj_cgroup_release_bytes(objcg); spin_lock_irqsave(&objcg_lock, flags); list_del(&objcg->list); @@ -326,10 +337,10 @@ static struct obj_cgroup *obj_cgroup_alloc(void) return objcg; } -static void memcg_reparent_objcgs(struct mem_cgroup *memcg, - struct mem_cgroup *parent) +static void memcg_reparent_objcgs(struct mem_cgroup *memcg) { struct obj_cgroup *objcg, *iter; + struct mem_cgroup *parent = parent_mem_cgroup(memcg); objcg = rcu_replace_pointer(memcg->objcg, NULL, true); @@ -348,6 +359,7 @@ static void memcg_reparent_objcgs(struct mem_cgroup *memcg, percpu_ref_kill(&objcg->refcnt); } +#ifdef CONFIG_MEMCG_KMEM /* * A lot of the calls to the cache allocation functions are expected to be * inlined by the compiler. Since the calls to memcg_slab_pre_alloc_hook() are @@ -3589,21 +3601,12 @@ static u64 mem_cgroup_read_u64(struct cgroup_subsys_state *css, #ifdef CONFIG_MEMCG_KMEM static int memcg_online_kmem(struct mem_cgroup *memcg) { - struct obj_cgroup *objcg; - if (cgroup_memory_nokmem) return 0; if (unlikely(mem_cgroup_is_root(memcg))) return 0; - objcg = obj_cgroup_alloc(); - if (!objcg) - return -ENOMEM; - - objcg->memcg = memcg; - rcu_assign_pointer(memcg->objcg, objcg); - static_branch_enable(&memcg_kmem_enabled_key); memcg->kmemcg_id = memcg->id.id; @@ -3613,17 +3616,13 @@ static int memcg_online_kmem(struct mem_cgroup *memcg) static void memcg_offline_kmem(struct mem_cgroup *memcg) { - struct mem_cgroup *parent; - if (cgroup_memory_nokmem) return; if (unlikely(mem_cgroup_is_root(memcg))) return; - parent = parent_mem_cgroup(memcg); - memcg_reparent_objcgs(memcg, parent); - memcg_reparent_list_lrus(memcg, parent); + memcg_reparent_list_lrus(memcg, parent_mem_cgroup(memcg)); } #else static int memcg_online_kmem(struct mem_cgroup *memcg) @@ -5106,8 +5105,8 @@ static struct mem_cgroup *mem_cgroup_alloc(void) memcg->socket_pressure = jiffies; #ifdef CONFIG_MEMCG_KMEM memcg->kmemcg_id = -1; - INIT_LIST_HEAD(&memcg->objcg_list); #endif + INIT_LIST_HEAD(&memcg->objcg_list); #ifdef CONFIG_CGROUP_WRITEBACK INIT_LIST_HEAD(&memcg->cgwb_list); for (i = 0; i < MEMCG_CGWB_FRN_CNT; i++) @@ -5169,6 +5168,7 @@ mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css) static int mem_cgroup_css_online(struct cgroup_subsys_state *css) { struct mem_cgroup *memcg = mem_cgroup_from_css(css); + struct obj_cgroup *objcg; if (memcg_online_kmem(memcg)) goto remove_id; @@ -5181,6 +5181,13 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css) if (alloc_shrinker_info(memcg)) goto offline_kmem; + objcg = obj_cgroup_alloc(); + if (!objcg) + goto free_shrinker; + + objcg->memcg = memcg; + rcu_assign_pointer(memcg->objcg, objcg); + /* Online state pins memcg ID, memcg ID pins CSS */ refcount_set(&memcg->id.ref, 1); css_get(css); @@ -5189,6 +5196,8 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css) queue_delayed_work(system_unbound_wq, &stats_flush_dwork, 2UL*HZ); return 0; +free_shrinker: + free_shrinker_info(memcg); offline_kmem: memcg_offline_kmem(memcg); remove_id: @@ -5216,6 +5225,7 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css) page_counter_set_min(&memcg->memory, 0); 
page_counter_set_low(&memcg->memory, 0); + memcg_reparent_objcgs(memcg); memcg_offline_kmem(memcg); reparent_shrinker_deferred(memcg); wb_memcg_offline(memcg); From patchwork Mon May 30 07:49:12 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Muchun Song X-Patchwork-Id: 12864354 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 95A6EC433F5 for ; Mon, 30 May 2022 07:50:51 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 335038D0008; Mon, 30 May 2022 03:50:51 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 2E4EA8D0001; Mon, 30 May 2022 03:50:51 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 1843F8D0008; Mon, 30 May 2022 03:50:51 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0013.hostedemail.com [216.40.44.13]) by kanga.kvack.org (Postfix) with ESMTP id F241C8D0001 for ; Mon, 30 May 2022 03:50:50 -0400 (EDT) Received: from smtpin18.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay11.hostedemail.com (Postfix) with ESMTP id C5ECF810D4 for ; Mon, 30 May 2022 07:50:50 +0000 (UTC) X-FDA: 79521637860.18.D4D3958 Received: from mail-pj1-f51.google.com (mail-pj1-f51.google.com [209.85.216.51]) by imf08.hostedemail.com (Postfix) with ESMTP id EA1CB16001D for ; Mon, 30 May 2022 07:50:25 +0000 (UTC) Received: by mail-pj1-f51.google.com with SMTP id q12-20020a17090a304c00b001e2d4fb0eb4so3171208pjl.4 for ; Mon, 30 May 2022 00:50:50 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=bytedance-com.20210112.gappssmtp.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=DFDrT6L7Ic0OCHDZSnFDn3vZ9wX5eeepTxbi1OgbPW8=; b=zQJmLbL4/+D7h7zErVxvNlAnICGCMYdOu/gHKJqzPdOGx+66n44qTTTvxszVFnR9b6 CZQZJqJqMWTpk22ClCuRMI2ine4bHBhgprMNn46ytJkPebSctl8WSg2tf4knJOFFLdLl ppUxrioy6c8/Oj8jir8vGEa46BW4ZY5om1hT2z5AGMoNumGVtcnHztCm3VRB/b3CS45x 1K2u+8BV2trALYE6HwT9XvemXagGeYIrKW/+h3l2f5qg8d6ZIcYcHfgN8oO4Qx84aEZJ n9GMHxnXIPDQFYNrHVzQRuM8aJKAXzWYbFzRe1J2iJTAhiYVmEilT/p1JQplQZ7T8fAd 1f6w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=DFDrT6L7Ic0OCHDZSnFDn3vZ9wX5eeepTxbi1OgbPW8=; b=283UTUfjjRTZVDS4OGD5+KTi1Ba+9WS7z3y7Oi9utbqAA2oOPi0Yjb02LmeuxSiOee HCIwSMnFUNF2pKqBCgoHSZD+Uc+HgnAKgOqJjmT7+GBZCDRLe5WmdZmqRqOBg0Vzt7QS /S7o+4rtqKK3TdNGMBiGJvJ2NEiMlpGRpQsxOgRUlhklwUwSp145yXOPUvTMM67DvXd7 q67GMu8mdDw4EFeGiO1T8sNKdMB4AoAWGrg5nl2o3tpu3KJihn+W8lpEEJwhJZ6BYj8E 8+lGau/xCQpHkUi6DR9GwIJg/dYw5bFO/j36PmFYNVRsXONKxlt/mowl4Kbzjn/rkb/j nrNw== X-Gm-Message-State: AOAM531hTiRtJcvJ9C+ve0/oL1S5EoLgRCya8Nas5UXryVPjj2fmjjoi OMeM+r217sXl1j7UILz3OBUQpw== X-Google-Smtp-Source: ABdhPJzLXK9P++G3+iXY1INLBCQB7H/yWUAQ17Fk4gv9kP+usUlghFA/XdQQI3DfZHC36wb06zZj/g== X-Received: by 2002:a17:90a:4413:b0:1cd:2d00:9d0b with SMTP id s19-20020a17090a441300b001cd2d009d0bmr21912947pjg.81.1653897049337; Mon, 30 May 2022 00:50:49 -0700 (PDT) Received: from FVFYT0MHHV2J.bytedance.net ([2408:8207:18da:2310:2071:e13a:8aa:cacf]) by smtp.gmail.com with ESMTPSA id a23-20020a170902b59700b001616c3bd5c2sm8421381pls.162.2022.05.30.00.50.41 
(version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 30 May 2022 00:50:48 -0700 (PDT) From: Muchun Song To: hannes@cmpxchg.org, mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com, akpm@linux-foundation.org Cc: cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, duanxiongchun@bytedance.com, longman@redhat.com, Muchun Song Subject: [PATCH v5 04/11] mm: memcontrol: make lruvec lock safe when LRU pages are reparented Date: Mon, 30 May 2022 15:49:12 +0800 Message-Id: <20220530074919.46352-5-songmuchun@bytedance.com> X-Mailer: git-send-email 2.32.1 (Apple Git-133) In-Reply-To: <20220530074919.46352-1-songmuchun@bytedance.com> References: <20220530074919.46352-1-songmuchun@bytedance.com> MIME-Version: 1.0 X-Rspamd-Queue-Id: EA1CB16001D X-Stat-Signature: bejnobsy83j3yrfou4ucf1ixy5158ytw Authentication-Results: imf08.hostedemail.com; dkim=pass header.d=bytedance-com.20210112.gappssmtp.com header.s=20210112 header.b=zQJmLbL4; dmarc=pass (policy=none) header.from=bytedance.com; spf=pass (imf08.hostedemail.com: domain of songmuchun@bytedance.com designates 209.85.216.51 as permitted sender) smtp.mailfrom=songmuchun@bytedance.com X-Rspam-User: X-Rspamd-Server: rspam11 X-HE-Tag: 1653897025-711202 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: The diagram below shows how to make the folio lruvec lock safe when LRU pages are reparented. folio_lruvec_lock(folio) rcu_read_lock(); retry: lruvec = folio_lruvec(folio); // The folio is reparented at this time. spin_lock(&lruvec->lru_lock); if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) // Acquired the wrong lruvec lock and need to retry. // Because this folio is on the parent memcg lruvec list. spin_unlock(&lruvec->lru_lock); goto retry; // If we reach here, it means that folio_memcg(folio) is stable. memcg_reparent_objcgs(memcg) // lruvec belongs to memcg and lruvec_parent belongs to parent memcg. spin_lock(&lruvec->lru_lock); spin_lock(&lruvec_parent->lru_lock); // Move all the pages from the lruvec list to the parent lruvec list. spin_unlock(&lruvec_parent->lru_lock); spin_unlock(&lruvec->lru_lock); After we acquire the lruvec lock, we need to check whether the folio is reparented. If so, we need to reacquire the new lruvec lock. On the routine of the LRU pages reparenting, we will also acquire the lruvec lock (will be implemented in the later patch). So folio_memcg() cannot be changed when we hold the lruvec lock. Since lruvec_memcg(lruvec) is always equal to folio_memcg(folio) after we hold the lruvec lock, lruvec_memcg_debug() check is pointless. So remove it. This is a preparation for reparenting the LRU pages. Signed-off-by: Muchun Song --- include/linux/memcontrol.h | 18 +++------------- mm/compaction.c | 27 +++++++++++++++++++---- mm/memcontrol.c | 53 ++++++++++++++++++++++++++-------------------- mm/swap.c | 5 +++++ 4 files changed, 61 insertions(+), 42 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 27f3171f42a1..e390aaa46776 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -752,7 +752,9 @@ static inline struct lruvec *mem_cgroup_lruvec(struct mem_cgroup *memcg, * folio_lruvec - return lruvec for isolating/putting an LRU folio * @folio: Pointer to the folio. * - * This function relies on folio->mem_cgroup being stable. + * The lruvec can be changed to its parent lruvec when the page reparented. 
+ * The caller need to recheck if it cares about this changes (just like + * folio_lruvec_lock() does). */ static inline struct lruvec *folio_lruvec(struct folio *folio) { @@ -771,15 +773,6 @@ struct lruvec *folio_lruvec_lock_irq(struct folio *folio); struct lruvec *folio_lruvec_lock_irqsave(struct folio *folio, unsigned long *flags); -#ifdef CONFIG_DEBUG_VM -void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio); -#else -static inline -void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio) -{ -} -#endif - static inline struct mem_cgroup *mem_cgroup_from_css(struct cgroup_subsys_state *css){ return css ? container_of(css, struct mem_cgroup, css) : NULL; @@ -1240,11 +1233,6 @@ static inline struct lruvec *folio_lruvec(struct folio *folio) return &pgdat->__lruvec; } -static inline -void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio) -{ -} - static inline struct mem_cgroup *parent_mem_cgroup(struct mem_cgroup *memcg) { return NULL; diff --git a/mm/compaction.c b/mm/compaction.c index 4f155df6b39c..29ff111e5711 100644 --- a/mm/compaction.c +++ b/mm/compaction.c @@ -509,6 +509,25 @@ static bool compact_lock_irqsave(spinlock_t *lock, unsigned long *flags, return true; } +static struct lruvec * +compact_folio_lruvec_lock_irqsave(struct folio *folio, unsigned long *flags, + struct compact_control *cc) +{ + struct lruvec *lruvec; + + rcu_read_lock(); +retry: + lruvec = folio_lruvec(folio); + compact_lock_irqsave(&lruvec->lru_lock, flags, cc); + if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) { + spin_unlock_irqrestore(&lruvec->lru_lock, *flags); + goto retry; + } + rcu_read_unlock(); + + return lruvec; +} + /* * Compaction requires the taking of some coarse locks that are potentially * very heavily contended. The lock should be periodically unlocked to avoid @@ -844,6 +863,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn, /* Time to isolate some pages for migration */ for (; low_pfn < end_pfn; low_pfn++) { + struct folio *folio; if (skip_on_failure && low_pfn >= next_skip_pfn) { /* @@ -1065,18 +1085,17 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn, if (!TestClearPageLRU(page)) goto isolate_fail_put; - lruvec = folio_lruvec(page_folio(page)); + folio = page_folio(page); + lruvec = folio_lruvec(folio); /* If we already hold the lock, we can skip some rechecking */ if (lruvec != locked) { if (locked) lruvec_unlock_irqrestore(locked, flags); - compact_lock_irqsave(&lruvec->lru_lock, &flags, cc); + lruvec = compact_folio_lruvec_lock_irqsave(folio, &flags, cc); locked = lruvec; - lruvec_memcg_debug(lruvec, page_folio(page)); - /* Try get exclusive access under lock */ if (!skip_updated) { skip_updated = true; diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 739a1d58ce97..9d98a791353c 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -1199,23 +1199,6 @@ int mem_cgroup_scan_tasks(struct mem_cgroup *memcg, return ret; } -#ifdef CONFIG_DEBUG_VM -void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio) -{ - struct mem_cgroup *memcg; - - if (mem_cgroup_disabled()) - return; - - memcg = folio_memcg(folio); - - if (!memcg) - VM_BUG_ON_FOLIO(lruvec_memcg(lruvec) != root_mem_cgroup, folio); - else - VM_BUG_ON_FOLIO(lruvec_memcg(lruvec) != memcg, folio); -} -#endif - /** * folio_lruvec_lock - Lock the lruvec for a folio. * @folio: Pointer to the folio. 
@@ -1230,10 +1213,18 @@ void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio) */ struct lruvec *folio_lruvec_lock(struct folio *folio) { - struct lruvec *lruvec = folio_lruvec(folio); + struct lruvec *lruvec; + rcu_read_lock(); +retry: + lruvec = folio_lruvec(folio); spin_lock(&lruvec->lru_lock); - lruvec_memcg_debug(lruvec, folio); + + if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) { + spin_unlock(&lruvec->lru_lock); + goto retry; + } + rcu_read_unlock(); return lruvec; } @@ -1253,10 +1244,18 @@ struct lruvec *folio_lruvec_lock(struct folio *folio) */ struct lruvec *folio_lruvec_lock_irq(struct folio *folio) { - struct lruvec *lruvec = folio_lruvec(folio); + struct lruvec *lruvec; + rcu_read_lock(); +retry: + lruvec = folio_lruvec(folio); spin_lock_irq(&lruvec->lru_lock); - lruvec_memcg_debug(lruvec, folio); + + if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) { + spin_unlock_irq(&lruvec->lru_lock); + goto retry; + } + rcu_read_unlock(); return lruvec; } @@ -1278,10 +1277,18 @@ struct lruvec *folio_lruvec_lock_irq(struct folio *folio) struct lruvec *folio_lruvec_lock_irqsave(struct folio *folio, unsigned long *flags) { - struct lruvec *lruvec = folio_lruvec(folio); + struct lruvec *lruvec; + rcu_read_lock(); +retry: + lruvec = folio_lruvec(folio); spin_lock_irqsave(&lruvec->lru_lock, *flags); - lruvec_memcg_debug(lruvec, folio); + + if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) { + spin_unlock_irqrestore(&lruvec->lru_lock, *flags); + goto retry; + } + rcu_read_unlock(); return lruvec; } diff --git a/mm/swap.c b/mm/swap.c index 0a8ee33116c5..6cea469b6ff2 100644 --- a/mm/swap.c +++ b/mm/swap.c @@ -303,6 +303,11 @@ void lru_note_cost(struct lruvec *lruvec, bool file, unsigned int nr_pages) void lru_note_cost_folio(struct folio *folio) { + WARN_ON_ONCE(!rcu_read_lock_held()); + /* + * The rcu read lock is held by the caller, so we do not need to + * care about the lruvec returned by folio_lruvec() being released. 
+ */ lru_note_cost(folio_lruvec(folio), folio_is_file_lru(folio), folio_nr_pages(folio)); } From patchwork Mon May 30 07:49:13 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Muchun Song X-Patchwork-Id: 12864355 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 80417C433FE for ; Mon, 30 May 2022 07:51:00 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 03B9C8D0009; Mon, 30 May 2022 03:51:00 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id F308B8D0001; Mon, 30 May 2022 03:50:59 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id DF6B68D0009; Mon, 30 May 2022 03:50:59 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0017.hostedemail.com [216.40.44.17]) by kanga.kvack.org (Postfix) with ESMTP id D11F48D0001 for ; Mon, 30 May 2022 03:50:59 -0400 (EDT) Received: from smtpin22.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay09.hostedemail.com (Postfix) with ESMTP id A10EB355B2 for ; Mon, 30 May 2022 07:50:59 +0000 (UTC) X-FDA: 79521638238.22.17047C4 Received: from mail-pg1-f175.google.com (mail-pg1-f175.google.com [209.85.215.175]) by imf28.hostedemail.com (Postfix) with ESMTP id 5155BC0044 for ; Mon, 30 May 2022 07:50:22 +0000 (UTC) Received: by mail-pg1-f175.google.com with SMTP id j191so2228467pgd.3 for ; Mon, 30 May 2022 00:50:58 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=bytedance-com.20210112.gappssmtp.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=ZLeoBTZOxUhrkpOpKK9VuHqYh0E8gkNOLgtoFpvje4s=; b=0UWfJ32A7C5WI27zeLqZ/c6KOFraVfKR3bXGTtaDgPDytYhXuCTM3qvoO9ceqdaRDY fr39wYyg7FNqYEXu7E3U+8G7ozPy19QPvOK9hI2Ny0vQUkeBoDLjPgS8aTFBVAodAxGx 8OSmvcX8j0gKWNAh4tE37+enbVmb1X1UZryzOlOYEQPIZj2IK7nSX8iAkvvVdAYwLWys OiEsEOxJuyBDueV6i0OF5TBGIiBI1tXdFsSoSvKcjLXQ7FdjiAImGCcBfQu2YTrRG93y VIe9OpG86IVOQ4eKuTUJmNMvndwiXUer8p45NAyNURmUmuS8zbfIud/W0jzIjXAK7IPG 5Y0w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=ZLeoBTZOxUhrkpOpKK9VuHqYh0E8gkNOLgtoFpvje4s=; b=tBsP7OeY7Yd3oNk8UYkKloxjpS63dJpNgEsC00j67DXZylOTDDmxiCWmhsooF/EZq7 HlYwOx35ibcE65R5bv5vQK8nCbFIrukg1t7GfdVCV4FsMiCPOLtGE9Wbyq0T3I56JY0i 0Mmz99tSwjSJM8iHvSIxhdYB4aL86uL7HzZrNqhgFN6lHBjU0FWbnir85cjs4als66J7 XZg3TFvc05wdfyWoeENZIBDqhEZDv/HiBpGDskN97smnd0OhoD4NawDGvpT0fqFGH2Ry kkcG3op/YHgs23DX94PRkMaKVijQruhXupO3zDwBYOsU4n+EMVXRJQ5eUGw3TOPu1hFV 2D8Q== X-Gm-Message-State: AOAM531+/GX95bbnsHTwJW9xnNLKCY+vtgv7dbrHNyyl2YK0gUQjBnpE JIJ6v9PJ8ouPtefYzrQuT3ePfw== X-Google-Smtp-Source: ABdhPJyX2juWPk7K2/Yr/XqfWc11dY430FPctTcojFzIUFCbhzADFmo0FxVQDT1ZOU8NMg5XfBdegA== X-Received: by 2002:a63:693:0:b0:3f5:ef4e:d359 with SMTP id 141-20020a630693000000b003f5ef4ed359mr48004817pgg.540.1653897058034; Mon, 30 May 2022 00:50:58 -0700 (PDT) Received: from FVFYT0MHHV2J.bytedance.net ([2408:8207:18da:2310:2071:e13a:8aa:cacf]) by smtp.gmail.com with ESMTPSA id a23-20020a170902b59700b001616c3bd5c2sm8421381pls.162.2022.05.30.00.50.50 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 30 May 2022 00:50:57 -0700 (PDT) From: 
Muchun Song To: hannes@cmpxchg.org, mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com, akpm@linux-foundation.org Cc: cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, duanxiongchun@bytedance.com, longman@redhat.com, Muchun Song Subject: [PATCH v5 05/11] mm: vmscan: rework move_pages_to_lru() Date: Mon, 30 May 2022 15:49:13 +0800 Message-Id: <20220530074919.46352-6-songmuchun@bytedance.com> X-Mailer: git-send-email 2.32.1 (Apple Git-133) In-Reply-To: <20220530074919.46352-1-songmuchun@bytedance.com> References: <20220530074919.46352-1-songmuchun@bytedance.com> MIME-Version: 1.0 Authentication-Results: imf28.hostedemail.com; dkim=pass header.d=bytedance-com.20210112.gappssmtp.com header.s=20210112 header.b=0UWfJ32A; dmarc=pass (policy=none) header.from=bytedance.com; spf=pass (imf28.hostedemail.com: domain of songmuchun@bytedance.com designates 209.85.215.175 as permitted sender) smtp.mailfrom=songmuchun@bytedance.com X-Rspam-User: X-Rspamd-Server: rspam01 X-Rspamd-Queue-Id: 5155BC0044 X-Stat-Signature: h3cjojdepyfrfpkgqsub95m81miemche X-HE-Tag: 1653897022-274663 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: In the later patch, we will reparent the LRU pages. The pages moved to appropriate LRU list can be reparented during the process of the move_pages_to_lru(). So holding a lruvec lock by the caller is wrong, we should use the more general interface of folio_lruvec_relock_irq() to acquire the correct lruvec lock. Signed-off-by: Muchun Song Acked-by: Johannes Weiner Acked-by: Roman Gushchin --- mm/vmscan.c | 49 +++++++++++++++++++++++++------------------------ 1 file changed, 25 insertions(+), 24 deletions(-) diff --git a/mm/vmscan.c b/mm/vmscan.c index a611ccf03c9b..67f1462b150d 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -2226,23 +2226,28 @@ static int too_many_isolated(struct pglist_data *pgdat, int file, * move_pages_to_lru() moves pages from private @list to appropriate LRU list. * On return, @list is reused as a list of pages to be freed by the caller. * - * Returns the number of pages moved to the given lruvec. + * Returns the number of pages moved to the appropriate LRU list. + * + * Note: The caller must not hold any lruvec lock. */ -static unsigned int move_pages_to_lru(struct lruvec *lruvec, - struct list_head *list) +static unsigned int move_pages_to_lru(struct list_head *list) { - int nr_pages, nr_moved = 0; + int nr_moved = 0; + struct lruvec *lruvec = NULL; LIST_HEAD(pages_to_free); - struct page *page; while (!list_empty(list)) { - page = lru_to_page(list); + int nr_pages; + struct folio *folio = lru_to_folio(list); + struct page *page = &folio->page; + + lruvec = folio_lruvec_relock_irq(folio, lruvec); VM_BUG_ON_PAGE(PageLRU(page), page); list_del(&page->lru); if (unlikely(!page_evictable(page))) { - spin_unlock_irq(&lruvec->lru_lock); + lruvec_unlock_irq(lruvec); putback_lru_page(page); - spin_lock_irq(&lruvec->lru_lock); + lruvec = NULL; continue; } @@ -2263,20 +2268,16 @@ static unsigned int move_pages_to_lru(struct lruvec *lruvec, __clear_page_lru_flags(page); if (unlikely(PageCompound(page))) { - spin_unlock_irq(&lruvec->lru_lock); + lruvec_unlock_irq(lruvec); destroy_compound_page(page); - spin_lock_irq(&lruvec->lru_lock); + lruvec = NULL; } else list_add(&page->lru, &pages_to_free); continue; } - /* - * All pages were isolated from the same lruvec (and isolation - * inhibits memcg migration). 
- */ - VM_BUG_ON_PAGE(!folio_matches_lruvec(page_folio(page), lruvec), page); + VM_BUG_ON_PAGE(!folio_matches_lruvec(folio, lruvec), page); add_page_to_lru_list(page, lruvec); nr_pages = thp_nr_pages(page); nr_moved += nr_pages; @@ -2284,6 +2285,8 @@ static unsigned int move_pages_to_lru(struct lruvec *lruvec, workingset_age_nonresident(lruvec, nr_pages); } + if (lruvec) + lruvec_unlock_irq(lruvec); /* * To save our caller's stack, now use input list for pages to free. */ @@ -2355,16 +2358,16 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec, nr_reclaimed = shrink_page_list(&page_list, pgdat, sc, &stat, false); - spin_lock_irq(&lruvec->lru_lock); - move_pages_to_lru(lruvec, &page_list); + move_pages_to_lru(&page_list); + local_irq_disable(); __mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken); item = current_is_kswapd() ? PGSTEAL_KSWAPD : PGSTEAL_DIRECT; if (!cgroup_reclaim(sc)) __count_vm_events(item, nr_reclaimed); __count_memcg_events(lruvec_memcg(lruvec), item, nr_reclaimed); __count_vm_events(PGSTEAL_ANON + file, nr_reclaimed); - spin_unlock_irq(&lruvec->lru_lock); + local_irq_enable(); lru_note_cost(lruvec, file, stat.nr_pageout); mem_cgroup_uncharge_list(&page_list); @@ -2494,18 +2497,16 @@ static void shrink_active_list(unsigned long nr_to_scan, /* * Move pages back to the lru list. */ - spin_lock_irq(&lruvec->lru_lock); - - nr_activate = move_pages_to_lru(lruvec, &l_active); - nr_deactivate = move_pages_to_lru(lruvec, &l_inactive); + nr_activate = move_pages_to_lru(&l_active); + nr_deactivate = move_pages_to_lru(&l_inactive); /* Keep all free pages in l_active list */ list_splice(&l_inactive, &l_active); + local_irq_disable(); __count_vm_events(PGDEACTIVATE, nr_deactivate); __count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, nr_deactivate); - __mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken); - spin_unlock_irq(&lruvec->lru_lock); + local_irq_enable(); mem_cgroup_uncharge_list(&l_active); free_unref_page_list(&l_active); From patchwork Mon May 30 07:49:14 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Muchun Song X-Patchwork-Id: 12864356 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 45D65C433F5 for ; Mon, 30 May 2022 07:51:10 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 9B6C38D000A; Mon, 30 May 2022 03:51:09 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 969268D0001; Mon, 30 May 2022 03:51:09 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 82BC28D000A; Mon, 30 May 2022 03:51:09 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0015.hostedemail.com [216.40.44.15]) by kanga.kvack.org (Postfix) with ESMTP id 746188D0001 for ; Mon, 30 May 2022 03:51:09 -0400 (EDT) Received: from smtpin24.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay11.hostedemail.com (Postfix) with ESMTP id 494B5810D7 for ; Mon, 30 May 2022 07:51:09 +0000 (UTC) X-FDA: 79521638658.24.87F28D7 Received: from mail-pg1-f178.google.com (mail-pg1-f178.google.com [209.85.215.178]) by imf29.hostedemail.com (Postfix) with ESMTP id 6ADB1120044 for ; Mon, 30 May 2022 07:50:56 +0000 (UTC) Received: by mail-pg1-f178.google.com with SMTP id q123so4430968pgq.6 for ; 
Mon, 30 May 2022 00:51:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=bytedance-com.20210112.gappssmtp.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=eS8SvDpHTohQF4EqjCM+H0xOhguiTqKM+7kzLSkVHtI=; b=oNhG52LEbWnwUNpy9jfaU81Vmv8ckOaaRIy4hKuoyyvIBdiOMJxTEWcGRybCQOMK5r 1nbQzFTPEPVCro2sxvSjWsUavxLKR22/Vo74emZiaBVIF/iJD8jAEiNC81BATOLEP2wz uJwI0vDomMYwIeB/05ckx17wN19YDVO478ZSuECDlAjahoZU3sRO3V323bucTt2LLBi+ ewE7brA8zxbX4o8fE2qB5WNe7IHqWMNZCQb2LoiWm73zsJHIu3/3xrWRhdsRekwZg4cn pfuCP172Gkj6OLI5/cntWLEfw/+yK55lf6NthD9yyA02aiW2I/FMkWTYgQRZgiqE7qAE Kglw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=eS8SvDpHTohQF4EqjCM+H0xOhguiTqKM+7kzLSkVHtI=; b=DFymK6tIudiOas+/E4qNnhxCfh1TtW5GCO9plH9/TxlgIgMZNJRKJK+EwJ48oaDuGg CjDR/9vJIfjJqzZlstDZkx3pf05IG8iT9sFb/uzVlxetGWE8dsWitYhv6dUKpq4+DYJb 120+auU0vLMfuA44noyJp/gGQsrcn61X/THGL0cnz5TpTc3sTpmiIUcAqyn7yf3R5m7s /DVmS2YQVfQruzXRKM2lAma37fdpzXZpji9PEqyHaLPJ9KAhxAuSDbQqPygOKvFR28vv wZJsqwrOnUWQ1KxdAi+PVTnCltQuyTRa1laWoi37XKVwBEvfZuRqCe5yUjxOBhGClqpz /QOQ== X-Gm-Message-State: AOAM531vLii2rJTLzD+BAt2E+UhyitvRbRnotozHWBvtkg4Ic3EE+UwH YMv9Q9WP+T7DnuJFAZIL8KlhNA== X-Google-Smtp-Source: ABdhPJxDcBQuPQuEg/cf8yN6kKV4P5TywkiUYecbsMiAztAXNXJlgHQcbNGGtDRXdsuez+VXYEOCWA== X-Received: by 2002:a63:2c16:0:b0:3fb:1b5f:4441 with SMTP id s22-20020a632c16000000b003fb1b5f4441mr16702730pgs.516.1653897067974; Mon, 30 May 2022 00:51:07 -0700 (PDT) Received: from FVFYT0MHHV2J.bytedance.net ([2408:8207:18da:2310:2071:e13a:8aa:cacf]) by smtp.gmail.com with ESMTPSA id a23-20020a170902b59700b001616c3bd5c2sm8421381pls.162.2022.05.30.00.50.59 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 30 May 2022 00:51:07 -0700 (PDT) From: Muchun Song To: hannes@cmpxchg.org, mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com, akpm@linux-foundation.org Cc: cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, duanxiongchun@bytedance.com, longman@redhat.com, Muchun Song Subject: [PATCH v5 06/11] mm: thp: make split queue lock safe when LRU pages are reparented Date: Mon, 30 May 2022 15:49:14 +0800 Message-Id: <20220530074919.46352-7-songmuchun@bytedance.com> X-Mailer: git-send-email 2.32.1 (Apple Git-133) In-Reply-To: <20220530074919.46352-1-songmuchun@bytedance.com> References: <20220530074919.46352-1-songmuchun@bytedance.com> MIME-Version: 1.0 X-Stat-Signature: ioabogsax9tpxjxtkkcnbuc5g7z1yarm Authentication-Results: imf29.hostedemail.com; dkim=pass header.d=bytedance-com.20210112.gappssmtp.com header.s=20210112 header.b=oNhG52LE; dmarc=pass (policy=none) header.from=bytedance.com; spf=pass (imf29.hostedemail.com: domain of songmuchun@bytedance.com designates 209.85.215.178 as permitted sender) smtp.mailfrom=songmuchun@bytedance.com X-Rspam-User: X-Rspamd-Server: rspam07 X-Rspamd-Queue-Id: 6ADB1120044 X-HE-Tag: 1653897056-154330 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Similar to the lruvec lock, we use the same approach to make the split queue lock safe when LRU pages are reparented. 
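Spelled out (an illustrative sketch only, condensed from the folio_split_queue_lock() helper added in the diff below; the function name here is just for the example): the split queue is looked up and locked under RCU, then rechecked against folio_memcg(), retrying if the folio was reparented to another queue in between.

/*
 * Sketch of the lock-and-recheck pattern, mirroring folio_split_queue_lock()
 * below; the irqsave variant works the same way.
 */
static struct deferred_split *split_queue_lock_sketch(struct folio *folio)
{
	struct deferred_split *queue;

	rcu_read_lock();
retry:
	queue = folio_split_queue(folio);
	spin_lock(&queue->split_queue_lock);
	if (unlikely(folio_split_queue_memcg(folio, queue) != folio_memcg(folio))) {
		/* Reparented: the folio now belongs to a different split queue. */
		spin_unlock(&queue->split_queue_lock);
		goto retry;
	}
	/* The locked queue now matches folio_memcg(folio); leave the RCU section. */
	rcu_read_unlock();
	return queue;
}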
Signed-off-by: Muchun Song Acked-by: Roman Gushchin --- include/linux/memcontrol.h | 10 ++++ mm/huge_memory.c | 116 +++++++++++++++++++++++++++++++++++---------- 2 files changed, 100 insertions(+), 26 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index e390aaa46776..56227603dcb8 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -1650,6 +1650,11 @@ int alloc_shrinker_info(struct mem_cgroup *memcg); void free_shrinker_info(struct mem_cgroup *memcg); void set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id); void reparent_shrinker_deferred(struct mem_cgroup *memcg); + +static inline int shrinker_id(struct shrinker *shrinker) +{ + return shrinker->id; +} #else #define mem_cgroup_sockets_enabled 0 static inline void mem_cgroup_sk_alloc(struct sock *sk) { }; @@ -1663,6 +1668,11 @@ static inline void set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id) { } + +static inline int shrinker_id(struct shrinker *shrinker) +{ + return -1; +} #endif #ifdef CONFIG_MEMCG_KMEM diff --git a/mm/huge_memory.c b/mm/huge_memory.c index b17b9d25d045..d3411dc291ab 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -503,25 +503,90 @@ pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma) } #ifdef CONFIG_MEMCG -static inline struct deferred_split *get_deferred_split_queue(struct page *page) +static inline struct mem_cgroup *folio_split_queue_memcg(struct folio *folio, + struct deferred_split *queue) { - struct mem_cgroup *memcg = page_memcg(compound_head(page)); - struct pglist_data *pgdat = NODE_DATA(page_to_nid(page)); + if (mem_cgroup_disabled()) + return NULL; + if (&NODE_DATA(folio_nid(folio))->deferred_split_queue == queue) + return NULL; + return container_of(queue, struct mem_cgroup, deferred_split_queue); +} - if (memcg) - return &memcg->deferred_split_queue; - else - return &pgdat->deferred_split_queue; +static inline struct deferred_split *folio_memcg_split_queue(struct folio *folio) +{ + struct mem_cgroup *memcg = folio_memcg(folio); + + return memcg ? &memcg->deferred_split_queue : NULL; } #else -static inline struct deferred_split *get_deferred_split_queue(struct page *page) +static inline struct mem_cgroup *folio_split_queue_memcg(struct folio *folio, + struct deferred_split *queue) { - struct pglist_data *pgdat = NODE_DATA(page_to_nid(page)); + return NULL; +} - return &pgdat->deferred_split_queue; +static inline struct deferred_split *folio_memcg_split_queue(struct folio *folio) +{ + return NULL; } #endif +static struct deferred_split *folio_split_queue(struct folio *folio) +{ + struct deferred_split *queue = folio_memcg_split_queue(folio); + + return queue ? 
: &NODE_DATA(folio_nid(folio))->deferred_split_queue; +} + +static struct deferred_split *folio_split_queue_lock(struct folio *folio) +{ + struct deferred_split *queue; + + rcu_read_lock(); +retry: + queue = folio_split_queue(folio); + spin_lock(&queue->split_queue_lock); + + if (unlikely(folio_split_queue_memcg(folio, queue) != folio_memcg(folio))) { + spin_unlock(&queue->split_queue_lock); + goto retry; + } + rcu_read_unlock(); + + return queue; +} + +static struct deferred_split * +folio_split_queue_lock_irqsave(struct folio *folio, unsigned long *flags) +{ + struct deferred_split *queue; + + rcu_read_lock(); +retry: + queue = folio_split_queue(folio); + spin_lock_irqsave(&queue->split_queue_lock, *flags); + + if (unlikely(folio_split_queue_memcg(folio, queue) != folio_memcg(folio))) { + spin_unlock_irqrestore(&queue->split_queue_lock, *flags); + goto retry; + } + rcu_read_unlock(); + + return queue; +} + +static inline void split_queue_unlock(struct deferred_split *queue) +{ + spin_unlock(&queue->split_queue_lock); +} + +static inline void split_queue_unlock_irqrestore(struct deferred_split *queue, + unsigned long flags) +{ + spin_unlock_irqrestore(&queue->split_queue_lock, flags); +} + void prep_transhuge_page(struct page *page) { /* @@ -2489,7 +2554,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list) { struct folio *folio = page_folio(page); struct page *head = &folio->page; - struct deferred_split *ds_queue = get_deferred_split_queue(head); + struct deferred_split *ds_queue; XA_STATE(xas, &head->mapping->i_pages, head->index); struct anon_vma *anon_vma = NULL; struct address_space *mapping = NULL; @@ -2581,13 +2646,13 @@ int split_huge_page_to_list(struct page *page, struct list_head *list) } /* Prevent deferred_split_scan() touching ->_refcount */ - spin_lock(&ds_queue->split_queue_lock); + ds_queue = folio_split_queue_lock(folio); if (page_ref_freeze(head, 1 + extra_pins)) { if (!list_empty(page_deferred_list(head))) { ds_queue->split_queue_len--; list_del(page_deferred_list(head)); } - spin_unlock(&ds_queue->split_queue_lock); + split_queue_unlock(ds_queue); if (mapping) { int nr = thp_nr_pages(head); @@ -2605,7 +2670,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list) __split_huge_page(page, list, end); ret = 0; } else { - spin_unlock(&ds_queue->split_queue_lock); + split_queue_unlock(ds_queue); fail: if (mapping) xas_unlock(&xas); @@ -2630,25 +2695,23 @@ int split_huge_page_to_list(struct page *page, struct list_head *list) void free_transhuge_page(struct page *page) { - struct deferred_split *ds_queue = get_deferred_split_queue(page); + struct deferred_split *ds_queue; unsigned long flags; - spin_lock_irqsave(&ds_queue->split_queue_lock, flags); + ds_queue = folio_split_queue_lock_irqsave(page_folio(page), &flags); if (!list_empty(page_deferred_list(page))) { ds_queue->split_queue_len--; list_del(page_deferred_list(page)); } - spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags); + split_queue_unlock_irqrestore(ds_queue, flags); free_compound_page(page); } void deferred_split_huge_page(struct page *page) { - struct deferred_split *ds_queue = get_deferred_split_queue(page); -#ifdef CONFIG_MEMCG - struct mem_cgroup *memcg = page_memcg(compound_head(page)); -#endif + struct deferred_split *ds_queue; unsigned long flags; + struct folio *folio = page_folio(page); VM_BUG_ON_PAGE(!PageTransHuge(page), page); @@ -2665,18 +2728,19 @@ void deferred_split_huge_page(struct page *page) if (PageSwapCache(page)) return; - 
spin_lock_irqsave(&ds_queue->split_queue_lock, flags); + ds_queue = folio_split_queue_lock_irqsave(folio, &flags); if (list_empty(page_deferred_list(page))) { + struct mem_cgroup *memcg; + + memcg = folio_split_queue_memcg(folio, ds_queue); count_vm_event(THP_DEFERRED_SPLIT_PAGE); list_add_tail(page_deferred_list(page), &ds_queue->split_queue); ds_queue->split_queue_len++; -#ifdef CONFIG_MEMCG if (memcg) set_shrinker_bit(memcg, page_to_nid(page), - deferred_split_shrinker.id); -#endif + shrinker_id(&deferred_split_shrinker)); } - spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags); + split_queue_unlock_irqrestore(ds_queue, flags); } static unsigned long deferred_split_count(struct shrinker *shrink, From patchwork Mon May 30 07:49:15 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Muchun Song X-Patchwork-Id: 12864357 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id E8CD5C433F5 for ; Mon, 30 May 2022 07:51:19 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 8B1B88D000B; Mon, 30 May 2022 03:51:19 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 8164D8D0001; Mon, 30 May 2022 03:51:19 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 6B4B38D000B; Mon, 30 May 2022 03:51:19 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0016.hostedemail.com [216.40.44.16]) by kanga.kvack.org (Postfix) with ESMTP id 534D98D0001 for ; Mon, 30 May 2022 03:51:19 -0400 (EDT) Received: from smtpin19.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay01.hostedemail.com (Postfix) with ESMTP id 1CCF660916 for ; Mon, 30 May 2022 07:51:19 +0000 (UTC) X-FDA: 79521639078.19.8B529C5 Received: from mail-pj1-f53.google.com (mail-pj1-f53.google.com [209.85.216.53]) by imf30.hostedemail.com (Postfix) with ESMTP id 6B9E180048 for ; Mon, 30 May 2022 07:50:45 +0000 (UTC) Received: by mail-pj1-f53.google.com with SMTP id cx11so1903108pjb.1 for ; Mon, 30 May 2022 00:51:18 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=bytedance-com.20210112.gappssmtp.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=IOnakioNTpkftz5TEEpxPkx+AXEcUhIQ+xBmxegLLLk=; b=G7UlzCFpeRyuujxoet9lrU1xhyW7FqRaPoHxhPWDapAkZHgzLVWtZvPYnkDrlFlULX RUwmuVgCUVLHss0KzevDkqrBPiahzVKv9axDXDIuimp7R7/aChPMdh+0G7dY3RE/Aqsw NA09if8i2KGe3yjtHL1j3LAcjME/OtX2A874ONr2TLKEKqGxxYvEN4DtXCouNvzV+Cdh sa0np4Yvv5OWhB8gFrkepTeti4iAWZmUC2ExdUtJzYwzoqZV3Bo4cPqdRoT1ezd4N8ye EDN2VujblGS7sShPM+Hk6rFUlmE/iFU5Zdm4MZy0LnkZXA/2P79JRNnSy1Ve7oUXzRJB JFeA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=IOnakioNTpkftz5TEEpxPkx+AXEcUhIQ+xBmxegLLLk=; b=xSBf1i/riTibXcIlVWm5ZYh/DUySpqYfKu2XKuFS14FCyXyepmEabTFBOPIYLvmang XLZuNxVBiG9hFWEHlBRMXuoHuENCKx5zBKKo2HoMCidJ9VNM9PN0caqT9QbdbbsFUBYq xF4zzhkAKSQx0I5gHzuEYMpkRRHL85z+eWbbbLZHOk64JY+d/0CXWdI/MdYj1WA0xfOG VYp54fE7wFQ1xwFa1a+bay+bpHr794ZTmuRDPSR51hYBkNG+0strVf81oyd3XXBa2AtN HHO4ttBKv5VYwTN3jmmaoPLy+AyOaTYsrnHDCV1THXJZPCrqXPIJpbxti45CGf7T/Xr1 wt9A== X-Gm-Message-State: 
From: Muchun Song
To: hannes@cmpxchg.org, mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com, akpm@linux-foundation.org
Cc: cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, duanxiongchun@bytedance.com, longman@redhat.com, Muchun Song
Subject: [PATCH v5 07/11] mm: memcontrol: make all the callers of {folio,page}_memcg() safe
Date: Mon, 30 May 2022 15:49:15 +0800
Message-Id: <20220530074919.46352-8-songmuchun@bytedance.com>
In-Reply-To: <20220530074919.46352-1-songmuchun@bytedance.com>
References: <20220530074919.46352-1-songmuchun@bytedance.com>

When the objcg APIs are used to charge LRU pages, a page no longer holds a reference to the memcg it is associated with. A caller of {folio,page}_memcg() therefore has to hold an rcu read lock, or obtain a reference on the memcg, to keep the memcg from being released while it is used. Introduce get_mem_cgroup_from_{page,folio}() to obtain such a reference, and convert all the callers to hold an rcu read lock or a memcg reference so that the memcg cannot be released while LRU pages are reparented. The callers of {folio,page}_memcg() in mem_cgroup_move_task() need no adjustment: cgroup migration and memory cgroup offlining are serialized by cgroup_mutex, so LRU pages cannot be reparented to their parent memory cgroup during that path, and the memcg returned by {folio,page}_memcg() there is stable and cannot be released. This is a preparation for reparenting the LRU pages.
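As a rough illustration of the two calling conventions this patch establishes (the enclosing functions below are hypothetical; only page_memcg(), get_mem_cgroup_from_page(), count_memcg_events() and mem_cgroup_put() are real interfaces):

        /* Pattern 1: a short read-only access under the RCU read lock. */
        static void example_count_event(struct page *page)
        {
                struct mem_cgroup *memcg;

                rcu_read_lock();
                memcg = page_memcg(page);       /* may be reparented after unlock */
                if (memcg)
                        count_memcg_events(memcg, PGPGOUT, 1);
                rcu_read_unlock();
        }

        /* Pattern 2: pin the memcg with a reference across a longer section. */
        static void example_long_access(struct page *page)
        {
                struct mem_cgroup *memcg = get_mem_cgroup_from_page(page);

                if (!memcg)
                        return;
                /* ... memcg stays valid here even if the page is reparented ... */
                mem_cgroup_put(memcg);
        }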
Signed-off-by: Muchun Song Acked-by: Roman Gushchin --- fs/buffer.c | 4 +-- fs/fs-writeback.c | 23 +++++++------- include/linux/memcontrol.h | 61 +++++++++++++++++++++++++++++++---- include/trace/events/writeback.h | 5 +++ mm/memcontrol.c | 68 +++++++++++++++++++++++++++++----------- mm/migrate.c | 4 +++ mm/page_io.c | 5 +-- 7 files changed, 131 insertions(+), 39 deletions(-) diff --git a/fs/buffer.c b/fs/buffer.c index 2b5561ae5d0b..80975a457670 100644 --- a/fs/buffer.c +++ b/fs/buffer.c @@ -819,8 +819,7 @@ struct buffer_head *alloc_page_buffers(struct page *page, unsigned long size, if (retry) gfp |= __GFP_NOFAIL; - /* The page lock pins the memcg */ - memcg = page_memcg(page); + memcg = get_mem_cgroup_from_page(page); old_memcg = set_active_memcg(memcg); head = NULL; @@ -840,6 +839,7 @@ struct buffer_head *alloc_page_buffers(struct page *page, unsigned long size, set_bh_page(bh, page, offset); } out: + mem_cgroup_put(memcg); set_active_memcg(old_memcg); return head; /* diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c index 1fae0196292a..56612ace8778 100644 --- a/fs/fs-writeback.c +++ b/fs/fs-writeback.c @@ -243,15 +243,13 @@ void __inode_attach_wb(struct inode *inode, struct page *page) if (inode_cgwb_enabled(inode)) { struct cgroup_subsys_state *memcg_css; - if (page) { - memcg_css = mem_cgroup_css_from_page(page); - wb = wb_get_create(bdi, memcg_css, GFP_ATOMIC); - } else { - /* must pin memcg_css, see wb_get_create() */ + /* must pin memcg_css, see wb_get_create() */ + if (page) + memcg_css = get_mem_cgroup_css_from_page(page); + else memcg_css = task_get_css(current, memory_cgrp_id); - wb = wb_get_create(bdi, memcg_css, GFP_ATOMIC); - css_put(memcg_css); - } + wb = wb_get_create(bdi, memcg_css, GFP_ATOMIC); + css_put(memcg_css); } if (!wb) @@ -868,16 +866,16 @@ void wbc_account_cgroup_owner(struct writeback_control *wbc, struct page *page, if (!wbc->wb || wbc->no_cgroup_owner) return; - css = mem_cgroup_css_from_page(page); + css = get_mem_cgroup_css_from_page(page); /* dead cgroups shouldn't contribute to inode ownership arbitration */ if (!(css->flags & CSS_ONLINE)) - return; + goto out; id = css->id; if (id == wbc->wb_id) { wbc->wb_bytes += bytes; - return; + goto out; } if (id == wbc->wb_lcand_id) @@ -890,6 +888,9 @@ void wbc_account_cgroup_owner(struct writeback_control *wbc, struct page *page, wbc->wb_tcand_bytes += bytes; else wbc->wb_tcand_bytes -= min(bytes, wbc->wb_tcand_bytes); + +out: + css_put(css); } EXPORT_SYMBOL_GPL(wbc_account_cgroup_owner); diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 56227603dcb8..16464116f94a 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -373,7 +373,7 @@ static inline bool folio_memcg_kmem(struct folio *folio); * a valid memcg, but can be atomically swapped to the parent memcg. * * The caller must ensure that the returned memcg won't be released: - * e.g. acquire the rcu_read_lock or css_set_lock. + * e.g. acquire the rcu_read_lock or objcg_lock or cgroup_mutex. */ static inline struct mem_cgroup *obj_cgroup_memcg(struct obj_cgroup *objcg) { @@ -439,8 +439,8 @@ static inline struct obj_cgroup *__folio_objcg(struct folio *folio) * - lock_page_memcg() * - exclusive reference * - * For a kmem folio a caller should hold an rcu read lock to protect memcg - * associated with a kmem folio from being released. + * Note: The caller should hold an rcu read lock to protect memcg associated + * with a folio from being released. 
*/ static inline struct mem_cgroup *folio_memcg(struct folio *folio) { @@ -449,12 +449,48 @@ static inline struct mem_cgroup *folio_memcg(struct folio *folio) return __folio_memcg(folio); } +/* + * page_memcg - Get the memory cgroup associated with a page. + * @page: Pointer to the page. + * + * See the cooments in folio_memcg(). + */ static inline struct mem_cgroup *page_memcg(struct page *page) { return folio_memcg(page_folio(page)); } -/** +/* + * get_mem_cgroup_from_folio - Obtain a reference on the memory cgroup + * associated with a folio. + * @folio: Pointer to the folio. + * + * Returns a pointer to the memory cgroup (and obtain a reference on it) + * associated with the folio, or NULL. This function assumes that the + * folio is known to have a proper memory cgroup pointer. It's not safe + * to call this function against some type of pages, e.g. slab pages or + * ex-slab pages. + */ +static inline struct mem_cgroup *get_mem_cgroup_from_folio(struct folio *folio) +{ + struct mem_cgroup *memcg; + + rcu_read_lock(); +retry: + memcg = folio_memcg(folio); + if (unlikely(memcg && !css_tryget(&memcg->css))) + goto retry; + rcu_read_unlock(); + + return memcg; +} + +static inline struct mem_cgroup *get_mem_cgroup_from_page(struct page *page) +{ + return get_mem_cgroup_from_folio(page_folio(page)); +} + +/* * folio_memcg_rcu - Locklessly get the memory cgroup associated with a folio. * @folio: Pointer to the folio. * @@ -873,7 +909,7 @@ static inline bool mm_match_cgroup(struct mm_struct *mm, return match; } -struct cgroup_subsys_state *mem_cgroup_css_from_page(struct page *page); +struct cgroup_subsys_state *get_mem_cgroup_css_from_page(struct page *page); ino_t page_cgroup_ino(struct page *page); static inline bool mem_cgroup_online(struct mem_cgroup *memcg) @@ -1047,10 +1083,13 @@ static inline void count_memcg_events(struct mem_cgroup *memcg, static inline void count_memcg_page_event(struct page *page, enum vm_event_item idx) { - struct mem_cgroup *memcg = page_memcg(page); + struct mem_cgroup *memcg; + rcu_read_lock(); + memcg = page_memcg(page); if (memcg) count_memcg_events(memcg, idx, 1); + rcu_read_unlock(); } static inline void count_memcg_event_mm(struct mm_struct *mm, @@ -1129,6 +1168,16 @@ static inline struct mem_cgroup *page_memcg(struct page *page) return NULL; } +static inline struct mem_cgroup *get_mem_cgroup_from_folio(struct folio *folio) +{ + return NULL; +} + +static inline struct mem_cgroup *get_mem_cgroup_from_page(struct page *page) +{ + return NULL; +} + static inline struct mem_cgroup *folio_memcg_rcu(struct folio *folio) { WARN_ON_ONCE(!rcu_read_lock_held()); diff --git a/include/trace/events/writeback.h b/include/trace/events/writeback.h index 86b2a82da546..cdb822339f13 100644 --- a/include/trace/events/writeback.h +++ b/include/trace/events/writeback.h @@ -258,6 +258,11 @@ TRACE_EVENT(track_foreign_dirty, __entry->ino = inode ? inode->i_ino : 0; __entry->memcg_id = wb->memcg_css->id; __entry->cgroup_ino = __trace_wb_assign_cgroup(wb); + /* + * TP_fast_assign() is under preemption disabled which can + * serve as an RCU read-side critical section so that the + * memcg returned by folio_memcg() cannot be freed. 
+ */ __entry->page_cgroup_ino = cgroup_ino(folio_memcg(folio)->css.cgroup); ), diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 9d98a791353c..4cc392741753 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -371,7 +371,7 @@ EXPORT_SYMBOL(memcg_kmem_enabled_key); #endif /** - * mem_cgroup_css_from_page - css of the memcg associated with a page + * get_mem_cgroup_css_from_page - get css of the memcg associated with a page * @page: page of interest * * If memcg is bound to the default hierarchy, css of the memcg associated @@ -381,13 +381,15 @@ EXPORT_SYMBOL(memcg_kmem_enabled_key); * If memcg is bound to a traditional hierarchy, the css of root_mem_cgroup * is returned. */ -struct cgroup_subsys_state *mem_cgroup_css_from_page(struct page *page) +struct cgroup_subsys_state *get_mem_cgroup_css_from_page(struct page *page) { struct mem_cgroup *memcg; - memcg = page_memcg(page); + if (!cgroup_subsys_on_dfl(memory_cgrp_subsys)) + return &root_mem_cgroup->css; - if (!memcg || !cgroup_subsys_on_dfl(memory_cgrp_subsys)) + memcg = get_mem_cgroup_from_page(page); + if (!memcg) memcg = root_mem_cgroup; return &memcg->css; @@ -770,13 +772,13 @@ void __mod_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx, void __mod_lruvec_page_state(struct page *page, enum node_stat_item idx, int val) { - struct page *head = compound_head(page); /* rmap on tail pages */ + struct folio *folio = page_folio(page); /* rmap on tail pages */ struct mem_cgroup *memcg; pg_data_t *pgdat = page_pgdat(page); struct lruvec *lruvec; rcu_read_lock(); - memcg = page_memcg(head); + memcg = folio_memcg(folio); /* Untracked pages have no memcg, no lruvec. Update only the node */ if (!memcg) { rcu_read_unlock(); @@ -2049,7 +2051,9 @@ void folio_memcg_lock(struct folio *folio) * The RCU lock is held throughout the transaction. The fast * path can get away without acquiring the memcg->move_lock * because page moving starts with an RCU grace period. - */ + * + * The RCU lock also protects the memcg from being freed. + */ rcu_read_lock(); if (mem_cgroup_disabled()) @@ -3287,7 +3291,7 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size) void split_page_memcg(struct page *head, unsigned int nr) { struct folio *folio = page_folio(head); - struct mem_cgroup *memcg = folio_memcg(folio); + struct mem_cgroup *memcg = get_mem_cgroup_from_folio(folio); int i; if (mem_cgroup_disabled() || !memcg) @@ -3300,6 +3304,8 @@ void split_page_memcg(struct page *head, unsigned int nr) obj_cgroup_get_many(__folio_objcg(folio), nr - 1); else css_get_many(&memcg->css, nr - 1); + + css_put(&memcg->css); } #ifdef CONFIG_MEMCG_SWAP @@ -4496,7 +4502,7 @@ void mem_cgroup_wb_stats(struct bdi_writeback *wb, unsigned long *pfilepages, void mem_cgroup_track_foreign_dirty_slowpath(struct folio *folio, struct bdi_writeback *wb) { - struct mem_cgroup *memcg = folio_memcg(folio); + struct mem_cgroup *memcg = get_mem_cgroup_from_folio(folio); struct memcg_cgwb_frn *frn; u64 now = get_jiffies_64(); u64 oldest_at = now; @@ -4543,6 +4549,7 @@ void mem_cgroup_track_foreign_dirty_slowpath(struct folio *folio, frn->memcg_id = wb->memcg_css->id; frn->at = now; } + css_put(&memcg->css); } /* issue foreign writeback flushes for recorded foreign dirtying events */ @@ -6077,6 +6084,14 @@ static void mem_cgroup_move_charge(void) atomic_dec(&mc.from->moving_account); } +/* + * The cgroup migration and memory cgroup offlining are serialized by + * @cgroup_mutex. If we reach here, it means that the LRU pages cannot + * be reparented to its parent memory cgroup. 
So during the whole process + * of mem_cgroup_move_task(), page_memcg(page) is stable. So we do not + * need to worry about the memcg (returned from page_memcg()) being + * released even if we do not hold an rcu read lock. + */ static void mem_cgroup_move_task(void) { if (mc.to) { @@ -6876,7 +6891,7 @@ void mem_cgroup_migrate(struct folio *old, struct folio *new) if (folio_memcg(new)) return; - memcg = folio_memcg(old); + memcg = get_mem_cgroup_from_folio(old); VM_WARN_ON_ONCE_FOLIO(!memcg, old); if (!memcg) return; @@ -6895,6 +6910,8 @@ void mem_cgroup_migrate(struct folio *old, struct folio *new) mem_cgroup_charge_statistics(memcg, nr_pages); memcg_check_events(memcg, folio_nid(new)); local_irq_restore(flags); + + css_put(&memcg->css); } DEFINE_STATIC_KEY_FALSE(memcg_sockets_enabled_key); @@ -7079,6 +7096,10 @@ void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry) if (cgroup_subsys_on_dfl(memory_cgrp_subsys)) return; + /* + * Interrupts should be disabled by the caller (see the comments below), + * which can serve as RCU read-side critical sections. + */ memcg = folio_memcg(folio); VM_WARN_ON_ONCE_FOLIO(!memcg, folio); @@ -7140,19 +7161,21 @@ int __mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry) struct page_counter *counter; struct mem_cgroup *memcg; unsigned short oldid; + int ret = 0; if (!cgroup_subsys_on_dfl(memory_cgrp_subsys)) return 0; + rcu_read_lock(); memcg = page_memcg(page); VM_WARN_ON_ONCE_PAGE(!memcg, page); if (!memcg) - return 0; + goto out; if (!entry.val) { memcg_memory_event(memcg, MEMCG_SWAP_FAIL); - return 0; + goto out; } memcg = mem_cgroup_id_get_online(memcg); @@ -7162,7 +7185,8 @@ int __mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry) memcg_memory_event(memcg, MEMCG_SWAP_MAX); memcg_memory_event(memcg, MEMCG_SWAP_FAIL); mem_cgroup_id_put(memcg); - return -ENOMEM; + ret = -ENOMEM; + goto out; } /* Get references for the tail pages, too */ @@ -7171,8 +7195,10 @@ int __mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry) oldid = swap_cgroup_record(entry, mem_cgroup_id(memcg), nr_pages); VM_BUG_ON_PAGE(oldid, page); mod_memcg_state(memcg, MEMCG_SWAP, nr_pages); +out: + rcu_read_unlock(); - return 0; + return ret; } /** @@ -7217,6 +7243,7 @@ long mem_cgroup_get_nr_swap_pages(struct mem_cgroup *memcg) bool mem_cgroup_swap_full(struct page *page) { struct mem_cgroup *memcg; + bool ret = false; VM_BUG_ON_PAGE(!PageLocked(page), page); @@ -7225,19 +7252,24 @@ bool mem_cgroup_swap_full(struct page *page) if (cgroup_memory_noswap || !cgroup_subsys_on_dfl(memory_cgrp_subsys)) return false; + rcu_read_lock(); memcg = page_memcg(page); if (!memcg) - return false; + goto out; for (; memcg != root_mem_cgroup; memcg = parent_mem_cgroup(memcg)) { unsigned long usage = page_counter_read(&memcg->swap); if (usage * 2 >= READ_ONCE(memcg->swap.high) || - usage * 2 >= READ_ONCE(memcg->swap.max)) - return true; + usage * 2 >= READ_ONCE(memcg->swap.max)) { + ret = true; + goto out; + } } +out: + rcu_read_unlock(); - return false; + return ret; } static int __init setup_swap_account(char *s) diff --git a/mm/migrate.c b/mm/migrate.c index 6c31ee1e1c9b..59e97a8a64a0 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -430,6 +430,10 @@ int folio_migrate_mapping(struct address_space *mapping, struct lruvec *old_lruvec, *new_lruvec; struct mem_cgroup *memcg; + /* + * Irq is disabled, which can serve as RCU read-side critical + * sections. 
+ */ memcg = folio_memcg(folio); old_lruvec = mem_cgroup_lruvec(memcg, oldzone->zone_pgdat); new_lruvec = mem_cgroup_lruvec(memcg, newzone->zone_pgdat); diff --git a/mm/page_io.c b/mm/page_io.c index 89fbf3cae30f..a0d9cd68e87a 100644 --- a/mm/page_io.c +++ b/mm/page_io.c @@ -221,13 +221,14 @@ static void bio_associate_blkg_from_page(struct bio *bio, struct page *page) struct cgroup_subsys_state *css; struct mem_cgroup *memcg; + rcu_read_lock(); memcg = page_memcg(page); if (!memcg) - return; + goto out; - rcu_read_lock(); css = cgroup_e_css(memcg->css.cgroup, &io_cgrp_subsys); bio_associate_blkg_from_css(bio, css); +out: rcu_read_unlock(); } #else From patchwork Mon May 30 07:49:16 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Muchun Song X-Patchwork-Id: 12864358 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 62072C433FE for ; Mon, 30 May 2022 07:51:28 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 088468D000C; Mon, 30 May 2022 03:51:28 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 035FC8D0001; Mon, 30 May 2022 03:51:27 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id E3E1A8D000C; Mon, 30 May 2022 03:51:27 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0016.hostedemail.com [216.40.44.16]) by kanga.kvack.org (Postfix) with ESMTP id D69688D0001 for ; Mon, 30 May 2022 03:51:27 -0400 (EDT) Received: from smtpin12.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay13.hostedemail.com (Postfix) with ESMTP id B616660E99 for ; Mon, 30 May 2022 07:51:27 +0000 (UTC) X-FDA: 79521639414.12.B8DC024 Received: from mail-pj1-f47.google.com (mail-pj1-f47.google.com [209.85.216.47]) by imf09.hostedemail.com (Postfix) with ESMTP id 0AE4414003F for ; Mon, 30 May 2022 07:51:12 +0000 (UTC) Received: by mail-pj1-f47.google.com with SMTP id n13-20020a17090a394d00b001e30a60f82dso1281518pjf.5 for ; Mon, 30 May 2022 00:51:27 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=bytedance-com.20210112.gappssmtp.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=KV6RbMM8w6wD8F4JQPmwSzYM0iR92dgQE9lDfN51JOg=; b=OD2hATG9TOBEVqaSLbbb7DYN7E8AxGheax8qlgspCPL4scsq8K8ZDBJOLH6UR68odC s2KrbB/vRF3Bw+pYrdYCdp60/yeTrEYzRqbRKi8mCN0RtzUDev0NTCWYwqBZxmc2wc+n xSzVbVM9KwQdSk51ksO9a4BBDdf5ZwsFojHgLgUQaUx/Ls26AttA2JUg7SWXFBumtoYG fj3u7jLmMbfOQXtlNBi+WO+WEoIHGKj6wXpZLWfcDUfZcjqV+/WPOCBFIWgKk9P24235 7c9W84Xep8nVv4TwBL/lczB1wY+BTC1hu+9N4zYnuxQEn02sXN2tnwfUZlH3S/BQeLRJ 94cg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=KV6RbMM8w6wD8F4JQPmwSzYM0iR92dgQE9lDfN51JOg=; b=xIXxAGQ/m+hoy55EuEjsCBXWtWlSvcx7rP46k2rBblqwayau5e5ScoyoqlHidaEQYc zrCs1IbqKifT357cp1GlB6gmOSyZvCc9PUmaqlWJcd+LKZtAQd2QUOWx/H5YoWHHWQFe uradxca5BhAVwPDjydNYy3cTf/0ccyJf1XfVhxuAA2GD/1yxbaT5EroUn2E7spT+9SOg 9xLEYA+GtsNlM+zjdZjLJMF3SFrLsD7kLgjSRQ0HmDsuyBLRg5WMKHw7F3WCov+RgJvb rLvYQSdcuPgK/3TZ8ne6idSPTFP5Net7K+I/WRKAHHd/xpRMNRbdhlv5tl0YsrxOKNtH QxWg== X-Gm-Message-State: AOAM530tyKEeg3JaB7umSNb5QmGLqIsZkJFhbswKLuJ97Y9CidxIoqcD 
From: Muchun Song
To: hannes@cmpxchg.org, mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com, akpm@linux-foundation.org
Cc: cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, duanxiongchun@bytedance.com, longman@redhat.com, Muchun Song
Subject: [PATCH v5 08/11] mm: memcontrol: introduce memcg_reparent_ops
Date: Mon, 30 May 2022 15:49:16 +0800
Message-Id: <20220530074919.46352-9-songmuchun@bytedance.com>
In-Reply-To: <20220530074919.46352-1-songmuchun@bytedance.com>
References: <20220530074919.46352-1-songmuchun@bytedance.com>

The previous patch showed how to make the lruvec lock safe when LRU pages are reparented; memcg_reparent_objcgs() has to do something like the following.

memcg_reparent_objcgs(memcg)
    1) lock
       // lruvec belongs to memcg and lruvec_parent belongs to parent memcg.
       spin_lock(&lruvec->lru_lock);
       spin_lock(&lruvec_parent->lru_lock);
    2) relocate from current memcg to its parent
       // Move all the pages from the lruvec list to the parent lruvec list.
    3) unlock
       spin_unlock(&lruvec_parent->lru_lock);
       spin_unlock(&lruvec->lru_lock);

Apart from the page lruvec lock, the deferred split queue lock (THP only) needs the same treatment. So extract the necessary three steps in memcg_reparent_objcgs() into a set of memcg_reparent_ops callbacks:

memcg_reparent_objcgs(memcg)
    1) lock
       memcg_reparent_ops->lock(memcg, parent);
    2) relocate
       memcg_reparent_ops->relocate(memcg, parent);
    3) unlock
       memcg_reparent_ops->unlock(memcg, parent);

There are now two different locks (the lruvec lock and the deferred split queue lock) that need this infrastructure; the next patch uses these APIs to make both locks safe when LRU pages are reparented.
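For example, a new per-memcg structure would plug into this infrastructure roughly as follows (a sketch only: "foo", foo_lock and foo_list are made up for illustration and are not part of this series):

        /* Interrupts are already disabled when the reparent callbacks run. */
        static void foo_reparent_lock(struct mem_cgroup *src, struct mem_cgroup *dst)
        {
                spin_lock(&src->foo_lock);
                spin_lock_nested(&dst->foo_lock, SINGLE_DEPTH_NESTING);
        }

        static void foo_reparent_relocate(struct mem_cgroup *src, struct mem_cgroup *dst)
        {
                /* Move src's state over to dst while both locks are held. */
                list_splice_tail_init(&src->foo_list, &dst->foo_list);
        }

        static void foo_reparent_unlock(struct mem_cgroup *src, struct mem_cgroup *dst)
        {
                spin_unlock(&dst->foo_lock);
                spin_unlock(&src->foo_lock);
        }

        static DEFINE_MEMCG_REPARENT_OPS(foo);

        /* &memcg_foo_reparent_ops would then be added to the memcg_reparent_ops[] array. */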
Signed-off-by: Muchun Song --- include/linux/memcontrol.h | 20 +++++++++++++++ mm/memcontrol.c | 62 ++++++++++++++++++++++++++++++++++++---------- 2 files changed, 69 insertions(+), 13 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 16464116f94a..c2ac98a0ece4 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -347,6 +347,26 @@ struct mem_cgroup { struct mem_cgroup_per_node *nodeinfo[]; }; +struct memcg_reparent_ops { + /* + * Note that interrupt is disabled before calling those callbacks, + * so the interrupt should remain disabled when leaving those callbacks. + */ + void (*lock)(struct mem_cgroup *src, struct mem_cgroup *dst); + void (*relocate)(struct mem_cgroup *src, struct mem_cgroup *dst); + void (*unlock)(struct mem_cgroup *src, struct mem_cgroup *dst); +}; + +#define DEFINE_MEMCG_REPARENT_OPS(name) \ + const struct memcg_reparent_ops memcg_##name##_reparent_ops = { \ + .lock = name##_reparent_lock, \ + .relocate = name##_reparent_relocate, \ + .unlock = name##_reparent_unlock, \ + } + +#define DECLARE_MEMCG_REPARENT_OPS(name) \ + extern const struct memcg_reparent_ops memcg_##name##_reparent_ops + /* * size of first charge trial. "32" comes from vmscan.c's magic value. * TODO: maybe necessary to use big numbers in big irons. diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 4cc392741753..059188eeb80c 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -337,24 +337,60 @@ static struct obj_cgroup *obj_cgroup_alloc(void) return objcg; } -static void memcg_reparent_objcgs(struct mem_cgroup *memcg) +static void objcg_reparent_lock(struct mem_cgroup *src, struct mem_cgroup *dst) +{ + spin_lock(&objcg_lock); +} + +static void objcg_reparent_relocate(struct mem_cgroup *src, struct mem_cgroup *dst) { struct obj_cgroup *objcg, *iter; - struct mem_cgroup *parent = parent_mem_cgroup(memcg); - objcg = rcu_replace_pointer(memcg->objcg, NULL, true); + objcg = rcu_replace_pointer(src->objcg, NULL, true); + /* 1) Ready to reparent active objcg. */ + list_add(&objcg->list, &src->objcg_list); + /* 2) Reparent active objcg and already reparented objcgs to dst. */ + list_for_each_entry(iter, &src->objcg_list, list) + WRITE_ONCE(iter->memcg, dst); + /* 3) Move already reparented objcgs to the dst's list */ + list_splice(&src->objcg_list, &dst->objcg_list); +} + +static void objcg_reparent_unlock(struct mem_cgroup *src, struct mem_cgroup *dst) +{ + spin_unlock(&objcg_lock); +} - spin_lock_irq(&objcg_lock); +static DEFINE_MEMCG_REPARENT_OPS(objcg); - /* 1) Ready to reparent active objcg. */ - list_add(&objcg->list, &memcg->objcg_list); - /* 2) Reparent active objcg and already reparented objcgs to parent. 
*/ - list_for_each_entry(iter, &memcg->objcg_list, list) - WRITE_ONCE(iter->memcg, parent); - /* 3) Move already reparented objcgs to the parent's list */ - list_splice(&memcg->objcg_list, &parent->objcg_list); - - spin_unlock_irq(&objcg_lock); +static const struct memcg_reparent_ops *memcg_reparent_ops[] = { + &memcg_objcg_reparent_ops, +}; + +#define DEFINE_MEMCG_REPARENT_FUNC(phase) \ + static void memcg_reparent_##phase(struct mem_cgroup *src, \ + struct mem_cgroup *dst) \ + { \ + int i; \ + \ + for (i = 0; i < ARRAY_SIZE(memcg_reparent_ops); i++) \ + memcg_reparent_ops[i]->phase(src, dst); \ + } + +DEFINE_MEMCG_REPARENT_FUNC(lock) +DEFINE_MEMCG_REPARENT_FUNC(relocate) +DEFINE_MEMCG_REPARENT_FUNC(unlock) + +static void memcg_reparent_objcgs(struct mem_cgroup *src) +{ + struct mem_cgroup *dst = parent_mem_cgroup(src); + struct obj_cgroup *objcg = rcu_dereference_protected(src->objcg, true); + + local_irq_disable(); + memcg_reparent_lock(src, dst); + memcg_reparent_relocate(src, dst); + memcg_reparent_unlock(src, dst); + local_irq_enable(); percpu_ref_kill(&objcg->refcnt); } From patchwork Mon May 30 07:49:17 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Muchun Song X-Patchwork-Id: 12864359 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id AE4DBC433F5 for ; Mon, 30 May 2022 07:51:38 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 48F558D000D; Mon, 30 May 2022 03:51:38 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 41AEF8D0001; Mon, 30 May 2022 03:51:38 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 26C1F8D000D; Mon, 30 May 2022 03:51:38 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0015.hostedemail.com [216.40.44.15]) by kanga.kvack.org (Postfix) with ESMTP id 117778D0001 for ; Mon, 30 May 2022 03:51:38 -0400 (EDT) Received: from smtpin07.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay11.hostedemail.com (Postfix) with ESMTP id D6ECE80E48 for ; Mon, 30 May 2022 07:51:37 +0000 (UTC) X-FDA: 79521639834.07.39A681F Received: from mail-pj1-f51.google.com (mail-pj1-f51.google.com [209.85.216.51]) by imf30.hostedemail.com (Postfix) with ESMTP id 91CC98004D for ; Mon, 30 May 2022 07:51:04 +0000 (UTC) Received: by mail-pj1-f51.google.com with SMTP id j7so4391486pjn.4 for ; Mon, 30 May 2022 00:51:37 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=bytedance-com.20210112.gappssmtp.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=HHHwRXT4NPA3dNEWLRGKZUfTyeAeKsYLTuXrhRkWfDM=; b=yUDMo/O91Wcss4NFsx1qSVoD+u+E4a7NTlFOwfAYtdDPAyAOM4RLYHUyfmKWSN6L5H wbJ8HkzAyGQs9WLmn/dPcH5/btuQQX3gnAgxe9c/8mcjYsLfBjQF7/c/u57m3iZElYTX kvnS72J1bhZrhf4dAMcv/1wWdoFB7ufe/JKc2NttPWcqnX6xcc9JlDUv/5v3GAVXC1xV fiqk66kQaGe53jU1NbeFfaDjNSSuoiFHSqpw0UBOOLNbWl9xuyEVzvIDTJWNgdLpuXj7 8wTXz6XJ9hdi9HH6ffeCxIv5Yl7tZD+pT20hfopsB/Jkls8VLfSSQXFcaSdl4om5cHrW tYzg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=HHHwRXT4NPA3dNEWLRGKZUfTyeAeKsYLTuXrhRkWfDM=; 
From: Muchun Song
To: hannes@cmpxchg.org, mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com, akpm@linux-foundation.org
Cc: cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, duanxiongchun@bytedance.com, longman@redhat.com, Muchun Song
Subject: [PATCH v5 09/11] mm: memcontrol: use obj_cgroup APIs to charge the LRU pages
Date: Mon, 30 May 2022 15:49:17 +0800
Message-Id: <20220530074919.46352-10-songmuchun@bytedance.com>
In-Reply-To: <20220530074919.46352-1-songmuchun@bytedance.com>
References: <20220530074919.46352-1-songmuchun@bytedance.com>

We will reuse the obj_cgroup APIs to charge the LRU pages. After that, page->memcg_data has two different meanings:

- For slab pages, page->memcg_data points to an object cgroups vector.
- For kmem pages (excluding slab pages) and LRU pages, page->memcg_data points to an object cgroup.

In this patch we reuse the obj_cgroup APIs to charge LRU pages, so in the end long-living page cache pages can no longer pin the original memory cgroup in memory. At the same time, the rules for the stability of the page-to-objcg and page-to-memcg bindings change. The new rules are as follows.

For a page, any of the following ensures that the page and objcg binding is stable:

- the page lock
- LRU isolation
- lock_page_memcg()
- an exclusive reference

Based on the stable binding of page and objcg, for a page any of the following ensures that the page and memcg binding is stable:

- objcg_lock
- cgroup_mutex
- the lruvec lock
- the split queue lock (THP pages only)

A caller that only needs the page counters of the memcg to be updated correctly only needs the binding stability of page and objcg.
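The central change is the extra level of indirection when mapping a folio to its memcg; condensed and annotated from the diff below (a restatement, not additional code):

        /*
         * folio->memcg_data now stores an obj_cgroup pointer for kmem and LRU
         * folios alike, and the memcg is always reached through that objcg.
         */
        static inline struct mem_cgroup *folio_memcg(struct folio *folio)
        {
                struct obj_cgroup *objcg = folio_objcg(folio);

                /*
                 * The folio/objcg binding is stable under the page lock, LRU
                 * isolation, lock_page_memcg() or an exclusive reference;
                 * objcg->memcg itself can be switched to the parent memcg when
                 * the folio is reparented, hence the extra rules above.
                 */
                return objcg ? obj_cgroup_memcg(objcg) : NULL;
        }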
Signed-off-by: Muchun Song Reviewed-by: Michal Koutný Acked-by: Roman Gushchin --- include/linux/memcontrol.h | 89 +++++--------- mm/huge_memory.c | 35 ++++++ mm/memcontrol.c | 289 ++++++++++++++++++++++++++++++++------------- 3 files changed, 276 insertions(+), 137 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index c2ac98a0ece4..e3a4354e20da 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -386,8 +386,6 @@ enum page_memcg_data_flags { #define MEMCG_DATA_FLAGS_MASK (__NR_MEMCG_DATA_FLAGS - 1) -static inline bool folio_memcg_kmem(struct folio *folio); - /* * After the initialization objcg->memcg is always pointing at * a valid memcg, but can be atomically swapped to the parent memcg. @@ -401,43 +399,19 @@ static inline struct mem_cgroup *obj_cgroup_memcg(struct obj_cgroup *objcg) } /* - * __folio_memcg - Get the memory cgroup associated with a non-kmem folio - * @folio: Pointer to the folio. - * - * Returns a pointer to the memory cgroup associated with the folio, - * or NULL. This function assumes that the folio is known to have a - * proper memory cgroup pointer. It's not safe to call this function - * against some type of folios, e.g. slab folios or ex-slab folios or - * kmem folios. - */ -static inline struct mem_cgroup *__folio_memcg(struct folio *folio) -{ - unsigned long memcg_data = folio->memcg_data; - - VM_BUG_ON_FOLIO(folio_test_slab(folio), folio); - VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJCGS, folio); - VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_KMEM, folio); - - return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); -} - -/* - * __folio_objcg - get the object cgroup associated with a kmem folio. + * folio_objcg - get the object cgroup associated with a folio. * @folio: Pointer to the folio. * * Returns a pointer to the object cgroup associated with the folio, * or NULL. This function assumes that the folio is known to have a - * proper object cgroup pointer. It's not safe to call this function - * against some type of folios, e.g. slab folios or ex-slab folios or - * LRU folios. + * proper object cgroup pointer. */ -static inline struct obj_cgroup *__folio_objcg(struct folio *folio) +static inline struct obj_cgroup *folio_objcg(struct folio *folio) { unsigned long memcg_data = folio->memcg_data; VM_BUG_ON_FOLIO(folio_test_slab(folio), folio); VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJCGS, folio); - VM_BUG_ON_FOLIO(!(memcg_data & MEMCG_DATA_KMEM), folio); return (struct obj_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); } @@ -451,22 +425,33 @@ static inline struct obj_cgroup *__folio_objcg(struct folio *folio) * proper memory cgroup pointer. It's not safe to call this function * against some type of folios, e.g. slab folios or ex-slab folios. * - * For a non-kmem folio any of the following ensures folio and memcg binding - * stability: + * For a folio any of the following ensures folio and objcg binding stability: * * - the folio lock * - LRU isolation * - lock_page_memcg() * - exclusive reference * + * Based on the stable binding of folio and objcg, for a folio any of the + * following ensures folio and memcg binding stability: + * + * - objcg_lock + * - cgroup_mutex + * - the lruvec lock + * - the split queue lock (only THP page) + * + * If the caller only want to ensure that the page counters of memcg are + * updated correctly, ensure that the binding stability of folio and objcg + * is sufficient. 
+ * * Note: The caller should hold an rcu read lock to protect memcg associated * with a folio from being released. */ static inline struct mem_cgroup *folio_memcg(struct folio *folio) { - if (folio_memcg_kmem(folio)) - return obj_cgroup_memcg(__folio_objcg(folio)); - return __folio_memcg(folio); + struct obj_cgroup *objcg = folio_objcg(folio); + + return objcg ? obj_cgroup_memcg(objcg) : NULL; } /* @@ -490,6 +475,8 @@ static inline struct mem_cgroup *page_memcg(struct page *page) * folio is known to have a proper memory cgroup pointer. It's not safe * to call this function against some type of pages, e.g. slab pages or * ex-slab pages. + * + * The page and objcg or memcg binding rules can refer to folio_memcg(). */ static inline struct mem_cgroup *get_mem_cgroup_from_folio(struct folio *folio) { @@ -520,22 +507,20 @@ static inline struct mem_cgroup *get_mem_cgroup_from_page(struct page *page) * * Return: A pointer to the memory cgroup associated with the folio, * or NULL. + * + * The folio and objcg or memcg binding rules can refer to folio_memcg(). */ static inline struct mem_cgroup *folio_memcg_rcu(struct folio *folio) { unsigned long memcg_data = READ_ONCE(folio->memcg_data); + struct obj_cgroup *objcg; VM_BUG_ON_FOLIO(folio_test_slab(folio), folio); WARN_ON_ONCE(!rcu_read_lock_held()); - if (memcg_data & MEMCG_DATA_KMEM) { - struct obj_cgroup *objcg; + objcg = (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); - objcg = (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); - return obj_cgroup_memcg(objcg); - } - - return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); + return objcg ? obj_cgroup_memcg(objcg) : NULL; } /* @@ -548,16 +533,10 @@ static inline struct mem_cgroup *folio_memcg_rcu(struct folio *folio) * has an associated memory cgroup pointer or an object cgroups vector or * an object cgroup. * - * For a non-kmem page any of the following ensures page and memcg binding - * stability: + * The page and objcg or memcg binding rules can refer to page_memcg(). * - * - the page lock - * - LRU isolation - * - lock_page_memcg() - * - exclusive reference - * - * For a kmem page a caller should hold an rcu read lock to protect memcg - * associated with a kmem page from being released. + * A caller should hold an rcu read lock to protect memcg associated with a + * page from being released. */ static inline struct mem_cgroup *page_memcg_check(struct page *page) { @@ -566,18 +545,14 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page) * for slab pages, READ_ONCE() should be used here. */ unsigned long memcg_data = READ_ONCE(page->memcg_data); + struct obj_cgroup *objcg; if (memcg_data & MEMCG_DATA_OBJCGS) return NULL; - if (memcg_data & MEMCG_DATA_KMEM) { - struct obj_cgroup *objcg; - - objcg = (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); - return obj_cgroup_memcg(objcg); - } + objcg = (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); - return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); + return objcg ? 
obj_cgroup_memcg(objcg) : NULL; } static inline struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg) diff --git a/mm/huge_memory.c b/mm/huge_memory.c index d3411dc291ab..931d0c2ce062 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -503,6 +503,8 @@ pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma) } #ifdef CONFIG_MEMCG +static struct shrinker deferred_split_shrinker; + static inline struct mem_cgroup *folio_split_queue_memcg(struct folio *folio, struct deferred_split *queue) { @@ -519,6 +521,39 @@ static inline struct deferred_split *folio_memcg_split_queue(struct folio *folio return memcg ? &memcg->deferred_split_queue : NULL; } + +static void thp_sq_reparent_lock(struct mem_cgroup *src, struct mem_cgroup *dst) +{ + spin_lock(&src->deferred_split_queue.split_queue_lock); + spin_lock_nested(&dst->deferred_split_queue.split_queue_lock, + SINGLE_DEPTH_NESTING); +} + +static void thp_sq_reparent_relocate(struct mem_cgroup *src, struct mem_cgroup *dst) +{ + int nid; + struct deferred_split *src_queue, *dst_queue; + + src_queue = &src->deferred_split_queue; + dst_queue = &dst->deferred_split_queue; + + if (!src_queue->split_queue_len) + return; + + list_splice_tail_init(&src_queue->split_queue, &dst_queue->split_queue); + dst_queue->split_queue_len += src_queue->split_queue_len; + src_queue->split_queue_len = 0; + + for_each_node(nid) + set_shrinker_bit(dst, nid, deferred_split_shrinker.id); +} + +static void thp_sq_reparent_unlock(struct mem_cgroup *src, struct mem_cgroup *dst) +{ + spin_unlock(&dst->deferred_split_queue.split_queue_lock); + spin_unlock(&src->deferred_split_queue.split_queue_lock); +} +DEFINE_MEMCG_REPARENT_OPS(thp_sq); #else static inline struct mem_cgroup *folio_split_queue_memcg(struct folio *folio, struct deferred_split *queue) diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 059188eeb80c..f4db3cb2aedc 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -76,6 +76,7 @@ struct cgroup_subsys memory_cgrp_subsys __read_mostly; EXPORT_SYMBOL(memory_cgrp_subsys); struct mem_cgroup *root_mem_cgroup __read_mostly; +static struct obj_cgroup *root_obj_cgroup __read_mostly; /* Active memory cgroup to use from an interrupt context */ DEFINE_PER_CPU(struct mem_cgroup *, int_active_memcg); @@ -256,6 +257,11 @@ struct mem_cgroup *vmpressure_to_memcg(struct vmpressure *vmpr) static DEFINE_SPINLOCK(objcg_lock); +static inline bool obj_cgroup_is_root(struct obj_cgroup *objcg) +{ + return objcg == root_obj_cgroup; +} + #ifdef CONFIG_MEMCG_KMEM bool mem_cgroup_kmem_disabled(void) { @@ -363,8 +369,77 @@ static void objcg_reparent_unlock(struct mem_cgroup *src, struct mem_cgroup *dst static DEFINE_MEMCG_REPARENT_OPS(objcg); +static void lruvec_reparent_lock(struct mem_cgroup *src, struct mem_cgroup *dst) +{ + int nid, nest = 0; + + for_each_node(nid) { + spin_lock_nested(&mem_cgroup_lruvec(src, + NODE_DATA(nid))->lru_lock, nest++); + spin_lock_nested(&mem_cgroup_lruvec(dst, + NODE_DATA(nid))->lru_lock, nest++); + } +} + +static void lruvec_reparent_lru(struct lruvec *src, struct lruvec *dst, + enum lru_list lru) +{ + int zid; + struct mem_cgroup_per_node *mz_src, *mz_dst; + + mz_src = container_of(src, struct mem_cgroup_per_node, lruvec); + mz_dst = container_of(dst, struct mem_cgroup_per_node, lruvec); + + if (lru != LRU_UNEVICTABLE) + list_splice_tail_init(&src->lists[lru], &dst->lists[lru]); + + for (zid = 0; zid < MAX_NR_ZONES; zid++) { + mz_dst->lru_zone_size[zid][lru] += mz_src->lru_zone_size[zid][lru]; + mz_src->lru_zone_size[zid][lru] = 0; + } 
+} + +static void lruvec_reparent_relocate(struct mem_cgroup *src, struct mem_cgroup *dst) +{ + int nid; + + for_each_node(nid) { + enum lru_list lru; + struct lruvec *src_lruvec, *dst_lruvec; + + src_lruvec = mem_cgroup_lruvec(src, NODE_DATA(nid)); + dst_lruvec = mem_cgroup_lruvec(dst, NODE_DATA(nid)); + + dst_lruvec->anon_cost += src_lruvec->anon_cost; + dst_lruvec->file_cost += src_lruvec->file_cost; + + for_each_lru(lru) + lruvec_reparent_lru(src_lruvec, dst_lruvec, lru); + } +} + +static void lruvec_reparent_unlock(struct mem_cgroup *src, struct mem_cgroup *dst) +{ + int nid; + + for_each_node(nid) { + spin_unlock(&mem_cgroup_lruvec(dst, NODE_DATA(nid))->lru_lock); + spin_unlock(&mem_cgroup_lruvec(src, NODE_DATA(nid))->lru_lock); + } +} + +static DEFINE_MEMCG_REPARENT_OPS(lruvec); + +#ifdef CONFIG_TRANSPARENT_HUGEPAGE +DECLARE_MEMCG_REPARENT_OPS(thp_sq); +#endif + static const struct memcg_reparent_ops *memcg_reparent_ops[] = { &memcg_objcg_reparent_ops, + &memcg_lruvec_reparent_ops, +#ifdef CONFIG_TRANSPARENT_HUGEPAGE + &memcg_thp_sq_reparent_ops, +#endif }; #define DEFINE_MEMCG_REPARENT_FUNC(phase) \ @@ -2818,18 +2893,33 @@ static inline void cancel_charge(struct mem_cgroup *memcg, unsigned int nr_pages page_counter_uncharge(&memcg->memsw, nr_pages); } -static void commit_charge(struct folio *folio, struct mem_cgroup *memcg) +static void commit_charge(struct folio *folio, struct obj_cgroup *objcg) { - VM_BUG_ON_FOLIO(folio_memcg(folio), folio); + VM_BUG_ON_FOLIO(folio_objcg(folio), folio); /* - * Any of the following ensures page's memcg stability: + * Any of the following ensures page's objcg stability: * * - the page lock * - LRU isolation * - lock_page_memcg() * - exclusive reference */ - folio->memcg_data = (unsigned long)memcg; + folio->memcg_data = (unsigned long)objcg; +} + +static struct obj_cgroup *__get_obj_cgroup_from_memcg(struct mem_cgroup *memcg) +{ + struct obj_cgroup *objcg = NULL; + + rcu_read_lock(); + for (; memcg; memcg = parent_mem_cgroup(memcg)) { + objcg = rcu_dereference(memcg->objcg); + if (objcg && obj_cgroup_tryget(objcg)) + break; + } + rcu_read_unlock(); + + return objcg; } #ifdef CONFIG_MEMCG_KMEM @@ -2960,12 +3050,15 @@ __always_inline struct obj_cgroup *get_obj_cgroup_from_current(void) else memcg = mem_cgroup_from_task(current); - for (; memcg != root_mem_cgroup; memcg = parent_mem_cgroup(memcg)) { - objcg = rcu_dereference(memcg->objcg); - if (objcg && obj_cgroup_tryget(objcg)) - break; + if (mem_cgroup_is_root(memcg)) + goto out; + + objcg = __get_obj_cgroup_from_memcg(memcg); + if (obj_cgroup_is_root(objcg)) { + obj_cgroup_put(objcg); objcg = NULL; } +out: rcu_read_unlock(); return objcg; @@ -3062,13 +3155,13 @@ int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order) void __memcg_kmem_uncharge_page(struct page *page, int order) { struct folio *folio = page_folio(page); - struct obj_cgroup *objcg; + struct obj_cgroup *objcg = folio_objcg(folio); unsigned int nr_pages = 1 << order; - if (!folio_memcg_kmem(folio)) + if (!objcg) return; - objcg = __folio_objcg(folio); + VM_BUG_ON_FOLIO(!folio_memcg_kmem(folio), folio); obj_cgroup_uncharge_pages(objcg, nr_pages); folio->memcg_data = 0; obj_cgroup_put(objcg); @@ -3322,26 +3415,21 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size) #endif /* CONFIG_MEMCG_KMEM */ /* - * Because page_memcg(head) is not set on tails, set it now. + * Because page_objcg(head) is not set on tails, set it now. 
*/ void split_page_memcg(struct page *head, unsigned int nr) { struct folio *folio = page_folio(head); - struct mem_cgroup *memcg = get_mem_cgroup_from_folio(folio); + struct obj_cgroup *objcg = folio_objcg(folio); int i; - if (mem_cgroup_disabled() || !memcg) + if (mem_cgroup_disabled() || !objcg) return; for (i = 1; i < nr; i++) folio_page(folio, i)->memcg_data = folio->memcg_data; - if (folio_memcg_kmem(folio)) - obj_cgroup_get_many(__folio_objcg(folio), nr - 1); - else - css_get_many(&memcg->css, nr - 1); - - css_put(&memcg->css); + obj_cgroup_get_many(objcg, nr - 1); } #ifdef CONFIG_MEMCG_SWAP @@ -5238,6 +5326,9 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css) objcg->memcg = memcg; rcu_assign_pointer(memcg->objcg, objcg); + if (unlikely(mem_cgroup_is_root(memcg))) + root_obj_cgroup = objcg; + /* Online state pins memcg ID, memcg ID pins CSS */ refcount_set(&memcg->id.ref, 1); css_get(css); @@ -5642,10 +5733,12 @@ static int mem_cgroup_move_account(struct page *page, */ smp_mb(); - css_get(&to->css); - css_put(&from->css); + rcu_read_lock(); + obj_cgroup_get(rcu_dereference(to->objcg)); + obj_cgroup_put(rcu_dereference(from->objcg)); + rcu_read_unlock(); - folio->memcg_data = (unsigned long)to; + folio->memcg_data = (unsigned long)rcu_access_pointer(to->objcg); __folio_memcg_unlock(from); @@ -6118,6 +6211,42 @@ static void mem_cgroup_move_charge(void) mmap_read_unlock(mc.mm); atomic_dec(&mc.from->moving_account); + + /* + * Moving its pages to another memcg is finished. Wait for already + * started RCU-only updates to finish to make sure that the caller + * of lock_page_memcg() can unlock the correct move_lock. The + * possible bad scenario would like: + * + * CPU0: CPU1: + * mem_cgroup_move_charge() + * walk_page_range() + * + * lock_page_memcg(page) + * memcg = folio_memcg() + * spin_lock_irqsave(&memcg->move_lock) + * memcg->move_lock_task = current + * + * atomic_dec(&mc.from->moving_account) + * + * mem_cgroup_css_offline() + * memcg_offline_kmem() + * memcg_reparent_objcgs() <== reparented + * + * unlock_page_memcg(page) + * memcg = folio_memcg() <== memcg has been changed + * if (memcg->move_lock_task == current) <== false + * spin_unlock_irqrestore(&memcg->move_lock) + * + * Once mem_cgroup_move_charge() returns (it means that the cgroup_mutex + * would be released soon), the page can be reparented to its parent + * memcg. When the unlock_page_memcg() is called for the page, we will + * miss unlock the move_lock. So using synchronize_rcu to wait for + * already started RCU-only updates to finish before this function + * returns (mem_cgroup_move_charge() and mem_cgroup_css_offline() are + * serialized by cgroup_mutex). + */ + synchronize_rcu(); } /* @@ -6673,21 +6802,26 @@ void mem_cgroup_calculate_protection(struct mem_cgroup *root, static int charge_memcg(struct folio *folio, struct mem_cgroup *memcg, gfp_t gfp) { + struct obj_cgroup *objcg; long nr_pages = folio_nr_pages(folio); - int ret; + int ret = 0; - ret = try_charge(memcg, gfp, nr_pages); + objcg = __get_obj_cgroup_from_memcg(memcg); + /* Do not account at the root objcg level. 
*/ + if (!obj_cgroup_is_root(objcg)) + ret = try_charge(memcg, gfp, nr_pages); if (ret) goto out; - css_get(&memcg->css); - commit_charge(folio, memcg); + obj_cgroup_get(objcg); + commit_charge(folio, objcg); local_irq_disable(); mem_cgroup_charge_statistics(memcg, nr_pages); memcg_check_events(memcg, folio_nid(folio)); local_irq_enable(); out: + obj_cgroup_put(objcg); return ret; } @@ -6773,7 +6907,7 @@ void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry) } struct uncharge_gather { - struct mem_cgroup *memcg; + struct obj_cgroup *objcg; unsigned long nr_memory; unsigned long pgpgout; unsigned long nr_kmem; @@ -6788,63 +6922,56 @@ static inline void uncharge_gather_clear(struct uncharge_gather *ug) static void uncharge_batch(const struct uncharge_gather *ug) { unsigned long flags; + struct mem_cgroup *memcg; + rcu_read_lock(); + memcg = obj_cgroup_memcg(ug->objcg); if (ug->nr_memory) { - page_counter_uncharge(&ug->memcg->memory, ug->nr_memory); + page_counter_uncharge(&memcg->memory, ug->nr_memory); if (do_memsw_account()) - page_counter_uncharge(&ug->memcg->memsw, ug->nr_memory); + page_counter_uncharge(&memcg->memsw, ug->nr_memory); if (ug->nr_kmem) - memcg_account_kmem(ug->memcg, -ug->nr_kmem); - memcg_oom_recover(ug->memcg); + memcg_account_kmem(memcg, -ug->nr_kmem); + memcg_oom_recover(memcg); } local_irq_save(flags); - __count_memcg_events(ug->memcg, PGPGOUT, ug->pgpgout); - __this_cpu_add(ug->memcg->vmstats_percpu->nr_page_events, ug->nr_memory); - memcg_check_events(ug->memcg, ug->nid); + __count_memcg_events(memcg, PGPGOUT, ug->pgpgout); + __this_cpu_add(memcg->vmstats_percpu->nr_page_events, ug->nr_memory); + memcg_check_events(memcg, ug->nid); local_irq_restore(flags); + rcu_read_unlock(); /* drop reference from uncharge_folio */ - css_put(&ug->memcg->css); + obj_cgroup_put(ug->objcg); } static void uncharge_folio(struct folio *folio, struct uncharge_gather *ug) { long nr_pages; - struct mem_cgroup *memcg; struct obj_cgroup *objcg; VM_BUG_ON_FOLIO(folio_test_lru(folio), folio); /* * Nobody should be changing or seriously looking at - * folio memcg or objcg at this point, we have fully - * exclusive access to the folio. + * folio objcg at this point, we have fully exclusive + * access to the folio. */ - if (folio_memcg_kmem(folio)) { - objcg = __folio_objcg(folio); - /* - * This get matches the put at the end of the function and - * kmem pages do not hold memcg references anymore. 
- */ - memcg = get_mem_cgroup_from_objcg(objcg); - } else { - memcg = __folio_memcg(folio); - } - - if (!memcg) + objcg = folio_objcg(folio); + if (!objcg) return; - if (ug->memcg != memcg) { - if (ug->memcg) { + if (ug->objcg != objcg) { + if (ug->objcg) { uncharge_batch(ug); uncharge_gather_clear(ug); } - ug->memcg = memcg; + ug->objcg = objcg; ug->nid = folio_nid(folio); - /* pairs with css_put in uncharge_batch */ - css_get(&memcg->css); + /* pairs with obj_cgroup_put in uncharge_batch */ + obj_cgroup_get(objcg); } nr_pages = folio_nr_pages(folio); @@ -6852,19 +6979,15 @@ static void uncharge_folio(struct folio *folio, struct uncharge_gather *ug) if (folio_memcg_kmem(folio)) { ug->nr_memory += nr_pages; ug->nr_kmem += nr_pages; - - folio->memcg_data = 0; - obj_cgroup_put(objcg); } else { /* LRU pages aren't accounted at the root level */ - if (!mem_cgroup_is_root(memcg)) + if (!obj_cgroup_is_root(objcg)) ug->nr_memory += nr_pages; ug->pgpgout++; - - folio->memcg_data = 0; } - css_put(&memcg->css); + folio->memcg_data = 0; + obj_cgroup_put(objcg); } void __mem_cgroup_uncharge(struct folio *folio) @@ -6872,7 +6995,7 @@ void __mem_cgroup_uncharge(struct folio *folio) struct uncharge_gather ug; /* Don't touch folio->lru of any random page, pre-check: */ - if (!folio_memcg(folio)) + if (!folio_objcg(folio)) return; uncharge_gather_clear(&ug); @@ -6895,7 +7018,7 @@ void __mem_cgroup_uncharge_list(struct list_head *page_list) uncharge_gather_clear(&ug); list_for_each_entry(folio, page_list, lru) uncharge_folio(folio, &ug); - if (ug.memcg) + if (ug.objcg) uncharge_batch(&ug); } @@ -6912,6 +7035,7 @@ void __mem_cgroup_uncharge_list(struct list_head *page_list) void mem_cgroup_migrate(struct folio *old, struct folio *new) { struct mem_cgroup *memcg; + struct obj_cgroup *objcg; long nr_pages = folio_nr_pages(new); unsigned long flags; @@ -6924,30 +7048,33 @@ void mem_cgroup_migrate(struct folio *old, struct folio *new) return; /* Page cache replacement: new folio already charged? */ - if (folio_memcg(new)) + if (folio_objcg(new)) return; - memcg = get_mem_cgroup_from_folio(old); - VM_WARN_ON_ONCE_FOLIO(!memcg, old); - if (!memcg) + objcg = folio_objcg(old); + VM_WARN_ON_ONCE_FOLIO(!objcg, old); + if (!objcg) return; + rcu_read_lock(); + memcg = obj_cgroup_memcg(objcg); + /* Force-charge the new page. The old one will be freed soon */ - if (!mem_cgroup_is_root(memcg)) { + if (!obj_cgroup_is_root(objcg)) { page_counter_charge(&memcg->memory, nr_pages); if (do_memsw_account()) page_counter_charge(&memcg->memsw, nr_pages); } - css_get(&memcg->css); - commit_charge(new, memcg); + obj_cgroup_get(objcg); + commit_charge(new, objcg); local_irq_save(flags); mem_cgroup_charge_statistics(memcg, nr_pages); memcg_check_events(memcg, folio_nid(new)); local_irq_restore(flags); - css_put(&memcg->css); + rcu_read_unlock(); } DEFINE_STATIC_KEY_FALSE(memcg_sockets_enabled_key); @@ -7120,6 +7247,7 @@ static struct mem_cgroup *mem_cgroup_id_get_online(struct mem_cgroup *memcg) void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry) { struct mem_cgroup *memcg, *swap_memcg; + struct obj_cgroup *objcg; unsigned int nr_entries; unsigned short oldid; @@ -7132,15 +7260,16 @@ void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry) if (cgroup_subsys_on_dfl(memory_cgrp_subsys)) return; + objcg = folio_objcg(folio); + VM_WARN_ON_ONCE_FOLIO(!objcg, folio); + if (!objcg) + return; + /* * Interrupts should be disabled by the caller (see the comments below), * which can serve as RCU read-side critical sections. 
 */ - memcg = folio_memcg(folio); - - VM_WARN_ON_ONCE_FOLIO(!memcg, folio); - if (!memcg) - return; + memcg = obj_cgroup_memcg(objcg); /* * In case the memcg owning these pages has been offlined and doesn't @@ -7159,7 +7288,7 @@ void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry) folio->memcg_data = 0; - if (!mem_cgroup_is_root(memcg)) + if (!obj_cgroup_is_root(objcg)) page_counter_uncharge(&memcg->memory, nr_entries); if (!cgroup_memory_noswap && memcg != swap_memcg) { @@ -7179,7 +7308,7 @@ void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry) memcg_stats_unlock(); memcg_check_events(memcg, folio_nid(folio)); - css_put(&memcg->css); + obj_cgroup_put(objcg); } /**

From patchwork Mon May 30 07:49:18 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12864360
From: Muchun Song
To: hannes@cmpxchg.org, mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com, akpm@linux-foundation.org
Cc: cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, duanxiongchun@bytedance.com, longman@redhat.com, Muchun Song
Subject: [PATCH v5 10/11] mm: lru: add VM_BUG_ON_FOLIO to lru maintenance function
Date: Mon, 30 May 2022 15:49:18 +0800
Message-Id: <20220530074919.46352-11-songmuchun@bytedance.com>
In-Reply-To: <20220530074919.46352-1-songmuchun@bytedance.com>
References: <20220530074919.46352-1-songmuchun@bytedance.com>

We need to make sure that the page is deleted from or added to the correct lruvec list. So add a VM_WARN_ON_ONCE_FOLIO() to catch invalid users. Then the VM_BUG_ON_PAGE() in move_pages_to_lru() could be removed since add_page_to_lru_list() will check that.
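As a sketch of the calling pattern the new assertion expects (illustrative only: the sketch_* wrapper below is hypothetical, while folio_lruvec_relock_irq(), lruvec_unlock_irq(), lruvec_add_folio() and folio_matches_lruvec() are helpers visible elsewhere in this series):

static void sketch_add_folio_locked(struct folio *folio, struct lruvec *locked)
{
	/*
	 * Relock against the lruvec the folio belongs to right now; the
	 * helper drops the previously held lock if it is the wrong one.
	 */
	locked = folio_lruvec_relock_irq(folio, locked);

	/*
	 * folio_matches_lruvec(folio, locked) now holds, so the
	 * VM_WARN_ON_ONCE_FOLIO() added by the diff below stays silent;
	 * passing a stale lruvec would warn instead of corrupting lists.
	 */
	lruvec_add_folio(locked, folio);

	lruvec_unlock_irq(locked);
}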
Signed-off-by: Muchun Song Acked-by: Roman Gushchin --- include/linux/mm_inline.h | 6 ++++++ mm/vmscan.c | 1 - 2 files changed, 6 insertions(+), 1 deletion(-) diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h index ac32125745ab..e13e56c7fdbd 100644 --- a/include/linux/mm_inline.h +++ b/include/linux/mm_inline.h @@ -97,6 +97,8 @@ void lruvec_add_folio(struct lruvec *lruvec, struct folio *folio) { enum lru_list lru = folio_lru_list(folio); + VM_WARN_ON_ONCE_FOLIO(!folio_matches_lruvec(folio, lruvec), folio); + update_lru_size(lruvec, lru, folio_zonenum(folio), folio_nr_pages(folio)); if (lru != LRU_UNEVICTABLE) @@ -114,6 +116,8 @@ void lruvec_add_folio_tail(struct lruvec *lruvec, struct folio *folio) { enum lru_list lru = folio_lru_list(folio); + VM_WARN_ON_ONCE_FOLIO(!folio_matches_lruvec(folio, lruvec), folio); + update_lru_size(lruvec, lru, folio_zonenum(folio), folio_nr_pages(folio)); /* This is not expected to be used on LRU_UNEVICTABLE */ @@ -131,6 +135,8 @@ void lruvec_del_folio(struct lruvec *lruvec, struct folio *folio) { enum lru_list lru = folio_lru_list(folio); + VM_WARN_ON_ONCE_FOLIO(!folio_matches_lruvec(folio, lruvec), folio); + if (lru != LRU_UNEVICTABLE) list_del(&folio->lru); update_lru_size(lruvec, lru, folio_zonenum(folio), diff --git a/mm/vmscan.c b/mm/vmscan.c index 67f1462b150d..51853d6df7b4 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -2277,7 +2277,6 @@ static unsigned int move_pages_to_lru(struct list_head *list) continue; } - VM_BUG_ON_PAGE(!folio_matches_lruvec(folio, lruvec), page); add_page_to_lru_list(page, lruvec); nr_pages = thp_nr_pages(page); nr_moved += nr_pages;

From patchwork Mon May 30 07:49:19 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12864361
From: Muchun Song
To: hannes@cmpxchg.org, mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com, akpm@linux-foundation.org
Cc: cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, duanxiongchun@bytedance.com, longman@redhat.com, Muchun Song
Subject: [PATCH v5 11/11] mm: lru: use lruvec lock to serialize memcg changes
Date: Mon, 30 May 2022 15:49:19 +0800
Message-Id: <20220530074919.46352-12-songmuchun@bytedance.com>
In-Reply-To: <20220530074919.46352-1-songmuchun@bytedance.com>
References: <20220530074919.46352-1-songmuchun@bytedance.com>

As described by commit fc574c23558c ("mm/swap.c: serialize memcg changes in pagevec_lru_move_fn"), TestClearPageLRU() aims to serialize mem_cgroup_move_account() during pagevec_lru_move_fn(). Now that folio_lruvec_lock*() can detect whether the page's memcg has changed, we can rely on the lruvec lock to serialize mem_cgroup_move_account() during pagevec_lru_move_fn() instead. This is a partial revert of commit fc574c23558c. Since pagevec_lru_move_fn() is much hotter than mem_cgroup_move_account(), removing an atomic operation from it is a worthwhile optimization (see the condensed sketch just below).
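Condensed sketch of that pattern: the folio_lruvec_*/lruvec_* names are the series' own helpers, the sketch_* wrappers are hypothetical, and RCU bookkeeping is elided.

/* Lock-and-recheck: retry if the folio's memcg changed under us. */
static struct lruvec *sketch_folio_lruvec_lock(struct folio *folio)
{
	struct lruvec *lruvec;

retry:
	lruvec = folio_lruvec(folio);
	spin_lock(&lruvec->lru_lock);
	if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
		/* Raced with mem_cgroup_move_account() or reparenting. */
		spin_unlock(&lruvec->lru_lock);
		goto retry;
	}
	/* folio_memcg() is now stable until the lock is dropped. */
	return lruvec;
}

/* A move function then only needs the lock plus a folio_test_lru() check. */
static void sketch_move_folio_to_tail(struct folio *folio, struct lruvec *lruvec)
{
	if (folio_test_lru(folio) && !folio_test_unevictable(folio)) {
		lruvec_del_folio(lruvec, folio);
		lruvec_add_folio_tail(lruvec, folio);
	}
}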
Also this change would not dirty cacheline for a page which isn't on the LRU. Signed-off-by: Muchun Song --- mm/memcontrol.c | 34 ++++++++++++++++++++++++++++++++++ mm/swap.c | 45 ++++++++++++++------------------------------- mm/vmscan.c | 9 ++++----- 3 files changed, 52 insertions(+), 36 deletions(-) diff --git a/mm/memcontrol.c b/mm/memcontrol.c index f4db3cb2aedc..3a0f3838f02d 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -1333,10 +1333,39 @@ struct lruvec *folio_lruvec_lock(struct folio *folio) lruvec = folio_lruvec(folio); spin_lock(&lruvec->lru_lock); + /* + * The memcg of the page can be changed by any the following routines: + * + * 1) mem_cgroup_move_account() or + * 2) memcg_reparent_objcgs() + * + * The possible bad scenario would like: + * + * CPU0: CPU1: CPU2: + * lruvec = folio_lruvec() + * + * if (!isolate_lru_page()) + * mem_cgroup_move_account() + * + * memcg_reparent_objcgs() + * + * spin_lock(&lruvec->lru_lock) + * ^^^^^^ + * wrong lock + * + * Either CPU1 or CPU2 can change page memcg, so we need to check + * whether page memcg is changed, if so, we should reacquire the + * new lruvec lock. + */ if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) { spin_unlock(&lruvec->lru_lock); goto retry; } + + /* + * When we reach here, it means that the folio_memcg(folio) is + * stable. + */ rcu_read_unlock(); return lruvec; @@ -1364,6 +1393,7 @@ struct lruvec *folio_lruvec_lock_irq(struct folio *folio) lruvec = folio_lruvec(folio); spin_lock_irq(&lruvec->lru_lock); + /* See the comments in folio_lruvec_lock(). */ if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) { spin_unlock_irq(&lruvec->lru_lock); goto retry; @@ -1397,6 +1427,7 @@ struct lruvec *folio_lruvec_lock_irqsave(struct folio *folio, lruvec = folio_lruvec(folio); spin_lock_irqsave(&lruvec->lru_lock, *flags); + /* See the comments in folio_lruvec_lock(). */ if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) { spin_unlock_irqrestore(&lruvec->lru_lock, *flags); goto retry; @@ -5738,7 +5769,10 @@ static int mem_cgroup_move_account(struct page *page, obj_cgroup_put(rcu_dereference(from->objcg)); rcu_read_unlock(); + /* See the comments in folio_lruvec_lock(). 
*/ + spin_lock(&from_vec->lru_lock); folio->memcg_data = (unsigned long)rcu_access_pointer(to->objcg); + spin_unlock(&from_vec->lru_lock); __folio_memcg_unlock(from); diff --git a/mm/swap.c b/mm/swap.c index 6cea469b6ff2..1b893c157bd1 100644 --- a/mm/swap.c +++ b/mm/swap.c @@ -199,14 +199,8 @@ static void pagevec_lru_move_fn(struct pagevec *pvec, struct page *page = pvec->pages[i]; struct folio *folio = page_folio(page); - /* block memcg migration during page moving between lru */ - if (!TestClearPageLRU(page)) - continue; - lruvec = folio_lruvec_relock_irqsave(folio, lruvec, &flags); (*move_fn)(page, lruvec); - - SetPageLRU(page); } if (lruvec) lruvec_unlock_irqrestore(lruvec, flags); @@ -218,7 +212,7 @@ static void pagevec_move_tail_fn(struct page *page, struct lruvec *lruvec) { struct folio *folio = page_folio(page); - if (!folio_test_unevictable(folio)) { + if (folio_test_lru(folio) && !folio_test_unevictable(folio)) { lruvec_del_folio(lruvec, folio); folio_clear_active(folio); lruvec_add_folio_tail(lruvec, folio); @@ -314,7 +308,8 @@ void lru_note_cost_folio(struct folio *folio) static void __folio_activate(struct folio *folio, struct lruvec *lruvec) { - if (!folio_test_active(folio) && !folio_test_unevictable(folio)) { + if (folio_test_lru(folio) && !folio_test_active(folio) && + !folio_test_unevictable(folio)) { long nr_pages = folio_nr_pages(folio); lruvec_del_folio(lruvec, folio); @@ -371,12 +366,9 @@ static void folio_activate(struct folio *folio) { struct lruvec *lruvec; - if (folio_test_clear_lru(folio)) { - lruvec = folio_lruvec_lock_irq(folio); - __folio_activate(folio, lruvec); - lruvec_unlock_irq(lruvec); - folio_set_lru(folio); - } + lruvec = folio_lruvec_lock_irq(folio); + __folio_activate(folio, lruvec); + lruvec_unlock_irq(lruvec); } #endif @@ -519,6 +511,9 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec) bool active = PageActive(page); int nr_pages = thp_nr_pages(page); + if (!PageLRU(page)) + return; + if (PageUnevictable(page)) return; @@ -556,7 +551,7 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec) static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec) { - if (PageActive(page) && !PageUnevictable(page)) { + if (PageLRU(page) && PageActive(page) && !PageUnevictable(page)) { int nr_pages = thp_nr_pages(page); del_page_from_lru_list(page, lruvec); @@ -572,7 +567,7 @@ static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec) static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec) { - if (PageAnon(page) && PageSwapBacked(page) && + if (PageLRU(page) && PageAnon(page) && PageSwapBacked(page) && !PageSwapCache(page) && !PageUnevictable(page)) { int nr_pages = thp_nr_pages(page); @@ -1007,8 +1002,9 @@ void __pagevec_release(struct pagevec *pvec) } EXPORT_SYMBOL(__pagevec_release); -static void __pagevec_lru_add_fn(struct folio *folio, struct lruvec *lruvec) +static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec) { + struct folio *folio = page_folio(page); int was_unevictable = folio_test_clear_unevictable(folio); long nr_pages = folio_nr_pages(folio); @@ -1054,20 +1050,7 @@ static void __pagevec_lru_add_fn(struct folio *folio, struct lruvec *lruvec) */ void __pagevec_lru_add(struct pagevec *pvec) { - int i; - struct lruvec *lruvec = NULL; - unsigned long flags = 0; - - for (i = 0; i < pagevec_count(pvec); i++) { - struct folio *folio = page_folio(pvec->pages[i]); - - lruvec = folio_lruvec_relock_irqsave(folio, lruvec, &flags); - 
__pagevec_lru_add_fn(folio, lruvec); - } - if (lruvec) - lruvec_unlock_irqrestore(lruvec, flags); - release_pages(pvec->pages, pvec->nr); - pagevec_reinit(pvec); + pagevec_lru_move_fn(pvec, __pagevec_lru_add_fn); } /** diff --git a/mm/vmscan.c b/mm/vmscan.c index 51853d6df7b4..c591d071a598 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -4789,18 +4789,17 @@ void check_move_unevictable_pages(struct pagevec *pvec) nr_pages = thp_nr_pages(page); pgscanned += nr_pages; - /* block memcg migration during page moving between lru */ - if (!TestClearPageLRU(page)) + lruvec = folio_lruvec_relock_irq(folio, lruvec); + + if (!PageLRU(page) || !PageUnevictable(page)) continue; - lruvec = folio_lruvec_relock_irq(folio, lruvec); - if (page_evictable(page) && PageUnevictable(page)) { + if (page_evictable(page)) { del_page_from_lru_list(page, lruvec); ClearPageUnevictable(page); add_page_to_lru_list(page, lruvec); pgrescued += nr_pages; } - SetPageLRU(page); } if (lruvec) {