From patchwork Wed Aug 14 09:34:15 2024
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 13763166
From: Muchun Song <songmuchun@bytedance.com>
To: hannes@cmpxchg.org, mhocko@kernel.org, roman.gushchin@linux.dev, shakeel.butt@linux.dev, muchun.song@linux.dev, akpm@linux-foundation.org, vbabka@kernel.org
Cc: cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Muchun Song, Vlastimil Babka
Subject: [PATCH RESEND v3] mm: kmem: add lockdep assertion to obj_cgroup_memcg
Date: Wed, 14 Aug 2024 17:34:15 +0800
Message-Id: <20240814093415.17634-1-songmuchun@bytedance.com>
The returned memory cgroup of obj_cgroup_memcg() is only guaranteed not
to be freed while the caller is holding the rcu read lock, objcg_lock or
cgroup_mutex. It is very easy to ignore those conditions when users call
some upper-level APIs which call obj_cgroup_memcg() internally, like
mem_cgroup_from_slab_obj() (see the link below).
So it is better to add a lockdep assertion to obj_cgroup_memcg() to find
those issues ASAP. Because there is no user of obj_cgroup_memcg() holding
objcg_lock to make the returned memory cgroup safe, do not add an
objcg_lock assertion (we would have to export objcg_lock if we really
wanted to). Additionally, objcg_lock is an internal implementation detail
of memcg and should not be accessible outside memcg code.

Some users like __mem_cgroup_uncharge() do not care about the lifetime of
the returned memory cgroup; they just want to know whether the folio is
charged to a memory cgroup, and therefore do not need to hold the locks
above. For such cases, introduce a new helper folio_memcg_charged() to do
this. Compared to folio_memcg(), it eliminates a memory access of
objcg->memcg for kmem folios; actually, a really small gain.

Link: https://lore.kernel.org/all/20240718083607.42068-1-songmuchun@bytedance.com/
Signed-off-by: Muchun Song
Acked-by: Shakeel Butt
Acked-by: Roman Gushchin
Acked-by: Vlastimil Babka
---
Resend because commit 5161b48712dcd08 is not merged first.

v3:
 - Use lockdep_assert_once (Vlastimil).
v2:
 - Remove mention of objcg_lock in obj_cgroup_memcg() (Shakeel Butt).

 include/linux/memcontrol.h | 20 +++++++++++++++++---
 mm/memcontrol.c            |  6 +++---
 2 files changed, 20 insertions(+), 6 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index fc94879db4dff..95f823deafeca 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -360,11 +360,11 @@ static inline bool folio_memcg_kmem(struct folio *folio);
  * After the initialization objcg->memcg is always pointing at
  * a valid memcg, but can be atomically swapped to the parent memcg.
  *
- * The caller must ensure that the returned memcg won't be released:
- * e.g. acquire the rcu_read_lock or css_set_lock.
+ * The caller must ensure that the returned memcg won't be released.
  */
 static inline struct mem_cgroup *obj_cgroup_memcg(struct obj_cgroup *objcg)
 {
+	lockdep_assert_once(rcu_read_lock_held() || lockdep_is_held(&cgroup_mutex));
+
 	return READ_ONCE(objcg->memcg);
 }
 
@@ -438,6 +438,19 @@ static inline struct mem_cgroup *folio_memcg(struct folio *folio)
 	return __folio_memcg(folio);
 }
 
+/*
+ * folio_memcg_charged - If a folio is charged to a memory cgroup.
+ * @folio: Pointer to the folio.
+ *
+ * Returns true if folio is charged to a memory cgroup, otherwise returns false.
+ */
+static inline bool folio_memcg_charged(struct folio *folio)
+{
+	if (folio_memcg_kmem(folio))
+		return __folio_objcg(folio) != NULL;
+	return __folio_memcg(folio) != NULL;
+}
+
 /**
  * folio_memcg_rcu - Locklessly get the memory cgroup associated with a folio.
  * @folio: Pointer to the folio.
@@ -454,7 +467,6 @@ static inline struct mem_cgroup *folio_memcg_rcu(struct folio *folio)
 	unsigned long memcg_data = READ_ONCE(folio->memcg_data);
 
 	VM_BUG_ON_FOLIO(folio_test_slab(folio), folio);
-	WARN_ON_ONCE(!rcu_read_lock_held());
 
 	if (memcg_data & MEMCG_DATA_KMEM) {
 		struct obj_cgroup *objcg;
@@ -463,6 +475,8 @@ static inline struct mem_cgroup *folio_memcg_rcu(struct folio *folio)
 		return obj_cgroup_memcg(objcg);
 	}
 
+	WARN_ON_ONCE(!rcu_read_lock_held());
+
 	return (struct mem_cgroup *)(memcg_data & ~OBJEXTS_FLAGS_MASK);
 }
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 622d4544edd24..3da0284573857 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2366,7 +2366,7 @@ void mem_cgroup_cancel_charge(struct mem_cgroup *memcg, unsigned int nr_pages)
 
 static void commit_charge(struct folio *folio, struct mem_cgroup *memcg)
 {
-	VM_BUG_ON_FOLIO(folio_memcg(folio), folio);
+	VM_BUG_ON_FOLIO(folio_memcg_charged(folio), folio);
 
 	/*
 	 * Any of the following ensures page's memcg stability:
 	 *
@@ -4617,7 +4617,7 @@ void __mem_cgroup_uncharge(struct folio *folio)
 	struct uncharge_gather ug;
 
 	/* Don't touch folio->lru of any random page, pre-check: */
-	if (!folio_memcg(folio))
+	if (!folio_memcg_charged(folio))
 		return;
 
 	uncharge_gather_clear(&ug);
@@ -4662,7 +4662,7 @@ void mem_cgroup_replace_folio(struct folio *old, struct folio *new)
 		return;
 
 	/* Page cache replacement: new folio already charged? */
-	if (folio_memcg(new))
+	if (folio_memcg_charged(new))
 		return;
 
 	memcg = folio_memcg(old);