From: Muchun Song <songmuchun@bytedance.com>
To: hannes@cmpxchg.org, mhocko@kernel.org, roman.gushchin@linux.dev, shakeel.butt@linux.dev, muchun.song@linux.dev, akpm@linux-foundation.org, vbabka@kernel.org
Cc: cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v3] mm: kmem: add lockdep assertion to obj_cgroup_memcg
Date: Thu, 25 Jul 2024 17:43:30 +0800
Message-Id: <20240725094330.72537-1-songmuchun@bytedance.com>
obj_cgroup_memcg() is only safe, in the sense that the returned memory cgroup
cannot be freed, while the caller is holding the rcu read lock, objcg_lock, or
cgroup_mutex. It is very easy to ignore those conditions when users call
higher-level APIs that call obj_cgroup_memcg() internally, such as
mem_cgroup_from_slab_obj() (see the link below). So it is better to add a
lockdep assertion to obj_cgroup_memcg() to catch those issues as soon as
possible. Because there is no user of obj_cgroup_memcg() that relies on
objcg_lock to keep the returned memory cgroup safe, do not assert objcg_lock
(we would have to export objcg_lock if we really wanted to). Additionally,
objcg_lock is an internal implementation detail of memcg and should not be
accessible outside memcg code.

Some users like __mem_cgroup_uncharge() do not care about the lifetime of the
returned memory cgroup; they only want to know whether the folio is charged to
a memory cgroup, so they do not need to hold any of the required locks. For
that case, introduce a new helper, folio_memcg_charged(). Compared to
folio_memcg(), it also avoids a memory access of objcg->memcg in the kmem
case, a small additional gain.

Link: https://lore.kernel.org/all/20240718083607.42068-1-songmuchun@bytedance.com/
Signed-off-by: Muchun Song
Acked-by: Shakeel Butt
Acked-by: Roman Gushchin
Acked-by: Vlastimil Babka
Tested-by: Marek Szyprowski
---
v3:
 - Use lockdep_assert_once (Vlastimil).
v2:
 - Remove mention of objcg_lock in obj_cgroup_memcg() (Shakeel Butt).

 include/linux/memcontrol.h | 20 +++++++++++++++++---
 mm/memcontrol.c            |  6 +++---
 2 files changed, 20 insertions(+), 6 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index fc94879db4dff..95f823deafeca 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -360,11 +360,11 @@ static inline bool folio_memcg_kmem(struct folio *folio);
  * After the initialization objcg->memcg is always pointing at
  * a valid memcg, but can be atomically swapped to the parent memcg.
  *
- * The caller must ensure that the returned memcg won't be released:
- * e.g. acquire the rcu_read_lock or css_set_lock.
+ * The caller must ensure that the returned memcg won't be released.
  */
 static inline struct mem_cgroup *obj_cgroup_memcg(struct obj_cgroup *objcg)
 {
+	lockdep_assert_once(rcu_read_lock_held() || lockdep_is_held(&cgroup_mutex));
 	return READ_ONCE(objcg->memcg);
 }
 
@@ -438,6 +438,19 @@ static inline struct mem_cgroup *folio_memcg(struct folio *folio)
 	return __folio_memcg(folio);
 }
 
+/*
+ * folio_memcg_charged - If a folio is charged to a memory cgroup.
+ * @folio: Pointer to the folio.
+ *
+ * Returns true if folio is charged to a memory cgroup, otherwise returns false.
+ */
+static inline bool folio_memcg_charged(struct folio *folio)
+{
+	if (folio_memcg_kmem(folio))
+		return __folio_objcg(folio) != NULL;
+	return __folio_memcg(folio) != NULL;
+}
+
 /**
  * folio_memcg_rcu - Locklessly get the memory cgroup associated with a folio.
  * @folio: Pointer to the folio.
@@ -454,7 +467,6 @@ static inline struct mem_cgroup *folio_memcg_rcu(struct folio *folio)
 	unsigned long memcg_data = READ_ONCE(folio->memcg_data);
 
 	VM_BUG_ON_FOLIO(folio_test_slab(folio), folio);
-	WARN_ON_ONCE(!rcu_read_lock_held());
 
 	if (memcg_data & MEMCG_DATA_KMEM) {
 		struct obj_cgroup *objcg;
@@ -463,6 +475,8 @@ static inline struct mem_cgroup *folio_memcg_rcu(struct folio *folio)
 		return obj_cgroup_memcg(objcg);
 	}
 
+	WARN_ON_ONCE(!rcu_read_lock_held());
+
 	return (struct mem_cgroup *)(memcg_data & ~OBJEXTS_FLAGS_MASK);
 }
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 622d4544edd24..3da0284573857 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2366,7 +2366,7 @@ void mem_cgroup_cancel_charge(struct mem_cgroup *memcg, unsigned int nr_pages)
 
 static void commit_charge(struct folio *folio, struct mem_cgroup *memcg)
 {
-	VM_BUG_ON_FOLIO(folio_memcg(folio), folio);
+	VM_BUG_ON_FOLIO(folio_memcg_charged(folio), folio);
 	/*
 	 * Any of the following ensures page's memcg stability:
 	 *
@@ -4617,7 +4617,7 @@ void __mem_cgroup_uncharge(struct folio *folio)
 	struct uncharge_gather ug;
 
 	/* Don't touch folio->lru of any random page, pre-check: */
-	if (!folio_memcg(folio))
+	if (!folio_memcg_charged(folio))
 		return;
 
 	uncharge_gather_clear(&ug);
@@ -4662,7 +4662,7 @@ void mem_cgroup_replace_folio(struct folio *old, struct folio *new)
 		return;
 
 	/* Page cache replacement: new folio already charged? */
-	if (folio_memcg(new))
+	if (folio_memcg_charged(new))
 		return;
 
 	memcg = folio_memcg(old);
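
For context, a minimal usage sketch (not part of the patch): the new assertion
expects callers to hold the rcu read lock (or cgroup_mutex) across
obj_cgroup_memcg() and any use of the returned memcg. The helper name
report_objcg_memcg() below is hypothetical, made up for illustration only.

#include <linux/memcontrol.h>
#include <linux/printk.h>
#include <linux/rcupdate.h>

/* Hypothetical caller: keep the returned memcg stable via the rcu read lock. */
static void report_objcg_memcg(struct obj_cgroup *objcg)
{
	struct mem_cgroup *memcg;

	/*
	 * Without rcu_read_lock() (or cgroup_mutex), the objcg could be
	 * reparented and the returned memcg freed while it is being used;
	 * the new lockdep_assert_once() in obj_cgroup_memcg() would also
	 * warn about such a caller.
	 */
	rcu_read_lock();
	memcg = obj_cgroup_memcg(objcg);
	pr_info("objcg %p belongs to memcg id %u\n", objcg, mem_cgroup_id(memcg));
	rcu_read_unlock();
}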