From patchwork Mon Jul 22 07:08:10 2024
X-Patchwork-Submitter: Muchun Song <songmuchun@bytedance.com>
X-Patchwork-Id: 13738467
From: Muchun Song <songmuchun@bytedance.com>
To: hannes@cmpxchg.org, mhocko@kernel.org, roman.gushchin@linux.dev, shakeel.butt@linux.dev, muchun.song@linux.dev, akpm@linux-foundation.org
Cc: cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH] mm: kmem: add lockdep assertion to obj_cgroup_memcg
Date: Mon, 22 Jul 2024 15:08:10 +0800
Message-Id: <20240722070810.46016-1-songmuchun@bytedance.com>
X-Mailer: git-send-email 2.39.3 (Apple Git-146)
MIME-Version: 1.0
The memory cgroup returned by obj_cgroup_memcg() is only guaranteed not to
be freed while the caller holds the RCU read lock, objcg_lock, or
cgroup_mutex. It is easy to overlook these requirements when calling
higher-level APIs that use obj_cgroup_memcg() internally, such as
mem_cgroup_from_slab_obj() (see the link below).
So add a lockdep assertion to obj_cgroup_memcg() to catch such issues as
early as possible. Since there is currently no user of obj_cgroup_memcg()
that relies on objcg_lock to keep the returned memory cgroup alive, do not
assert on objcg_lock (we would have to export objcg_lock if we really
wanted to) and leave a comment indicating that this is intentional.

Some users, such as __mem_cgroup_uncharge(), do not care about the
lifetime of the returned memory cgroup; they only want to know whether the
folio is charged to a memory cgroup, so they do not need to hold any of
those locks. For this case, introduce a new helper, folio_memcg_charged().
Compared with folio_memcg(), it avoids one memory access (objcg->memcg)
in the kmem case; admittedly, a really small gain.

Link: https://lore.kernel.org/all/20240718083607.42068-1-songmuchun@bytedance.com/
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/memcontrol.h | 22 +++++++++++++++++++---
 mm/memcontrol.c            |  6 +++---
 2 files changed, 22 insertions(+), 6 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index fc94879db4dff..d616c50025098 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -360,11 +360,13 @@ static inline bool folio_memcg_kmem(struct folio *folio);
  * After the initialization objcg->memcg is always pointing at
  * a valid memcg, but can be atomically swapped to the parent memcg.
  *
- * The caller must ensure that the returned memcg won't be released:
- * e.g. acquire the rcu_read_lock or css_set_lock.
+ * The caller must ensure that the returned memcg won't be released.
  */
 static inline struct mem_cgroup *obj_cgroup_memcg(struct obj_cgroup *objcg)
 {
+	WARN_ON_ONCE(!rcu_read_lock_held() &&
+		     /* !lockdep_is_held(&objcg_lock) && */
+		     !lockdep_is_held(&cgroup_mutex));
 	return READ_ONCE(objcg->memcg);
 }
 
@@ -438,6 +440,19 @@ static inline struct mem_cgroup *folio_memcg(struct folio *folio)
 	return __folio_memcg(folio);
 }
 
+/*
+ * folio_memcg_charged - If a folio is charged to a memory cgroup.
+ * @folio: Pointer to the folio.
+ *
+ * Returns true if folio is charged to a memory cgroup, otherwise returns false.
+ */
+static inline bool folio_memcg_charged(struct folio *folio)
+{
+	if (folio_memcg_kmem(folio))
+		return __folio_objcg(folio) != NULL;
+	return __folio_memcg(folio) != NULL;
+}
+
 /**
  * folio_memcg_rcu - Locklessly get the memory cgroup associated with a folio.
  * @folio: Pointer to the folio.
@@ -454,7 +469,6 @@ static inline struct mem_cgroup *folio_memcg_rcu(struct folio *folio)
 	unsigned long memcg_data = READ_ONCE(folio->memcg_data);
 
 	VM_BUG_ON_FOLIO(folio_test_slab(folio), folio);
-	WARN_ON_ONCE(!rcu_read_lock_held());
 
 	if (memcg_data & MEMCG_DATA_KMEM) {
 		struct obj_cgroup *objcg;
@@ -463,6 +477,8 @@ static inline struct mem_cgroup *folio_memcg_rcu(struct folio *folio)
 		return obj_cgroup_memcg(objcg);
 	}
 
+	WARN_ON_ONCE(!rcu_read_lock_held());
+
 	return (struct mem_cgroup *)(memcg_data & ~OBJEXTS_FLAGS_MASK);
 }
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 622d4544edd24..3da0284573857 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2366,7 +2366,7 @@ void mem_cgroup_cancel_charge(struct mem_cgroup *memcg, unsigned int nr_pages)
 
 static void commit_charge(struct folio *folio, struct mem_cgroup *memcg)
 {
-	VM_BUG_ON_FOLIO(folio_memcg(folio), folio);
+	VM_BUG_ON_FOLIO(folio_memcg_charged(folio), folio);
 	/*
 	 * Any of the following ensures page's memcg stability:
 	 *
@@ -4617,7 +4617,7 @@ void __mem_cgroup_uncharge(struct folio *folio)
 	struct uncharge_gather ug;
 
 	/* Don't touch folio->lru of any random page, pre-check: */
-	if (!folio_memcg(folio))
+	if (!folio_memcg_charged(folio))
 		return;
 
 	uncharge_gather_clear(&ug);
@@ -4662,7 +4662,7 @@ void mem_cgroup_replace_folio(struct folio *old, struct folio *new)
 		return;
 
 	/* Page cache replacement: new folio already charged? */
-	if (folio_memcg(new))
+	if (folio_memcg_charged(new))
 		return;
 
 	memcg = folio_memcg(old);