From patchwork Fri Aug 2 23:58:22 2024
X-Patchwork-Submitter: Shakeel Butt
X-Patchwork-Id: 13752092
From: Shakeel Butt <shakeel.butt@linux.dev>
To: Andrew Morton, Johannes Weiner
Cc: Michal Hocko, Roman Gushchin, Muchun Song, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Meta kernel team, cgroups@vger.kernel.org
Subject: [PATCH] memcg: protect concurrent access to mem_cgroup_idr
Date: Fri, 2 Aug 2024 16:58:22 -0700
Message-ID: <20240802235822.1830976-1-shakeel.butt@linux.dev>

Commit 73f576c04b94 ("mm: memcontrol: fix cgroup creation failure after
many small jobs") decoupled memcg IDs from the CSS ID space to fix
cgroup creation failures. It introduced an IDR to maintain the memcg ID
space. The IDR relies on external synchronization for modifications.
For mem_cgroup_idr, idr_alloc() and idr_replace() happen within css
callbacks and are thus serialized by cgroup_mutex. However,
idr_remove() for mem_cgroup_idr was not protected against concurrency
and can run concurrently for different memcgs when their refcounts drop
to zero. Fix that.

We have been seeing list_lru-based kernel crashes at a low frequency in
our fleet for a long time. These crashes occurred in different parts of
the list_lru code, including list_lru_add(), list_lru_del() and the
reparenting code. Upon further inspection, it looked like, for a given
object (dentry or inode), the super_block's list_lru did not have a
list_lru_one for the memcg of that object.

The initial suspicion was that either the object was not allocated
through kmem_cache_alloc_lru(), or that memcg_list_lru_alloc() somehow
failed to allocate a list_lru_one for a memcg but returned success. No
evidence was found for either case. Digging deeper, we started seeing
situations where a valid memcg's ID was not present in mem_cgroup_idr,
and in some cases multiple valid memcgs had the same ID with
mem_cgroup_idr pointing to only one of them. The most reasonable
explanation is that these situations arise from a race between multiple
idr_remove() calls, or between idr_alloc()/idr_replace() and
idr_remove(). These races cause multiple memcgs to acquire the same ID;
offlining one of them then cleans up the list_lrus on the system for
all of them, and later list_lru accesses from the other memcgs crash
due to the missing list_lru_one.
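Concretely, one interleaving that can produce a duplicate ID looks as
follows (an illustrative sketch; the CPU assignment and exact call
paths are assumptions, not a captured trace):

  CPU 0: refcount of memcg A drops to 0     CPU 1: new memcg B created
  mem_cgroup_id_remove(A)                   mem_cgroup_alloc()
    idr_remove(&mem_cgroup_idr, A's id)       idr_alloc(&mem_cgroup_idr, ...)

idr_remove() runs from refcount release and is not covered by
cgroup_mutex, so nothing orders the two IDR mutations. The concurrent
tree updates can corrupt the IDR, handing B an ID that a live memcg
still holds, or dropping a valid ID from the IDR entirely.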
Fixes: 73f576c04b94 ("mm: memcontrol: fix cgroup creation failure after many small jobs")
Signed-off-by: Shakeel Butt
Acked-by: Muchun Song
Acked-by: Johannes Weiner
Reviewed-by: Roman Gushchin
Acked-by: Michal Hocko
---
 mm/memcontrol.c | 22 ++++++++++++++++++++--
 1 file changed, 20 insertions(+), 2 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index b889a7fbf382..8971d3473a7b 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3364,11 +3364,28 @@ static void memcg_wb_domain_size_changed(struct mem_cgroup *memcg)
 
 #define MEM_CGROUP_ID_MAX	((1UL << MEM_CGROUP_ID_SHIFT) - 1)
 static DEFINE_IDR(mem_cgroup_idr);
+static DEFINE_SPINLOCK(memcg_idr_lock);
+
+static int mem_cgroup_alloc_id(void)
+{
+	int ret;
+
+	idr_preload(GFP_KERNEL);
+	spin_lock(&memcg_idr_lock);
+	ret = idr_alloc(&mem_cgroup_idr, NULL, 1, MEM_CGROUP_ID_MAX + 1,
+			GFP_NOWAIT);
+	spin_unlock(&memcg_idr_lock);
+	idr_preload_end();
+	return ret;
+}
 
 static void mem_cgroup_id_remove(struct mem_cgroup *memcg)
 {
 	if (memcg->id.id > 0) {
+		spin_lock(&memcg_idr_lock);
 		idr_remove(&mem_cgroup_idr, memcg->id.id);
+		spin_unlock(&memcg_idr_lock);
+
 		memcg->id.id = 0;
 	}
 }
@@ -3502,8 +3519,7 @@ static struct mem_cgroup *mem_cgroup_alloc(struct mem_cgroup *parent)
 	if (!memcg)
 		return ERR_PTR(error);
 
-	memcg->id.id = idr_alloc(&mem_cgroup_idr, NULL,
-				 1, MEM_CGROUP_ID_MAX + 1, GFP_KERNEL);
+	memcg->id.id = mem_cgroup_alloc_id();
 	if (memcg->id.id < 0) {
 		error = memcg->id.id;
 		goto fail;
@@ -3648,7 +3664,9 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
 	 * publish it here at the end of onlining. This matches the
 	 * regular ID destruction during offlining.
 	 */
+	spin_lock(&memcg_idr_lock);
 	idr_replace(&mem_cgroup_idr, memcg, memcg->id.id);
+	spin_unlock(&memcg_idr_lock);
 
 	return 0;
 offline_kmem:
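(Aside on the allocation pattern above, explanatory rather than part of
the patch: idr_alloc() may need to allocate internal radix-tree nodes
and must not sleep while memcg_idr_lock is held, so
mem_cgroup_alloc_id() first preallocates those nodes with
idr_preload(GFP_KERNEL) in a sleepable context and then passes
GFP_NOWAIT inside the lock, letting the allocation draw from the
per-CPU preload cache instead of failing.)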