From patchwork Mon Jun 24 17:53:07 2024
X-Patchwork-Submitter: Kairui Song
X-Patchwork-Id: 13709936
From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, Matthew Wilcox, Johannes Weiner, Roman Gushchin,
    Waiman Long, Shakeel Butt, Nhat Pham, Michal Hocko, Chengming Zhou,
    Qi Zheng, Muchun Song, Chris Li, Yosry Ahmed, "Huang, Ying", Kairui Song
Subject: [PATCH 1/7] mm/swap, workingset: make anon workingset nodes memcg aware
Date: Tue, 25 Jun 2024 01:53:07 +0800
Message-ID: <20240624175313.47329-2-ryncsn@gmail.com>
In-Reply-To: <20240624175313.47329-1-ryncsn@gmail.com>
References: <20240624175313.47329-1-ryncsn@gmail.com>
From: Kairui Song

Currently, the (shadow) nodes of the swap cache are not accounted to
their corresponding memory cgroup; instead, they are all accounted to
the root cgroup. This leads to inaccurate accounting and ineffective
reclaiming.

This issue is similar to commit 7b785645e8f1 ("mm: fix page cache
convergence regression"), where page cache shadow nodes were
incorrectly accounted. That was due to the accidental dropping of the
accounting flag during the XArray conversion in commit a28334862993
("page cache: Finish XArray conversion").

However, this fix has a different cause: swap cache shadow nodes were
never accounted, even before the XArray conversion, since they did not
exist until commit 3852f6768ede ("mm/swapcache: support to handle the
shadow entries"), which landed years after the XArray conversion.
Without shadow nodes, swap cache nodes can only use a very small amount
of memory, so reclaiming them is not very important. But now, with
shadow nodes, if a cgroup swaps out a large amount of memory, the nodes
can take up a lot of memory.

This can be easily fixed by adding the proper flags and LRU setters.
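[Sketch for context, not part of the patch: the pattern being applied
here mirrors what the page cache already does for its shadow entries.
example_init() and example_store() are hypothetical illustrative
helpers; XA_FLAGS_ACCOUNT, xas_set_lru(), xas_set_update() and
workingset_update_node are real kernel interfaces, and shadow_nodes is
the mm-internal workingset list_lru used by the diff below.]

static void example_init(struct xarray *xa)
{
	/* Internal xa_node allocations are charged to the allocating memcg. */
	xa_init_flags(xa, XA_FLAGS_LOCK_IRQ | XA_FLAGS_ACCOUNT);
}

static void example_store(struct xarray *xa, unsigned long index, void *shadow)
{
	XA_STATE(xas, xa, index);

	/* Keep nodes holding shadow entries visible to the workingset shrinker. */
	xas_set_update(&xas, workingset_update_node);
	xas_set_lru(&xas, &shadow_nodes);

	xas_lock_irq(&xas);
	xas_store(&xas, shadow);
	xas_unlock_irq(&xas);
}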
Signed-off-by: Kairui Song
---
 mm/swap_state.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/swap_state.c b/mm/swap_state.c
index 642c30d8376c..68afaaf1c09b 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -95,6 +95,7 @@ int add_to_swap_cache(struct folio *folio, swp_entry_t entry,
 	void *old;
 
 	xas_set_update(&xas, workingset_update_node);
+	xas_set_lru(&xas, &shadow_nodes);
 
 	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
 	VM_BUG_ON_FOLIO(folio_test_swapcache(folio), folio);
@@ -714,7 +715,7 @@ int init_swap_address_space(unsigned int type, unsigned long nr_pages)
 		return -ENOMEM;
 	for (i = 0; i < nr; i++) {
 		space = spaces + i;
-		xa_init_flags(&space->i_pages, XA_FLAGS_LOCK_IRQ);
+		xa_init_flags(&space->i_pages, XA_FLAGS_LOCK_IRQ | XA_FLAGS_ACCOUNT);
 		atomic_set(&space->i_mmap_writable, 0);
 		space->a_ops = &swap_aops;
 		/* swap cache doesn't use writeback related tags */

From patchwork Mon Jun 24 17:53:08 2024
X-Patchwork-Submitter: Kairui Song
X-Patchwork-Id: 13709937
From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, Matthew Wilcox, Johannes Weiner, Roman Gushchin,
    Waiman Long, Shakeel Butt, Nhat Pham, Michal Hocko, Chengming Zhou,
    Qi Zheng, Muchun Song, Chris Li, Yosry Ahmed, "Huang, Ying", Kairui Song
Subject: [PATCH 2/7] mm/list_lru: don't pass unnecessary key parameters
Date: Tue, 25 Jun 2024 01:53:08 +0800
Message-ID: <20240624175313.47329-3-ryncsn@gmail.com>
In-Reply-To: <20240624175313.47329-1-ryncsn@gmail.com>
References: <20240624175313.47329-1-ryncsn@gmail.com>
From: Kairui Song

When LOCKDEP is not enabled, lock_class_key is an empty struct that is
never used. But the list_lru initialization function still takes a
placeholder pointer as a parameter, and the compiler cannot optimize it
away because the function is not static and is exported.

Remove this parameter and move the key inside the list_lru struct,
using it only when LOCKDEP is enabled. Kernel builds with LOCKDEP will
be slightly larger, while !LOCKDEP builds (the common case) will be
slightly smaller.
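[Usage sketch, not part of the patch: how a caller passes its lockdep
key through the new helper. my_lru, my_lru_key and my_subsys_init are
hypothetical names; list_lru_init_memcg_key() is introduced by the
diff below.]

static struct lock_class_key my_lru_key;	/* empty struct when !CONFIG_LOCKDEP */
static struct list_lru my_lru;

static int __init my_subsys_init(struct shrinker *my_shrinker)
{
	/*
	 * With CONFIG_LOCKDEP=n the key assignment compiles away and this
	 * behaves exactly like list_lru_init_memcg(&my_lru, my_shrinker).
	 */
	return list_lru_init_memcg_key(&my_lru, my_shrinker, &my_lru_key);
}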
Signed-off-by: Kairui Song
---
 include/linux/list_lru.h | 18 +++++++++++++++---
 mm/list_lru.c            |  9 +++++----
 mm/workingset.c          |  4 ++--
 3 files changed, 22 insertions(+), 9 deletions(-)

diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
index 792b67ceb631..2e5132905f42 100644
--- a/include/linux/list_lru.h
+++ b/include/linux/list_lru.h
@@ -56,16 +56,28 @@ struct list_lru {
 	bool memcg_aware;
 	struct xarray xa;
 #endif
+#ifdef CONFIG_LOCKDEP
+	struct lock_class_key *key;
+#endif
 };
 
 void list_lru_destroy(struct list_lru *lru);
 int __list_lru_init(struct list_lru *lru, bool memcg_aware,
-		    struct lock_class_key *key, struct shrinker *shrinker);
+		    struct shrinker *shrinker);
 
 #define list_lru_init(lru)	\
-	__list_lru_init((lru), false, NULL, NULL)
+	__list_lru_init((lru), false, NULL)
 #define list_lru_init_memcg(lru, shrinker)	\
-	__list_lru_init((lru), true, NULL, shrinker)
+	__list_lru_init((lru), true, shrinker)
+
+static inline int list_lru_init_memcg_key(struct list_lru *lru, struct shrinker *shrinker,
+					  struct lock_class_key *key)
+{
+#ifdef CONFIG_LOCKDEP
+	lru->key = key;
+#endif
+	return list_lru_init_memcg(lru, shrinker);
+}
 
 int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru,
 			 gfp_t gfp);
diff --git a/mm/list_lru.c b/mm/list_lru.c
index 3fd64736bc45..264713caa713 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -546,8 +546,7 @@ static void memcg_destroy_list_lru(struct list_lru *lru)
 }
 #endif /* CONFIG_MEMCG_KMEM */
 
-int __list_lru_init(struct list_lru *lru, bool memcg_aware,
-		    struct lock_class_key *key, struct shrinker *shrinker)
+int __list_lru_init(struct list_lru *lru, bool memcg_aware, struct shrinker *shrinker)
 {
 	int i;
 
@@ -567,8 +566,10 @@ int __list_lru_init(struct list_lru *lru, bool memcg_aware,
 
 	for_each_node(i) {
 		spin_lock_init(&lru->node[i].lock);
-		if (key)
-			lockdep_set_class(&lru->node[i].lock, key);
+#ifdef CONFIG_LOCKDEP
+		if (lru->key)
+			lockdep_set_class(&lru->node[i].lock, lru->key);
+#endif
 		init_one_lru(&lru->node[i].lru);
 	}
 
diff --git a/mm/workingset.c b/mm/workingset.c
index c22adb93622a..1801fbe5183c 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -815,8 +815,8 @@ static int __init workingset_init(void)
 	if (!workingset_shadow_shrinker)
 		goto err;
 
-	ret = __list_lru_init(&shadow_nodes, true, &shadow_nodes_key,
-			      workingset_shadow_shrinker);
+	ret = list_lru_init_memcg_key(&shadow_nodes, workingset_shadow_shrinker,
+				      &shadow_nodes_key);
 	if (ret)
 		goto err_list_lru;
From patchwork Mon Jun 24 17:53:09 2024
X-Patchwork-Submitter: Kairui Song
X-Patchwork-Id: 13709938
From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, Matthew Wilcox, Johannes Weiner, Roman Gushchin,
    Waiman Long, Shakeel Butt, Nhat Pham, Michal Hocko, Chengming Zhou,
    Qi Zheng, Muchun Song, Chris Li, Yosry Ahmed, "Huang, Ying", Kairui Song
Subject: [PATCH 3/7] mm/list_lru: don't export list_lru_add
Date: Tue, 25 Jun 2024 01:53:09 +0800
Message-ID: <20240624175313.47329-4-ryncsn@gmail.com>
In-Reply-To: <20240624175313.47329-1-ryncsn@gmail.com>
References: <20240624175313.47329-1-ryncsn@gmail.com>

From: Kairui Song

list_lru_add() is no longer used by any module; just remove the export.
Signed-off-by: Kairui Song
Reviewed-by: Muchun Song
---
 mm/list_lru.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/mm/list_lru.c b/mm/list_lru.c
index 264713caa713..9d9ec8661354 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -105,7 +105,6 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item, int nid,
 	spin_unlock(&nlru->lock);
 	return false;
 }
-EXPORT_SYMBOL_GPL(list_lru_add);
 
 bool list_lru_add_obj(struct list_lru *lru, struct list_head *item)
 {

From patchwork Mon Jun 24 17:53:10 2024
X-Patchwork-Submitter: Kairui Song
X-Patchwork-Id: 13709939
From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, Matthew Wilcox, Johannes Weiner, Roman Gushchin,
    Waiman Long, Shakeel Butt, Nhat Pham, Michal Hocko, Chengming Zhou,
    Qi Zheng, Muchun Song, Chris Li, Yosry Ahmed, "Huang, Ying", Kairui Song
Subject: [PATCH 4/7] mm/list_lru: code clean up for reparenting
Date: Tue, 25 Jun 2024 01:53:10 +0800
Message-ID: <20240624175313.47329-5-ryncsn@gmail.com>
In-Reply-To: <20240624175313.47329-1-ryncsn@gmail.com>
References: <20240624175313.47329-1-ryncsn@gmail.com>
From: Kairui Song

No functional change; this just restructures the code and fixes a
comment. The list_lrus are not empty until the
memcg_reparent_list_lru_node() calls are all done, so the comment in
memcg_offline_kmem() was slightly inaccurate.

Signed-off-by: Kairui Song
Reviewed-by: Muchun Song
---
 mm/list_lru.c   | 39 +++++++++++++++++----------------------
 mm/memcontrol.c |  7 -------
 2 files changed, 17 insertions(+), 29 deletions(-)

diff --git a/mm/list_lru.c b/mm/list_lru.c
index 9d9ec8661354..4c619857e916 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -405,35 +405,16 @@ static void memcg_reparent_list_lru_node(struct list_lru *lru, int nid,
 	spin_unlock_irq(&nlru->lock);
 }
 
-static void memcg_reparent_list_lru(struct list_lru *lru,
-				    int src_idx, struct mem_cgroup *dst_memcg)
-{
-	int i;
-
-	for_each_node(i)
-		memcg_reparent_list_lru_node(lru, i, src_idx, dst_memcg);
-
-	memcg_list_lru_free(lru, src_idx);
-}
-
 void memcg_reparent_list_lrus(struct mem_cgroup *memcg, struct mem_cgroup *parent)
 {
 	struct cgroup_subsys_state *css;
 	struct list_lru *lru;
-	int src_idx = memcg->kmemcg_id;
+	int src_idx = memcg->kmemcg_id, i;
 
 	/*
 	 * Change kmemcg_id of this cgroup and all its descendants to the
 	 * parent's id, and then move all entries from this cgroup's list_lrus
 	 * to ones of the parent.
-	 *
-	 * After we have finished, all list_lrus corresponding to this cgroup
-	 * are guaranteed to remain empty. So we can safely free this cgroup's
-	 * list lrus in memcg_list_lru_free().
-	 *
-	 * Changing ->kmemcg_id to the parent can prevent memcg_list_lru_alloc()
-	 * from allocating list lrus for this cgroup after memcg_list_lru_free()
-	 * call.
 	 */
 	rcu_read_lock();
 	css_for_each_descendant_pre(css, &memcg->css) {
@@ -444,9 +425,23 @@ void memcg_reparent_list_lrus(struct mem_cgroup *memcg, struct mem_cgroup *paren
 	}
 	rcu_read_unlock();
 
+	/*
+	 * With kmemcg_id set to parent, holding the lru lock below can
+	 * prevent list_lru_{add,del,isolate} from touching the lru, safe
+	 * to reparent.
+	 */
 	mutex_lock(&list_lrus_mutex);
-	list_for_each_entry(lru, &memcg_list_lrus, list)
-		memcg_reparent_list_lru(lru, src_idx, parent);
+	list_for_each_entry(lru, &memcg_list_lrus, list) {
+		for_each_node(i)
+			memcg_reparent_list_lru_node(lru, i, src_idx, parent);
+
+		/*
+		 * Here all list_lrus corresponding to the cgroup are guaranteed
+		 * to remain empty, we can safely free this lru, any further
+		 * memcg_list_lru_alloc() call will simply bail out.
+		 */
+		memcg_list_lru_free(lru, src_idx);
+	}
 	mutex_unlock(&list_lrus_mutex);
 }
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 71fe2a95b8bd..fc35c1d3f109 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -4187,13 +4187,6 @@ static void memcg_offline_kmem(struct mem_cgroup *memcg)
 		parent = root_mem_cgroup;
 
 	memcg_reparent_objcgs(memcg, parent);
-
-	/*
-	 * After we have finished memcg_reparent_objcgs(), all list_lrus
-	 * corresponding to this cgroup are guaranteed to remain empty.
-	 * The ordering is imposed by list_lru_node->lock taken by
-	 * memcg_reparent_list_lrus().
-	 */
 	memcg_reparent_list_lrus(memcg, parent);
 }
 #else
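[Reading aid, not part of the patch: the reparenting flow after this
cleanup, abridged from the functions changed above.]

/*
 * memcg_reparent_list_lrus(memcg, parent), abridged:
 *
 *  1. Switch kmemcg_id of memcg and all its descendants to the parent's
 *     id (RCU-protected css walk), so any new list_lru_{add,del} call
 *     already resolves to the parent's lists.
 *  2. Then, under list_lrus_mutex, for each registered lru:
 *
 *         for_each_node(i)
 *                 memcg_reparent_list_lru_node(lru, i, src_idx, parent);
 *         memcg_list_lru_free(lru, src_idx);
 *
 *     Each per-node call takes list_lru_node->lock and splices the old
 *     list into the parent's, so the lists are guaranteed empty by the
 *     time memcg_list_lru_free() runs.
 */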
From patchwork Mon Jun 24 17:53:11 2024
X-Patchwork-Submitter: Kairui Song
X-Patchwork-Id: 13709940
From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, Matthew Wilcox, Johannes Weiner, Roman Gushchin,
    Waiman Long, Shakeel Butt, Nhat Pham, Michal Hocko, Chengming Zhou,
    Qi Zheng, Muchun Song, Chris Li, Yosry Ahmed, "Huang, Ying", Kairui Song
Subject: [PATCH 5/7] mm/list_lru: simplify reparenting and initial allocation
Date: Tue, 25 Jun 2024 01:53:11 +0800
Message-ID: <20240624175313.47329-6-ryncsn@gmail.com>
In-Reply-To: <20240624175313.47329-1-ryncsn@gmail.com>
References: <20240624175313.47329-1-ryncsn@gmail.com>
From: Kairui Song

Currently, there is a lot of code for detecting reparent races, using
kmemcg_id as the synchronization flag, and an intermediate table is
required to record and compare the kmemcg_id. We can simplify this by
just checking the cgroup css status and skipping the cgroup if it is
being offlined. On the reparenting side, ensure that no allocation is
ongoing and that no further allocation will occur by using the XArray
lock as a barrier.

Combined with an O(n^2) top-down walk for the allocation, we get rid of
the intermediate table allocation completely. Despite being O(n^2), it
should actually be faster, because it is not practical to have very
deep cgroup hierarchies. This also avoids changing kmemcg_id before
reparenting, giving cgroups a stable index into list_lru_memcg.

After this change, it is possible for a dying cgroup to see a NULL
value in the XArray slot corresponding to its kmemcg_id, because the
kmemcg_id now points to an empty slot. In that case, just fall back to
its parent.
As a result the code is simpler, and the following test also showed a
performance gain (6 test runs):

  mkdir /tmp/test-fs
  modprobe brd rd_nr=1 rd_size=16777216
  mkfs.xfs /dev/ram0
  mount -t xfs /dev/ram0 /tmp/test-fs
  worker() {
      echo TEST-CONTENT > "/tmp/test-fs/$1"
  }
  do_test() {
      for i in $(seq 1 2048); do
          (exec sh -c 'echo "$PPID"') > "/sys/fs/cgroup/benchmark/$i/cgroup.procs"
          worker "$i" &
      done; wait
      echo 1 > /proc/sys/vm/drop_caches
  }
  mkdir -p /sys/fs/cgroup/benchmark
  echo +memory > /sys/fs/cgroup/benchmark/cgroup.subtree_control
  for i in $(seq 1 2048); do
      rmdir "/sys/fs/cgroup/benchmark/$i" &>/dev/null
      mkdir -p "/sys/fs/cgroup/benchmark/$i"
  done
  time do_test

Before:
  real 0m5.932s  user 0m2.366s  sys 0m5.583s
  real 0m5.939s  user 0m2.347s  sys 0m5.597s
  real 0m6.149s  user 0m2.398s  sys 0m5.761s
  real 0m5.945s  user 0m2.403s  sys 0m5.547s
  real 0m5.925s  user 0m2.293s  sys 0m5.651s
  real 0m6.017s  user 0m2.367s  sys 0m5.686s

After:
  real 0m5.712s  user 0m2.343s  sys 0m5.307s
  real 0m5.885s  user 0m2.326s  sys 0m5.518s
  real 0m5.694s  user 0m2.347s  sys 0m5.264s
  real 0m5.865s  user 0m2.300s  sys 0m5.545s
  real 0m5.748s  user 0m2.273s  sys 0m5.424s
  real 0m5.756s  user 0m2.318s  sys 0m5.398s

Signed-off-by: Kairui Song
---
 mm/list_lru.c | 182 ++++++++++++++++++++++----------------------------
 mm/zswap.c    |   7 +-
 2 files changed, 81 insertions(+), 108 deletions(-)

diff --git a/mm/list_lru.c b/mm/list_lru.c
index 4c619857e916..ac8aec8451dd 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -59,6 +59,20 @@ list_lru_from_memcg_idx(struct list_lru *lru, int nid, int idx)
 	}
 	return &lru->node[nid].lru;
 }
+
+static inline struct list_lru_one *
+list_lru_from_memcg(struct list_lru *lru, int nid, struct mem_cgroup *memcg)
+{
+	struct list_lru_one *l;
+again:
+	l = list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
+	if (likely(l))
+		return l;
+
+	memcg = parent_mem_cgroup(memcg);
+	WARN_ON(!css_is_dying(&memcg->css));
+	goto again;
+}
 #else
 static void list_lru_register(struct list_lru *lru)
 {
@@ -83,6 +97,12 @@ list_lru_from_memcg_idx(struct list_lru *lru, int nid, int idx)
 {
 	return &lru->node[nid].lru;
 }
+
+static inline struct list_lru_one *
+list_lru_from_memcg(struct list_lru *lru, int nid, int idx)
+{
+	return &lru->node[nid].lru;
+}
 #endif /* CONFIG_MEMCG_KMEM */
 
 bool list_lru_add(struct list_lru *lru, struct list_head *item, int nid,
@@ -93,7 +113,7 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item, int nid,
 
 	spin_lock(&nlru->lock);
 	if (list_empty(item)) {
-		l = list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
+		l = list_lru_from_memcg(lru, nid, memcg);
 		list_add_tail(item, &l->list);
 		/* Set shrinker bit if the first element was added */
 		if (!l->nr_items++)
@@ -124,7 +144,7 @@ bool list_lru_del(struct list_lru *lru, struct list_head *item, int nid,
 
 	spin_lock(&nlru->lock);
 	if (!list_empty(item)) {
-		l = list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
+		l = list_lru_from_memcg(lru, nid, memcg);
 		list_del_init(item);
 		l->nr_items--;
 		nlru->nr_items--;
@@ -339,20 +359,6 @@ static struct list_lru_memcg *memcg_init_list_lru_one(gfp_t gfp)
 	return mlru;
 }
 
-static void memcg_list_lru_free(struct list_lru *lru, int src_idx)
-{
-	struct list_lru_memcg *mlru = xa_erase_irq(&lru->xa, src_idx);
-
-	/*
-	 * The __list_lru_walk_one() can walk the list of this node.
-	 * We need kvfree_rcu() here. And the walking of the list
-	 * is under lru->node[nid]->lock, which can serve as a RCU
-	 * read-side critical section.
-	 */
-	if (mlru)
-		kvfree_rcu(mlru, rcu);
-}
-
 static inline void memcg_init_list_lru(struct list_lru *lru, bool memcg_aware)
 {
 	if (memcg_aware)
@@ -377,22 +383,18 @@ static void memcg_destroy_list_lru(struct list_lru *lru)
 }
 
 static void memcg_reparent_list_lru_node(struct list_lru *lru, int nid,
-					 int src_idx, struct mem_cgroup *dst_memcg)
+					 struct list_lru_one *src,
+					 struct mem_cgroup *dst_memcg)
 {
 	struct list_lru_node *nlru = &lru->node[nid];
-	int dst_idx = dst_memcg->kmemcg_id;
-	struct list_lru_one *src, *dst;
+	struct list_lru_one *dst;
 
 	/*
 	 * Since list_lru_{add,del} may be called under an IRQ-safe lock,
 	 * we have to use IRQ-safe primitives here to avoid deadlock.
 	 */
 	spin_lock_irq(&nlru->lock);
-
-	src = list_lru_from_memcg_idx(lru, nid, src_idx);
-	if (!src)
-		goto out;
-	dst = list_lru_from_memcg_idx(lru, nid, dst_idx);
+	dst = list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(dst_memcg));
 
 	list_splice_init(&src->list, &dst->list);
 
@@ -401,46 +403,45 @@ static void memcg_reparent_list_lru_node(struct list_lru *lru, int nid,
 		set_shrinker_bit(dst_memcg, nid, lru_shrinker_id(lru));
 		src->nr_items = 0;
 	}
-out:
 	spin_unlock_irq(&nlru->lock);
 }
 
 void memcg_reparent_list_lrus(struct mem_cgroup *memcg, struct mem_cgroup *parent)
 {
-	struct cgroup_subsys_state *css;
 	struct list_lru *lru;
-	int src_idx = memcg->kmemcg_id, i;
-
-	/*
-	 * Change kmemcg_id of this cgroup and all its descendants to the
-	 * parent's id, and then move all entries from this cgroup's list_lrus
-	 * to ones of the parent.
-	 */
-	rcu_read_lock();
-	css_for_each_descendant_pre(css, &memcg->css) {
-		struct mem_cgroup *child;
-
-		child = mem_cgroup_from_css(css);
-		WRITE_ONCE(child->kmemcg_id, parent->kmemcg_id);
-	}
-	rcu_read_unlock();
+	int i;
 
-	/*
-	 * With kmemcg_id set to parent, holding the lru lock below can
-	 * prevent list_lru_{add,del,isolate} from touching the lru, safe
-	 * to reparent.
-	 */
 	mutex_lock(&list_lrus_mutex);
 	list_for_each_entry(lru, &memcg_list_lrus, list) {
+		struct list_lru_memcg *mlru;
+		XA_STATE(xas, &lru->xa, memcg->kmemcg_id);
+
+		/*
+		 * Lock the XArray to ensure no ongoing allocation and that
+		 * any further allocation will see css_is_dying().
+		 */
+		xas_lock_irq(&xas);
+		mlru = xas_load(&xas);
+		if (mlru)
+			xas_store(&xas, NULL);
+		xas_unlock_irq(&xas);
+		if (!mlru)
+			continue;
+
+		/*
+		 * With the XArray value set to NULL, holding the lru lock below
+		 * prevents list_lru_{add,del,isolate} from touching the lru,
+		 * so it is safe to reparent.
+		 */
 		for_each_node(i)
-			memcg_reparent_list_lru_node(lru, i, src_idx, parent);
+			memcg_reparent_list_lru_node(lru, i, &mlru->node[i], parent);
 
 		/*
 		 * Here all list_lrus corresponding to the cgroup are guaranteed
 		 * to remain empty, we can safely free this lru, any further
 		 * memcg_list_lru_alloc() call will simply bail out.
 		 */
-		memcg_list_lru_free(lru, src_idx);
+		kvfree_rcu(mlru, rcu);
 	}
 	mutex_unlock(&list_lrus_mutex);
 }
@@ -456,78 +457,51 @@ static inline bool memcg_list_lru_allocated(struct mem_cgroup *memcg,
 int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru,
 			 gfp_t gfp)
 {
-	int i;
 	unsigned long flags;
-	struct list_lru_memcg_table {
-		struct list_lru_memcg *mlru;
-		struct mem_cgroup *memcg;
-	} *table;
+	struct list_lru_memcg *mlru;
+	struct mem_cgroup *pos, *parent;
 	XA_STATE(xas, &lru->xa, 0);
 
 	if (!list_lru_memcg_aware(lru) || memcg_list_lru_allocated(memcg, lru))
 		return 0;
 
 	gfp &= GFP_RECLAIM_MASK;
-	table = kmalloc_array(memcg->css.cgroup->level, sizeof(*table), gfp);
-	if (!table)
-		return -ENOMEM;
-
 	/*
 	 * Because the list_lru can be reparented to the parent cgroup's
 	 * list_lru, we should make sure that this cgroup and all its
 	 * ancestors have allocated list_lru_memcg.
 	 */
-	for (i = 0; memcg; memcg = parent_mem_cgroup(memcg), i++) {
-		if (memcg_list_lru_allocated(memcg, lru))
-			break;
-
-		table[i].memcg = memcg;
-		table[i].mlru = memcg_init_list_lru_one(gfp);
-		if (!table[i].mlru) {
-			while (i--)
-				kfree(table[i].mlru);
-			kfree(table);
-			return -ENOMEM;
+	do {
+		/*
+		 * Keep finding the farthest parent that wasn't populated
+		 * until memcg itself is found.
+		 */
+		pos = memcg;
+		parent = parent_mem_cgroup(pos);
+		while (parent && !memcg_list_lru_allocated(parent, lru)) {
+			pos = parent;
+			parent = parent_mem_cgroup(pos);
 		}
-	}
 
-	xas_lock_irqsave(&xas, flags);
-	while (i--) {
-		int index = READ_ONCE(table[i].memcg->kmemcg_id);
-		struct list_lru_memcg *mlru = table[i].mlru;
-
-		xas_set(&xas, index);
-retry:
-		if (unlikely(index < 0 || xas_error(&xas) || xas_load(&xas))) {
-			kfree(mlru);
-		} else {
-			xas_store(&xas, mlru);
-			if (xas_error(&xas) == -ENOMEM) {
+		mlru = memcg_init_list_lru_one(gfp);
+		do {
+			bool alloced = false;
+
+			xas_set(&xas, pos->kmemcg_id);
+			xas_lock_irqsave(&xas, flags);
+			if (!css_is_dying(&pos->css) && !xas_load(&xas)) {
+				xas_store(&xas, mlru);
+				alloced = true;
+			}
+			if (!alloced || xas_error(&xas)) {
 				xas_unlock_irqrestore(&xas, flags);
-				if (xas_nomem(&xas, gfp))
-					xas_set_err(&xas, 0);
-				xas_lock_irqsave(&xas, flags);
-				/*
-				 * The xas lock has been released, this memcg
-				 * can be reparented before us. So reload
-				 * memcg id. More details see the comments
-				 * in memcg_reparent_list_lrus().
-				 */
-				index = READ_ONCE(table[i].memcg->kmemcg_id);
-				if (index < 0)
-					xas_set_err(&xas, 0);
-				else if (!xas_error(&xas) && index != xas.xa_index)
-					xas_set(&xas, index);
-				goto retry;
+				kfree(mlru);
+				goto out;
 			}
-		}
-	}
-
-	/* xas_nomem() is used to free memory instead of memory allocation. */
-	if (xas.xa_alloc)
-		xas_nomem(&xas, gfp);
-	xas_unlock_irqrestore(&xas, flags);
-	kfree(table);
-
+			xas_unlock_irqrestore(&xas, flags);
+		} while (xas_nomem(&xas, gfp));
+	} while (pos != memcg);
+out:
 	return xas_error(&xas);
 }
 #else
diff --git a/mm/zswap.c b/mm/zswap.c
index a50e2986cd2f..c6e2256347ff 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -718,12 +718,11 @@ static void zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
 
 /*
  * Note that it is safe to use rcu_read_lock() here, even in the face of
- * concurrent memcg offlining. Thanks to the memcg->kmemcg_id indirection
- * used in list_lru lookup, only two scenarios are possible:
+ * concurrent memcg offlining:
  *
- * 1. list_lru_add() is called before memcg->kmemcg_id is updated. The
+ * 1. list_lru_add() is called before list_lru_memcg is erased. The
 *    new entry will be reparented to memcg's parent's list_lru.
- * 2. list_lru_add() is called after memcg->kmemcg_id is updated. The + * 2. list_lru_add() is called after list_lru_memcg is erased. The * new entry will be added directly to memcg's parent's list_lru. * * Similar reasoning holds for list_lru_del().
From patchwork Mon Jun 24 17:53:12 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kairui Song X-Patchwork-Id: 13709941 From: Kairui Song To: linux-mm@kvack.org Cc: Andrew Morton , Matthew Wilcox , Johannes Weiner , Roman Gushchin , Waiman Long , Shakeel Butt , Nhat Pham , Michal Hocko , Chengming Zhou , Qi Zheng , Muchun Song , Chris Li , Yosry Ahmed , "Huang, Ying" , Kairui Song Subject: [PATCH 6/7] mm/list_lru: split the lock to per-cgroup scope Date: Tue, 25 Jun 2024 01:53:12 +0800 Message-ID: <20240624175313.47329-7-ryncsn@gmail.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20240624175313.47329-1-ryncsn@gmail.com> References: <20240624175313.47329-1-ryncsn@gmail.com> Reply-To: Kairui Song MIME-Version: 1.0
From: Kairui Song Currently, every list_lru has a per-node lock that protects adding, deletion, isolation, and reparenting of all list_lru_one instances belonging to this list_lru on this node. This lock contention is heavy when multiple cgroups modify the same list_lru. The lock can be split into per-cgroup scope to reduce contention. To achieve this, we need a stable list_lru_one for every cgroup. This commit adds a lock to each list_lru_one and introduces a helper function, lock_list_lru_of_memcg, making it possible to pin the list_lru of a memcg. The reparenting process is then reworked: reparenting switches the list_lru_one instances one by one. By locking each instance and marking it dead using the nr_items counter, reparenting ensures that all items in the corresponding cgroup (on-list or not, because items have a stable cgroup, see below) will see the list_lru_one switch synchronously. Objcg reparenting is also moved after list_lru reparenting, so items have a stable mem cgroup until all list_lru_one instances are drained. The only callers that bypass the *_obj interfaces are direct calls to list_lru_{add,del}, but those are only used by zswap, which is also objcg-based, so this is fine. This also changes the behaviour of the isolation function when LRU_RETRY or LRU_REMOVED_RETRY is returned: releasing the lock can now unblock reparenting and free the list_lru_one, so the isolation function has to return without re-taking the lru lock.
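The locking pattern this relies on can be sketched outside the kernel: take the per-instance lock first, then re-check the LONG_MIN "dead" mark, and fall back to the parent if the instance was reparented in the meantime. Below is a minimal userspace C sketch of that idea only; lru_one, lock_live_lru and the pthread mutexes are illustrative stand-ins for list_lru_one, lock_list_lru_of_memcg and the kernel spinlocks, not the actual API:

/*
 * Minimal userspace sketch of the lock-then-revalidate pattern: a list
 * is "pinned" by taking its lock and confirming it has not been marked
 * dead (nr_items == LONG_MIN) by a concurrent reparenting.
 */
#include <limits.h>
#include <pthread.h>
#include <stdio.h>

struct lru_one {
	pthread_mutex_t lock;
	long nr_items;          /* LONG_MIN once reparented ("dead") */
	struct lru_one *parent; /* where the items were spliced to */
};

/* Pin a live lru_one: return it locked, skipping dead instances. */
static struct lru_one *lock_live_lru(struct lru_one *l)
{
	for (;;) {
		pthread_mutex_lock(&l->lock);
		if (l->nr_items != LONG_MIN)
			return l;               /* still live, lock held */
		pthread_mutex_unlock(&l->lock); /* lost the race, walk up */
		l = l->parent;
	}
}

int main(void)
{
	struct lru_one parent = { PTHREAD_MUTEX_INITIALIZER, 0, NULL };
	struct lru_one child = { PTHREAD_MUTEX_INITIALIZER, LONG_MIN, &parent };
	struct lru_one *l = lock_live_lru(&child);

	/* child was already reparented, so the parent gets locked */
	printf("locked the %s list\n", l == &parent ? "parent" : "child");
	pthread_mutex_unlock(&l->lock);
	return 0;
}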
I see a 25% performance gain for SWAP after this change: modprobe zram echo 128G > /sys/block/zram0/disksize mkswap /dev/zram0; swapon /dev/zram0 for i in $(seq 1 64); do mkdir /sys/fs/cgroup/benchmark-$i echo 64M > /sys/fs/cgroup/benchmark-$i/memory.max done do_test() { for i in $(seq 1 64); do cd /sys/fs/cgroup/benchmark-$i (exec sh -c 'echo "$PPID"') > cgroup.procs memhog 1G & done; wait } time do_test Before: real 0m20.328s user 0m4.315s sys 10m23.639s real 0m20.440s user 0m4.142s sys 10m34.756s real 0m20.381s user 0m4.164s sys 10m29.035s After: real 0m15.156s user 0m4.590s sys 7m34.361s real 0m15.161s user 0m4.776s sys 7m35.086s real 0m15.429s user 0m4.734s sys 7m42.919s Similar performance gain with inode / dentry workload: prepare() { mkdir /tmp/test-fs modprobe brd rd_nr=1 rd_size=16777216 mkfs.xfs -f /dev/ram0 mount -t xfs /dev/ram0 /tmp/test-fs for i in $(seq 1 256); do mkdir "/tmp/test-fs/$i" for j in $(seq 1 10000); do echo TEST-CONTENT > "/tmp/test-fs/$i/$j" done done } do_test() { read_worker() { for j in $(seq 1 10000); do read -r __TMP < "/tmp/test-fs/$1/$j" done } read_in_all() { for i in $(seq 1 256); do (exec sh -c 'echo "$PPID"') > "/sys/fs/cgroup/benchmark/$i/cgroup.procs" read_worker "$i" & done; wait } echo 3 > /proc/sys/vm/drop_caches mkdir -p /sys/fs/cgroup/benchmark echo +memory > /sys/fs/cgroup/benchmark/cgroup.subtree_control echo 512M > /sys/fs/cgroup/benchmark/memory.max for i in $(seq 1 256); do rmdir "/sys/fs/cgroup/benchmark/$i" &>/dev/null mkdir -p "/sys/fs/cgroup/benchmark/$i" done read_in_all } time do_test Before this series: real 0m26.939s user 0m36.322s sys 6m30.248s real 0m15.111s user 0m33.749s sys 5m4.991s real 0m16.796s user 0m33.438s sys 5m22.865s real 0m15.256s user 0m34.060s sys 4m56.870s real 0m14.826s user 0m33.531s sys 4m55.907s real 0m15.664s user 0m35.619s sys 6m3.638s real 0m15.746s user 0m34.066s sys 4m56.519s After this commit (>10% faster): real 0m22.166s user 0m35.155s sys 6m21.045s real 0m13.753s user 0m34.554s sys 4m40.982s real 0m13.815s user 0m34.693s sys 4m39.605s real 0m13.495s user 0m34.372s sys 4m40.776s real 0m13.895s user 0m34.005s sys 4m39.061s real 0m13.629s user 0m33.476s sys 4m43.626s real 0m14.001s user 0m33.463s sys 4m41.261s Signed-off-by: Kairui Song --- drivers/android/binder_alloc.c | 1 - fs/inode.c | 1 - fs/xfs/xfs_qm.c | 1 - include/linux/list_lru.h | 6 +- mm/list_lru.c | 224 +++++++++++++++++++-------------- mm/memcontrol.c | 7 +- mm/workingset.c | 1 - mm/zswap.c | 5 +- 8 files changed, 141 insertions(+), 105 deletions(-) diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c index 2e1f261ec5c8..dd47d621e561 100644 --- a/drivers/android/binder_alloc.c +++ b/drivers/android/binder_alloc.c @@ -1106,7 +1106,6 @@ enum lru_status binder_alloc_free_page(struct list_head *item, mmput_async(mm); __free_page(page_to_free); - spin_lock(lock); return LRU_REMOVED_RETRY; err_invalid_vma: diff --git a/fs/inode.c b/fs/inode.c index 3a41f83a4ba5..35da4e54e365 100644 --- a/fs/inode.c +++ b/fs/inode.c @@ -856,7 +856,6 @@ static enum lru_status inode_lru_isolate(struct list_head *item, mm_account_reclaimed_pages(reap); } iput(inode); - spin_lock(lru_lock); return LRU_RETRY; } diff --git a/fs/xfs/xfs_qm.c b/fs/xfs/xfs_qm.c index 47120b745c47..8d17099765ae 100644 --- a/fs/xfs/xfs_qm.c +++ b/fs/xfs/xfs_qm.c @@ -496,7 +496,6 @@ xfs_qm_dquot_isolate( trace_xfs_dqreclaim_busy(dqp); XFS_STATS_INC(dqp->q_mount, xs_qm_dqreclaim_misses); xfs_dqunlock(dqp); - spin_lock(lru_lock); return LRU_RETRY; } diff --git a/include/linux/list_lru.h
b/include/linux/list_lru.h index 2e5132905f42..b84483ef93a7 100644 --- a/include/linux/list_lru.h +++ b/include/linux/list_lru.h @@ -32,6 +32,8 @@ struct list_lru_one { struct list_head list; /* may become negative during memcg reparenting */ long nr_items; + /* protects all fields above */ + spinlock_t lock; }; struct list_lru_memcg { @@ -41,11 +43,9 @@ struct list_lru_memcg { }; struct list_lru_node { - /* protects all lists on the node, including per cgroup */ - spinlock_t lock; /* global list, used for the root cgroup in cgroup aware lrus */ struct list_lru_one lru; - long nr_items; + atomic_long_t nr_items; } ____cacheline_aligned_in_smp; struct list_lru { diff --git a/mm/list_lru.c b/mm/list_lru.c index ac8aec8451dd..c503921cbb13 100644 --- a/mm/list_lru.c +++ b/mm/list_lru.c @@ -61,18 +61,48 @@ list_lru_from_memcg_idx(struct list_lru *lru, int nid, int idx) } static inline struct list_lru_one * -list_lru_from_memcg(struct list_lru *lru, int nid, struct mem_cgroup *memcg) +lock_list_lru_of_memcg(struct list_lru *lru, int nid, struct mem_cgroup *memcg, + bool irq, bool skip_empty) { struct list_lru_one *l; + rcu_read_lock(); again: l = list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg)); - if (likely(l)) - return l; - - memcg = parent_mem_cgroup(memcg); + if (likely(l)) { + if (irq) + spin_lock_irq(&l->lock); + else + spin_lock(&l->lock); + if (likely(READ_ONCE(l->nr_items) != LONG_MIN)) { + WARN_ON(l->nr_items < 0); + rcu_read_unlock(); + return l; + } + if (irq) + spin_unlock_irq(&l->lock); + else + spin_unlock(&l->lock); + } + /* + * Caller may simply bail out if raced with reparenting or + * may iterate through the list_lru and expect empty slots. + */ + if (skip_empty) { + rcu_read_unlock(); + return NULL; + } WARN_ON(!css_is_dying(&memcg->css)); + memcg = parent_mem_cgroup(memcg); goto again; } + +static inline void unlock_list_lru(struct list_lru_one *l, bool irq_off) +{ + if (irq_off) + spin_unlock_irq(&l->lock); + else + spin_unlock(&l->lock); +} #else static void list_lru_register(struct list_lru *lru) { @@ -99,30 +129,47 @@ list_lru_from_memcg_idx(struct list_lru *lru, int nid, int idx) } static inline struct list_lru_one * -list_lru_from_memcg(struct list_lru *lru, int nid, int idx) +lock_list_lru_of_memcg(struct list_lru *lru, int nid, struct mem_cgroup *memcg, + bool irq, bool skip_empty) { - return &lru->node[nid].lru; + struct list_lru_one *l = &lru->node[nid].lru; + + if (irq) + spin_lock_irq(&l->lock); + else + spin_lock(&l->lock); + + return l; +} + +static inline void unlock_list_lru(struct list_lru_one *l, bool irq_off) +{ + if (irq_off) + spin_unlock_irq(&l->lock); + else + spin_unlock(&l->lock); } #endif /* CONFIG_MEMCG_KMEM */ bool list_lru_add(struct list_lru *lru, struct list_head *item, int nid, - struct mem_cgroup *memcg) + struct mem_cgroup *memcg) { struct list_lru_node *nlru = &lru->node[nid]; struct list_lru_one *l; - spin_lock(&nlru->lock); + l = lock_list_lru_of_memcg(lru, nid, memcg, false, false); + if (!l) + return false; if (list_empty(item)) { - l = list_lru_from_memcg(lru, nid, memcg); list_add_tail(item, &l->list); /* Set shrinker bit if the first element was added */ if (!l->nr_items++) set_shrinker_bit(memcg, nid, lru_shrinker_id(lru)); - nlru->nr_items++; - spin_unlock(&nlru->lock); + unlock_list_lru(l, false); + atomic_long_inc(&nlru->nr_items); return true; } - spin_unlock(&nlru->lock); + unlock_list_lru(l, false); return false; } @@ -137,24 +184,23 @@ bool list_lru_add_obj(struct list_lru *lru, struct list_head *item) 
EXPORT_SYMBOL_GPL(list_lru_add_obj); bool list_lru_del(struct list_lru *lru, struct list_head *item, int nid, - struct mem_cgroup *memcg) + struct mem_cgroup *memcg) { struct list_lru_node *nlru = &lru->node[nid]; struct list_lru_one *l; - - spin_lock(&nlru->lock); + l = lock_list_lru_of_memcg(lru, nid, memcg, false, false); + if (!l) + return false; if (!list_empty(item)) { - l = list_lru_from_memcg(lru, nid, memcg); list_del_init(item); l->nr_items--; - nlru->nr_items--; - spin_unlock(&nlru->lock); + unlock_list_lru(l, false); + atomic_long_dec(&nlru->nr_items); return true; } - spin_unlock(&nlru->lock); + unlock_list_lru(l, false); return false; } -EXPORT_SYMBOL_GPL(list_lru_del); bool list_lru_del_obj(struct list_lru *lru, struct list_head *item) { @@ -204,25 +250,24 @@ unsigned long list_lru_count_node(struct list_lru *lru, int nid) struct list_lru_node *nlru; nlru = &lru->node[nid]; - return nlru->nr_items; + return atomic_long_read(&nlru->nr_items); } EXPORT_SYMBOL_GPL(list_lru_count_node); static unsigned long -__list_lru_walk_one(struct list_lru *lru, int nid, int memcg_idx, +__list_lru_walk_one(struct list_lru *lru, int nid, struct mem_cgroup *memcg, list_lru_walk_cb isolate, void *cb_arg, - unsigned long *nr_to_walk) + unsigned long *nr_to_walk, bool irq_off) { struct list_lru_node *nlru = &lru->node[nid]; - struct list_lru_one *l; + struct list_lru_one *l = NULL; struct list_head *item, *n; unsigned long isolated = 0; restart: - l = list_lru_from_memcg_idx(lru, nid, memcg_idx); + l = lock_list_lru_of_memcg(lru, nid, memcg, irq_off, true); if (!l) - goto out; - + return isolated; list_for_each_safe(item, n, &l->list) { enum lru_status ret; @@ -234,19 +279,20 @@ __list_lru_walk_one(struct list_lru *lru, int nid, int memcg_idx, break; --*nr_to_walk; - ret = isolate(item, l, &nlru->lock, cb_arg); + ret = isolate(item, l, &l->lock, cb_arg); switch (ret) { + /* + * LRU_RETRY and LRU_REMOVED_RETRY will drop the lru lock, + * the list traversal will be invalid and have to restart from + * scratch. + */ + case LRU_RETRY: + goto restart; case LRU_REMOVED_RETRY: - assert_spin_locked(&nlru->lock); fallthrough; case LRU_REMOVED: isolated++; - nlru->nr_items--; - /* - * If the lru lock has been dropped, our list - * traversal is now invalid and so we have to - * restart from scratch. - */ + atomic_long_dec(&nlru->nr_items); if (ret == LRU_REMOVED_RETRY) goto restart; break; @@ -255,21 +301,15 @@ __list_lru_walk_one(struct list_lru *lru, int nid, int memcg_idx, break; case LRU_SKIP: break; - case LRU_RETRY: - /* - * The lru lock has been dropped, our list traversal is - * now invalid and so we have to restart from scratch. 
- */ - assert_spin_locked(&nlru->lock); - goto restart; case LRU_STOP: - assert_spin_locked(&nlru->lock); + assert_spin_locked(&l->lock); goto out; default: BUG(); } } out: + unlock_list_lru(l, irq_off); return isolated; } @@ -278,14 +318,8 @@ list_lru_walk_one(struct list_lru *lru, int nid, struct mem_cgroup *memcg, list_lru_walk_cb isolate, void *cb_arg, unsigned long *nr_to_walk) { - struct list_lru_node *nlru = &lru->node[nid]; - unsigned long ret; - - spin_lock(&nlru->lock); - ret = __list_lru_walk_one(lru, nid, memcg_kmem_id(memcg), isolate, - cb_arg, nr_to_walk); - spin_unlock(&nlru->lock); - return ret; + return __list_lru_walk_one(lru, nid, memcg, isolate, + cb_arg, nr_to_walk, false); } EXPORT_SYMBOL_GPL(list_lru_walk_one); @@ -294,14 +328,8 @@ list_lru_walk_one_irq(struct list_lru *lru, int nid, struct mem_cgroup *memcg, list_lru_walk_cb isolate, void *cb_arg, unsigned long *nr_to_walk) { - struct list_lru_node *nlru = &lru->node[nid]; - unsigned long ret; - - spin_lock_irq(&nlru->lock); - ret = __list_lru_walk_one(lru, nid, memcg_kmem_id(memcg), isolate, - cb_arg, nr_to_walk); - spin_unlock_irq(&nlru->lock); - return ret; + return __list_lru_walk_one(lru, nid, memcg, isolate, + cb_arg, nr_to_walk, true); } unsigned long list_lru_walk_node(struct list_lru *lru, int nid, @@ -316,16 +344,21 @@ unsigned long list_lru_walk_node(struct list_lru *lru, int nid, #ifdef CONFIG_MEMCG_KMEM if (*nr_to_walk > 0 && list_lru_memcg_aware(lru)) { struct list_lru_memcg *mlru; + struct mem_cgroup *memcg; unsigned long index; xa_for_each(&lru->xa, index, mlru) { - struct list_lru_node *nlru = &lru->node[nid]; - - spin_lock(&nlru->lock); - isolated += __list_lru_walk_one(lru, nid, index, + rcu_read_lock(); + memcg = mem_cgroup_from_id(index); + if (!mem_cgroup_tryget(memcg)) { + rcu_read_unlock(); + continue; + } + rcu_read_unlock(); + isolated += __list_lru_walk_one(lru, nid, memcg, isolate, cb_arg, - nr_to_walk); - spin_unlock(&nlru->lock); + nr_to_walk, false); + mem_cgroup_put(memcg); if (*nr_to_walk <= 0) break; @@ -337,14 +370,19 @@ unsigned long list_lru_walk_node(struct list_lru *lru, int nid, } EXPORT_SYMBOL_GPL(list_lru_walk_node); -static void init_one_lru(struct list_lru_one *l) +static void init_one_lru(struct list_lru *lru, struct list_lru_one *l) { INIT_LIST_HEAD(&l->list); + spin_lock_init(&l->lock); l->nr_items = 0; +#ifdef CONFIG_LOCKDEP + if (lru->key) + lockdep_set_class(&l->lock, lru->key); +#endif } #ifdef CONFIG_MEMCG_KMEM -static struct list_lru_memcg *memcg_init_list_lru_one(gfp_t gfp) +static struct list_lru_memcg *memcg_init_list_lru_one(struct list_lru *lru, gfp_t gfp) { int nid; struct list_lru_memcg *mlru; @@ -354,7 +392,7 @@ static struct list_lru_memcg *memcg_init_list_lru_one(gfp_t gfp) return NULL; for_each_node(nid) - init_one_lru(&mlru->node[nid]); + init_one_lru(lru, &mlru->node[nid]); return mlru; } @@ -382,28 +420,27 @@ static void memcg_destroy_list_lru(struct list_lru *lru) xas_unlock_irq(&xas); } -static void memcg_reparent_list_lru_node(struct list_lru *lru, int nid, - struct list_lru_one *src, - struct mem_cgroup *dst_memcg) +static void memcg_reparent_list_lru_one(struct list_lru *lru, int nid, + struct list_lru_one *src, + struct mem_cgroup *dst_memcg) { - struct list_lru_node *nlru = &lru->node[nid]; + int dst_idx = dst_memcg->kmemcg_id; struct list_lru_one *dst; - /* - * Since list_lru_{add,del} may be called under an IRQ-safe lock, - * we have to use IRQ-safe primitives here to avoid deadlock. 
- */ - spin_lock_irq(&nlru->lock); - dst = list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(dst_memcg)); + spin_lock_irq(&src->lock); + dst = list_lru_from_memcg_idx(lru, nid, dst_idx); + spin_lock_nested(&dst->lock, SINGLE_DEPTH_NESTING); list_splice_init(&src->list, &dst->list); - if (src->nr_items) { dst->nr_items += src->nr_items; set_shrinker_bit(dst_memcg, nid, lru_shrinker_id(lru)); - src->nr_items = 0; } - spin_unlock_irq(&nlru->lock); + /* Mark the list_lru_one dead */ + src->nr_items = LONG_MIN; + + spin_unlock(&dst->lock); + spin_unlock_irq(&src->lock); } void memcg_reparent_list_lrus(struct mem_cgroup *memcg, struct mem_cgroup *parent) @@ -422,8 +459,6 @@ void memcg_reparent_list_lrus(struct mem_cgroup *memcg, struct mem_cgroup *paren */ xas_lock_irq(&xas); mlru = xas_load(&xas); - if (mlru) - xas_store(&xas, NULL); xas_unlock_irq(&xas); if (!mlru) continue; @@ -434,13 +469,20 @@ void memcg_reparent_list_lrus(struct mem_cgroup *memcg, struct mem_cgroup *paren * safe to reparent. */ for_each_node(i) - memcg_reparent_list_lru_node(lru, i, &mlru->node[i], parent); + memcg_reparent_list_lru_one(lru, i, &mlru->node[i], parent); /* * Here all list_lrus corresponding to the cgroup are guaranteed * to remain empty, we can safely free this lru, any further * memcg_list_lru_alloc() call will simply bail out. + * + * To ensure callers see a stable list_lru_one, have to set it + * to NULL after memcg_reparent_list_lru_one(). */ + xas_lock_irq(&xas); + xas_reset(&xas); + xas_store(&xas, NULL); + xas_unlock_irq(&xas); kvfree_rcu(mlru, rcu); } mutex_unlock(&list_lrus_mutex); @@ -483,7 +525,7 @@ int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru, parent = parent_mem_cgroup(pos); } - mlru = memcg_init_list_lru_one(gfp); + mlru = memcg_init_list_lru_one(lru, gfp); do { bool alloced = false; @@ -532,14 +574,8 @@ int __list_lru_init(struct list_lru *lru, bool memcg_aware, struct shrinker *shr if (!lru->node) return -ENOMEM; - for_each_node(i) { - spin_lock_init(&lru->node[i].lock); -#ifdef CONFIG_LOCKDEP - if (lru->key) - lockdep_set_class(&lru->node[i].lock, lru->key); -#endif - init_one_lru(&lru->node[i].lru); - } + for_each_node(i) + init_one_lru(lru, &lru->node[i].lru); memcg_init_list_lru(lru, memcg_aware); list_lru_register(lru); diff --git a/mm/memcontrol.c b/mm/memcontrol.c index fc35c1d3f109..945290a53bf1 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -4186,8 +4186,13 @@ static void memcg_offline_kmem(struct mem_cgroup *memcg) if (!parent) parent = root_mem_cgroup; - memcg_reparent_objcgs(memcg, parent); memcg_reparent_list_lrus(memcg, parent); + + /* + * Objcg's reparenting must be after list_lru's, make sure list_lru + * helpers won't use parent's list_lru until child is drained. + */ + memcg_reparent_objcgs(memcg, parent); } #else static int memcg_online_kmem(struct mem_cgroup *memcg) diff --git a/mm/workingset.c b/mm/workingset.c index 1801fbe5183c..947423c3e719 100644 --- a/mm/workingset.c +++ b/mm/workingset.c @@ -769,7 +769,6 @@ static enum lru_status shadow_lru_isolate(struct list_head *item, ret = LRU_REMOVED_RETRY; out: cond_resched(); - spin_lock_irq(lru_lock); return ret; } diff --git a/mm/zswap.c b/mm/zswap.c index c6e2256347ff..f7a2afaeea53 100644 --- a/mm/zswap.c +++ b/mm/zswap.c @@ -720,9 +720,9 @@ static void zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry) * Note that it is safe to use rcu_read_lock() here, even in the face of * concurrent memcg offlining: * - * 1. list_lru_add() is called before list_lru_memcg is erased. 
The + * 1. list_lru_add() is called before list_lru_one is dead. The * new entry will be reparented to memcg's parent's list_lru. - * 2. list_lru_add() is called after list_lru_memcg is erased. The + * 2. list_lru_add() is called after list_lru_one is dead. The * new entry will be added directly to memcg's parent's list_lru. * * Similar reasoning holds for list_lru_del(). @@ -1164,7 +1164,6 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o zswap_written_back_pages++; } - spin_lock(lock); return ret; }
From patchwork Mon Jun 24 17:53:13 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kairui Song X-Patchwork-Id: 13709942 From: Kairui Song To: linux-mm@kvack.org Cc: Andrew Morton , Matthew Wilcox , Johannes Weiner , Roman Gushchin , Waiman Long , Shakeel Butt , Nhat Pham , Michal Hocko , Chengming Zhou , Qi Zheng , Muchun Song , Chris Li , Yosry Ahmed , "Huang, Ying" , Kairui Song Subject: [PATCH 7/7] mm/list_lru: Simplify the list_lru walk callback function Date: Tue, 25 Jun 2024 01:53:13 +0800 Message-ID: <20240624175313.47329-8-ryncsn@gmail.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20240624175313.47329-1-ryncsn@gmail.com> References: <20240624175313.47329-1-ryncsn@gmail.com> Reply-To: Kairui Song MIME-Version: 1.0
From: Kairui Song Now that isolation no longer takes the list_lru global node lock and instead uses only the per-cgroup lock, and that lock lives inside the list_lru_one being walked, there is no longer any need to pass the lock to the callback explicitly.
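As a rough userspace sketch of the simplified contract (walk_cb, lru_one, walk_one and the pthread mutex are made-up stand-ins, not the kernel API): the lock is reachable from the list_lru_one argument itself, so a callback that has to drop it simply uses &lru->lock:

#include <pthread.h>
#include <stdio.h>

enum lru_status { LRU_SKIP, LRU_REMOVED_RETRY };

struct lru_one {
	pthread_mutex_t lock; /* the lock now lives in the list itself */
	int nr_items;
};

/* Old shape was cb(lru, spinlock_t *lock, arg); the lock argument is gone. */
typedef enum lru_status (*walk_cb)(struct lru_one *lru, void *arg);

static enum lru_status drop_one(struct lru_one *lru, void *arg)
{
	(void)arg;
	lru->nr_items--;
	pthread_mutex_unlock(&lru->lock); /* &lru->lock replaces the old argument */
	return LRU_REMOVED_RETRY;
}

static void walk_one(struct lru_one *lru, walk_cb cb, void *arg)
{
	pthread_mutex_lock(&lru->lock);
	if (cb(lru, arg) == LRU_REMOVED_RETRY)
		return; /* the callback already dropped the lock */
	pthread_mutex_unlock(&lru->lock);
}

int main(void)
{
	struct lru_one lru = { PTHREAD_MUTEX_INITIALIZER, 1 };

	walk_one(&lru, drop_one, NULL);
	printf("nr_items = %d\n", lru.nr_items);
	return 0;
}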
Signed-off-by: Kairui Song --- drivers/android/binder_alloc.c | 5 ++--- drivers/android/binder_alloc.h | 2 +- fs/dcache.c | 4 ++-- fs/gfs2/quota.c | 2 +- fs/inode.c | 4 ++-- fs/nfs/nfs42xattr.c | 4 ++-- fs/nfsd/filecache.c | 5 +---- fs/xfs/xfs_buf.c | 2 -- fs/xfs/xfs_qm.c | 5 ++--- include/linux/list_lru.h | 2 +- mm/list_lru.c | 2 +- mm/workingset.c | 15 +++++++-------- mm/zswap.c | 4 ++-- 13 files changed, 24 insertions(+), 32 deletions(-) diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c index dd47d621e561..c55cce54f20c 100644 --- a/drivers/android/binder_alloc.c +++ b/drivers/android/binder_alloc.c @@ -1055,9 +1055,8 @@ void binder_alloc_vma_close(struct binder_alloc *alloc) */ enum lru_status binder_alloc_free_page(struct list_head *item, struct list_lru_one *lru, - spinlock_t *lock, void *cb_arg) - __must_hold(lock) + __must_hold(&lru->lock) { struct binder_lru_page *page = container_of(item, typeof(*page), lru); struct binder_alloc *alloc = page->alloc; @@ -1092,7 +1091,7 @@ enum lru_status binder_alloc_free_page(struct list_head *item, list_lru_isolate(lru, item); spin_unlock(&alloc->lock); - spin_unlock(lock); + spin_unlock(&lru->lock); if (vma) { trace_binder_unmap_user_start(alloc, index); diff --git a/drivers/android/binder_alloc.h b/drivers/android/binder_alloc.h index 70387234477e..c02c8ebcb466 100644 --- a/drivers/android/binder_alloc.h +++ b/drivers/android/binder_alloc.h @@ -118,7 +118,7 @@ static inline void binder_selftest_alloc(struct binder_alloc *alloc) {} #endif enum lru_status binder_alloc_free_page(struct list_head *item, struct list_lru_one *lru, - spinlock_t *lock, void *cb_arg); + void *cb_arg); struct binder_buffer *binder_alloc_new_buf(struct binder_alloc *alloc, size_t data_size, size_t offsets_size, diff --git a/fs/dcache.c b/fs/dcache.c index 407095188f83..4e5f8382ee3f 100644 --- a/fs/dcache.c +++ b/fs/dcache.c @@ -1077,7 +1077,7 @@ void shrink_dentry_list(struct list_head *list) } static enum lru_status dentry_lru_isolate(struct list_head *item, - struct list_lru_one *lru, spinlock_t *lru_lock, void *arg) + struct list_lru_one *lru, void *arg) { struct list_head *freeable = arg; struct dentry *dentry = container_of(item, struct dentry, d_lru); @@ -1158,7 +1158,7 @@ long prune_dcache_sb(struct super_block *sb, struct shrink_control *sc) } static enum lru_status dentry_lru_isolate_shrink(struct list_head *item, - struct list_lru_one *lru, spinlock_t *lru_lock, void *arg) + struct list_lru_one *lru, void *arg) { struct list_head *freeable = arg; struct dentry *dentry = container_of(item, struct dentry, d_lru); diff --git a/fs/gfs2/quota.c b/fs/gfs2/quota.c index aa9cf0102848..31aece125a75 100644 --- a/fs/gfs2/quota.c +++ b/fs/gfs2/quota.c @@ -152,7 +152,7 @@ static void gfs2_qd_list_dispose(struct list_head *list) static enum lru_status gfs2_qd_isolate(struct list_head *item, - struct list_lru_one *lru, spinlock_t *lru_lock, void *arg) + struct list_lru_one *lru, void *arg) { struct list_head *dispose = arg; struct gfs2_quota_data *qd = diff --git a/fs/inode.c b/fs/inode.c index 35da4e54e365..1fb52253a843 100644 --- a/fs/inode.c +++ b/fs/inode.c @@ -803,7 +803,7 @@ void invalidate_inodes(struct super_block *sb) * with this flag set because they are the inodes that are out of order. 
*/ static enum lru_status inode_lru_isolate(struct list_head *item, - struct list_lru_one *lru, spinlock_t *lru_lock, void *arg) + struct list_lru_one *lru, void *arg) { struct list_head *freeable = arg; struct inode *inode = container_of(item, struct inode, i_lru); @@ -845,7 +845,7 @@ static enum lru_status inode_lru_isolate(struct list_head *item, if (inode_has_buffers(inode) || !mapping_empty(&inode->i_data)) { __iget(inode); spin_unlock(&inode->i_lock); - spin_unlock(lru_lock); + spin_unlock(&lru->lock); if (remove_inode_buffers(inode)) { unsigned long reap; reap = invalidate_mapping_pages(&inode->i_data, 0, -1); diff --git a/fs/nfs/nfs42xattr.c b/fs/nfs/nfs42xattr.c index b6e3d8f77b91..37d79400e5f4 100644 --- a/fs/nfs/nfs42xattr.c +++ b/fs/nfs/nfs42xattr.c @@ -802,7 +802,7 @@ static struct shrinker *nfs4_xattr_large_entry_shrinker; static enum lru_status cache_lru_isolate(struct list_head *item, - struct list_lru_one *lru, spinlock_t *lru_lock, void *arg) + struct list_lru_one *lru, void *arg) { struct list_head *dispose = arg; struct inode *inode; @@ -867,7 +867,7 @@ nfs4_xattr_cache_count(struct shrinker *shrink, struct shrink_control *sc) static enum lru_status entry_lru_isolate(struct list_head *item, - struct list_lru_one *lru, spinlock_t *lru_lock, void *arg) + struct list_lru_one *lru, void *arg) { struct list_head *dispose = arg; struct nfs4_xattr_bucket *bucket; diff --git a/fs/nfsd/filecache.c b/fs/nfsd/filecache.c index ad9083ca144b..f68c4a1c529f 100644 --- a/fs/nfsd/filecache.c +++ b/fs/nfsd/filecache.c @@ -456,7 +456,6 @@ void nfsd_file_net_dispose(struct nfsd_net *nn) * nfsd_file_lru_cb - Examine an entry on the LRU list * @item: LRU entry to examine * @lru: controlling LRU - * @lock: LRU list lock (unused) * @arg: dispose list * * Return values: @@ -466,9 +465,7 @@ void nfsd_file_net_dispose(struct nfsd_net *nn) */ static enum lru_status nfsd_file_lru_cb(struct list_head *item, struct list_lru_one *lru, - spinlock_t *lock, void *arg) - __releases(lock) - __acquires(lock) + void *arg) { struct list_head *head = arg; struct nfsd_file *nf = list_entry(item, struct nfsd_file, nf_lru); diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c index aa4dbda7b536..43b914c1f621 100644 --- a/fs/xfs/xfs_buf.c +++ b/fs/xfs/xfs_buf.c @@ -1857,7 +1857,6 @@ static enum lru_status xfs_buftarg_drain_rele( struct list_head *item, struct list_lru_one *lru, - spinlock_t *lru_lock, void *arg) { @@ -1956,7 +1955,6 @@ static enum lru_status xfs_buftarg_isolate( struct list_head *item, struct list_lru_one *lru, - spinlock_t *lru_lock, void *arg) { struct xfs_buf *bp = container_of(item, struct xfs_buf, b_lru); diff --git a/fs/xfs/xfs_qm.c b/fs/xfs/xfs_qm.c index 8d17099765ae..f1b6e73c0e68 100644 --- a/fs/xfs/xfs_qm.c +++ b/fs/xfs/xfs_qm.c @@ -412,9 +412,8 @@ static enum lru_status xfs_qm_dquot_isolate( struct list_head *item, struct list_lru_one *lru, - spinlock_t *lru_lock, void *arg) - __releases(lru_lock) __acquires(lru_lock) + __releases(&lru->lock) __acquires(&lru->lock) { struct xfs_dquot *dqp = container_of(item, struct xfs_dquot, q_lru); @@ -460,7 +459,7 @@ xfs_qm_dquot_isolate( trace_xfs_dqreclaim_dirty(dqp); /* we have to drop the LRU lock to flush the dquot */ - spin_unlock(lru_lock); + spin_unlock(&lru->lock); error = xfs_qm_dqflush(dqp, &bp); if (error) diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h index b84483ef93a7..df6b9374ca68 100644 --- a/include/linux/list_lru.h +++ b/include/linux/list_lru.h @@ -184,7 +184,7 @@ void list_lru_isolate_move(struct list_lru_one 
*list, struct list_head *item, struct list_head *head); typedef enum lru_status (*list_lru_walk_cb)(struct list_head *item, - struct list_lru_one *list, spinlock_t *lock, void *cb_arg); + struct list_lru_one *list, void *cb_arg); /** * list_lru_walk_one: walk a @lru, isolating and disposing freeable items. diff --git a/mm/list_lru.c b/mm/list_lru.c index c503921cbb13..d8d653317c2c 100644 --- a/mm/list_lru.c +++ b/mm/list_lru.c @@ -279,7 +279,7 @@ __list_lru_walk_one(struct list_lru *lru, int nid, struct mem_cgroup *memcg, break; --*nr_to_walk; - ret = isolate(item, l, &l->lock, cb_arg); + ret = isolate(item, l, cb_arg); switch (ret) { /* * LRU_RETRY and LRU_REMOVED_RETRY will drop the lru lock, diff --git a/mm/workingset.c b/mm/workingset.c index 947423c3e719..e3552e7318a5 100644 --- a/mm/workingset.c +++ b/mm/workingset.c @@ -704,8 +704,7 @@ static unsigned long count_shadow_nodes(struct shrinker *shrinker, static enum lru_status shadow_lru_isolate(struct list_head *item, struct list_lru_one *lru, - spinlock_t *lru_lock, - void *arg) __must_hold(lru_lock) + void *arg) __must_hold(lru->lock) { struct xa_node *node = container_of(item, struct xa_node, private_list); struct address_space *mapping; @@ -714,20 +713,20 @@ static enum lru_status shadow_lru_isolate(struct list_head *item, /* * Page cache insertions and deletions synchronously maintain * the shadow node LRU under the i_pages lock and the - * lru_lock. Because the page cache tree is emptied before - * the inode can be destroyed, holding the lru_lock pins any + * &lru->lock. Because the page cache tree is emptied before + * the inode can be destroyed, holding the &lru->lock pins any * address_space that has nodes on the LRU. * * We can then safely transition to the i_pages lock to * pin only the address_space of the particular node we want - * to reclaim, take the node off-LRU, and drop the lru_lock. + * to reclaim, take the node off-LRU, and drop the &lru->lock. */ mapping = container_of(node->array, struct address_space, i_pages); /* Coming from the list, invert the lock order */ if (!xa_trylock(&mapping->i_pages)) { - spin_unlock_irq(lru_lock); + spin_unlock_irq(&lru->lock); ret = LRU_RETRY; goto out; } @@ -736,7 +735,7 @@ static enum lru_status shadow_lru_isolate(struct list_head *item, if (mapping->host != NULL) { if (!spin_trylock(&mapping->host->i_lock)) { xa_unlock(&mapping->i_pages); - spin_unlock_irq(lru_lock); + spin_unlock_irq(&lru->lock); ret = LRU_RETRY; goto out; } @@ -745,7 +744,7 @@ static enum lru_status shadow_lru_isolate(struct list_head *item, list_lru_isolate(lru, item); __dec_node_page_state(virt_to_page(node), WORKINGSET_NODES); - spin_unlock(lru_lock); + spin_unlock(&lru->lock); /* * The nodes should only contain one or more shadow entries, diff --git a/mm/zswap.c b/mm/zswap.c index f7a2afaeea53..24e1e0c87172 100644 --- a/mm/zswap.c +++ b/mm/zswap.c @@ -1097,7 +1097,7 @@ static int zswap_writeback_entry(struct zswap_entry *entry, * shrinker functions **********************************/ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l, - spinlock_t *lock, void *arg) + void *arg) { struct zswap_entry *entry = container_of(item, struct zswap_entry, lru); bool *encountered_page_in_swapcache = (bool *)arg; @@ -1143,7 +1143,7 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o * It's safe to drop the lock here because we return either * LRU_REMOVED_RETRY or LRU_RETRY. 
*/ - spin_unlock(lock); + spin_unlock(&l->lock); writeback_result = zswap_writeback_entry(entry, swpentry);