From patchwork Sat Dec 7 22:15:17 2024
X-Patchwork-Submitter: Yu Zhao
X-Patchwork-Id: 13898422
Date: Sat, 7 Dec 2024 15:15:17 -0700
In-Reply-To: <20241207221522.2250311-1-yuzhao@google.com>
References: <20241207221522.2250311-1-yuzhao@google.com>
Message-ID: <20241207221522.2250311-2-yuzhao@google.com>
Subject: [PATCH mm-unstable v3 1/6] mm/mglru: clean up workingset
From: Yu Zhao
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Yu Zhao, Kalesh Singh
Move VM_BUG_ON_FOLIO() to cover both the default and MGLRU paths. Also
use a pair of rcu_read_lock() and rcu_read_unlock() within each path, to
improve readability. This change should not have any side effects.
Signed-off-by: Yu Zhao
Tested-by: Kalesh Singh
---
 mm/workingset.c | 25 ++++++++++++-------------
 1 file changed, 12 insertions(+), 13 deletions(-)

diff --git a/mm/workingset.c b/mm/workingset.c
index a4705e196545..ad181d1b8cf1 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -428,17 +428,17 @@ bool workingset_test_recent(void *shadow, bool file, bool *workingset,
 	struct pglist_data *pgdat;
 	unsigned long eviction;
 
-	rcu_read_lock();
-
 	if (lru_gen_enabled()) {
-		bool recent = lru_gen_test_recent(shadow, file,
-						  &eviction_lruvec, &eviction, workingset);
+		bool recent;
 
+		rcu_read_lock();
+		recent = lru_gen_test_recent(shadow, file, &eviction_lruvec,
+					     &eviction, workingset);
 		rcu_read_unlock();
 		return recent;
 	}
 
+	rcu_read_lock();
 	unpack_shadow(shadow, &memcgid, &pgdat, &eviction, workingset);
 	eviction <<= bucket_order;
 
@@ -459,14 +459,12 @@ bool workingset_test_recent(void *shadow, bool file, bool *workingset,
 	 * configurations instead.
	 */
 	eviction_memcg = mem_cgroup_from_id(memcgid);
-	if (!mem_cgroup_disabled() &&
-	    (!eviction_memcg || !mem_cgroup_tryget(eviction_memcg))) {
-		rcu_read_unlock();
+	if (!mem_cgroup_tryget(eviction_memcg))
+		eviction_memcg = NULL;
+	rcu_read_unlock();
+
+	if (!mem_cgroup_disabled() && !eviction_memcg)
 		return false;
-	}
-
-	rcu_read_unlock();
-
 	/*
 	 * Flush stats (and potentially sleep) outside the RCU read section.
 	 *
@@ -544,6 +542,8 @@ void workingset_refault(struct folio *folio, void *shadow)
 	bool workingset;
 	long nr;
 
+	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
+
 	if (lru_gen_enabled()) {
 		lru_gen_refault(folio, shadow);
 		return;
@@ -558,7 +558,6 @@ void workingset_refault(struct folio *folio, void *shadow)
 	 * is actually experiencing the refault event. Make sure the folio is
 	 * locked to guarantee folio_memcg() stability throughout.
 	 */
-	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
 	nr = folio_nr_pages(folio);
 	memcg = folio_memcg(folio);
 	pgdat = folio_pgdat(folio);

From patchwork Sat Dec 7 22:15:18 2024
X-Patchwork-Submitter: Yu Zhao
X-Patchwork-Id: 13898423
Date: Sat, 7 Dec 2024 15:15:18 -0700
In-Reply-To: <20241207221522.2250311-1-yuzhao@google.com>
References: <20241207221522.2250311-1-yuzhao@google.com>
Message-ID: <20241207221522.2250311-3-yuzhao@google.com>
Subject: [PATCH mm-unstable v3 2/6] mm/mglru: optimize deactivation
From: Yu Zhao
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Yu Zhao, Bharata B Rao, Kalesh Singh
Do not shuffle a folio in the deactivation paths if it is already in the
oldest generation. This reduces the LRU lock contention.
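The bail-out can be illustrated with a small userspace C model. All names below (`toy_folio`, `toy_min_seq`, `toy_deactivate_file_folio`) are hypothetical stand-ins, not kernel APIs: a folio whose generation already equals the oldest sequence number is left alone, so the contended LRU lock is never taken for it.

```c
/*
 * Toy model of the early bail-out: a folio already in the oldest
 * generation (gen == min_seq) is left where it is, so the simulated
 * LRU lock is never taken for it. Illustrative names only.
 */
struct toy_folio {
	unsigned long gen;
};

static unsigned long toy_min_seq = 4;	/* oldest generation */
static int toy_lock_acquisitions;	/* counts simulated LRU lock use */

static void toy_deactivate_file_folio(struct toy_folio *folio)
{
	if (folio->gen == toy_min_seq)
		return;			/* already oldest: skip the LRU lock */

	toy_lock_acquisitions++;	/* stand-in for the locked batch move */
	folio->gen = toy_min_seq;
}
```

A second call on the same folio returns before touching the lock counter, which is exactly the contention the perf profile below attributes to folio_batch_move_lru().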
Before this patch, the contention is reproducible by FIO, e.g.,

  fio -filename=/dev/nvme1n1p2 -direct=0 -thread -size=1024G \
      -rwmixwrite=30 --norandommap --randrepeat=0 -ioengine=sync \
      -bs=4k -numjobs=400 -runtime=25000 --time_based \
      -group_reporting -name=mglru

  98.96%--_raw_spin_lock_irqsave
          folio_lruvec_lock_irqsave
          |
           --98.78%--folio_batch_move_lru
                     |
                      --98.63%--deactivate_file_folio
                                mapping_try_invalidate
                                invalidate_mapping_pages
                                invalidate_bdev
                                blkdev_common_ioctl
                                blkdev_ioctl

After this patch, deactivate_file_folio() bails out early without taking
the LRU lock. A side effect is that a folio can be left at the head of
the oldest generation, rather than the tail. If reclaim happens at the
same time, it cannot reclaim this folio immediately. Since there is no
known correlation between truncation and reclaim, this side effect is
considered insignificant.

Reported-by: Bharata B Rao
Closes: https://lore.kernel.org/CAOUHufawNerxqLm7L9Yywp3HJFiYVrYO26ePUb1jH-qxNGWzyA@mail.gmail.com/
Signed-off-by: Yu Zhao
Tested-by: Kalesh Singh
---
 mm/swap.c | 49 +++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 41 insertions(+), 8 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index 3a01acfd5a89..756b6c5b9af7 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -379,11 +379,12 @@ static void __lru_cache_activate_folio(struct folio *folio)
 }
 
 #ifdef CONFIG_LRU_GEN
-static void folio_inc_refs(struct folio *folio)
+
+static void lru_gen_inc_refs(struct folio *folio)
 {
 	unsigned long new_flags, old_flags = READ_ONCE(folio->flags);
 
-	if (folio_test_unevictable(folio))
+	if (!folio_test_lru(folio) || folio_test_unevictable(folio))
 		return;
 
 	if (!folio_test_referenced(folio)) {
@@ -406,10 +407,33 @@ static void folio_inc_refs(struct folio *folio)
 		new_flags |= old_flags & ~LRU_REFS_MASK;
 	} while (!try_cmpxchg(&folio->flags, &old_flags, new_flags));
 }
-#else
-static void folio_inc_refs(struct folio *folio)
+
+static bool lru_gen_clear_refs(struct folio *folio)
 {
+	struct lru_gen_folio *lrugen;
+	int type = folio_is_file_lru(folio);
+
+	if (!folio_test_lru(folio) || folio_test_unevictable(folio))
+		return true;
+
+	set_mask_bits(&folio->flags, LRU_REFS_MASK | LRU_REFS_FLAGS, 0);
+
+	lrugen = &folio_lruvec(folio)->lrugen;
+	/* whether can do without shuffling under the LRU lock */
+	return folio_lru_gen(folio) == lru_gen_from_seq(READ_ONCE(lrugen->min_seq[type]));
 }
+
+#else /* !CONFIG_LRU_GEN */
+
+static void lru_gen_inc_refs(struct folio *folio)
+{
+}
+
+static bool lru_gen_clear_refs(struct folio *folio)
+{
+	return false;
+}
+
 #endif /* CONFIG_LRU_GEN */
 
 /**
@@ -428,7 +452,7 @@ static void folio_inc_refs(struct folio *folio)
 void folio_mark_accessed(struct folio *folio)
 {
 	if (lru_gen_enabled()) {
-		folio_inc_refs(folio);
+		lru_gen_inc_refs(folio);
 		return;
 	}
 
@@ -524,7 +548,7 @@ void folio_add_lru_vma(struct folio *folio, struct vm_area_struct *vma)
  */
 static void lru_deactivate_file(struct lruvec *lruvec, struct folio *folio)
 {
-	bool active = folio_test_active(folio);
+	bool active = folio_test_active(folio) || lru_gen_enabled();
 	long nr_pages = folio_nr_pages(folio);
 
 	if (folio_test_unevictable(folio))
@@ -589,7 +613,10 @@ static void lru_lazyfree(struct lruvec *lruvec, struct folio *folio)
 		lruvec_del_folio(lruvec, folio);
 		folio_clear_active(folio);
-		folio_clear_referenced(folio);
+		if (lru_gen_enabled())
+			lru_gen_clear_refs(folio);
+		else
+			folio_clear_referenced(folio);
 		/*
 		 * Lazyfree folios are clean anonymous folios. They have
 		 * the swapbacked flag cleared, to distinguish them from normal
@@ -657,6 +684,9 @@ void deactivate_file_folio(struct folio *folio)
 	if (folio_test_unevictable(folio))
 		return;
 
+	if (lru_gen_enabled() && lru_gen_clear_refs(folio))
+		return;
+
 	folio_batch_add_and_move(folio, lru_deactivate_file, true);
 }
 
@@ -670,7 +700,10 @@ void deactivate_file_folio(struct folio *folio)
  */
 void folio_deactivate(struct folio *folio)
 {
-	if (folio_test_unevictable(folio) || !(folio_test_active(folio) || lru_gen_enabled()))
+	if (folio_test_unevictable(folio))
+		return;
+
+	if (lru_gen_enabled() ? lru_gen_clear_refs(folio) : !folio_test_active(folio))
 		return;
 
 	folio_batch_add_and_move(folio, lru_deactivate, true);

From patchwork Sat Dec 7 22:15:19 2024
X-Patchwork-Submitter: Yu Zhao
X-Patchwork-Id: 13898424
Date: Sat, 7 Dec 2024 15:15:19 -0700
In-Reply-To: <20241207221522.2250311-1-yuzhao@google.com>
References: <20241207221522.2250311-1-yuzhao@google.com>
Message-ID: <20241207221522.2250311-4-yuzhao@google.com>
Subject: [PATCH mm-unstable v3 3/6] mm/mglru: rework aging feedback
From: Yu Zhao
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Yu Zhao, David Stevens, Kalesh Singh
The aging feedback is based on both the number of generations and the
distribution of folios in each generation. The number of generations is
currently the distance between max_seq and anon min_seq. This is because
anon min_seq is not allowed to move past file min_seq. The rationale for
that is that file is always evictable whereas anon is not. However, for
use cases where anon is a lot cheaper than file:
1. Anon in the second oldest generation can be a better choice than file
   in the oldest generation.
2. A large amount of file in the oldest generation can skew the
   distribution, making should_run_aging() return false negative.

Allow anon and file min_seq to move independently, and use solely the
number of generations as the feedback for aging. Specifically, when both
anon and file are evictable, anon min_seq can now be greater than file
min_seq, and therefore the number of generations becomes the distance
between max_seq and min(min_seq[0], min_seq[1]). And should_run_aging()
returns true if and only if the number of generations is less than
MAX_NR_GENS.

As the first step to the final optimization, this change by itself
should not have userspace-visible effects beyond performance.
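The new lower bound can be sketched as a minimal userspace model of the evictable_min_seq() macro this patch adds to mm/vmscan.c. Assumptions: MAX_SWAPPINESS is 200, and index 0 is anon, index 1 is file; this is a sketch, not the kernel code.

```c
#define MAX_SWAPPINESS 200	/* assumed kernel value */

/*
 * Userspace model of evictable_min_seq(): with swappiness 0 only file
 * (index 1) is evictable, with MAX_SWAPPINESS only anon (index 0);
 * otherwise the smaller of the two min_seqs is the effective floor.
 */
static unsigned long evictable_min_seq(const unsigned long min_seq[2], int swappiness)
{
	unsigned long a = min_seq[!swappiness];
	unsigned long b = min_seq[swappiness != MAX_SWAPPINESS];

	return a < b ? a : b;
}
```

The number of generations fed back to the aging is then the distance between max_seq and this value, so a lagging file min_seq no longer pins anon min_seq when anon is cheap to evict.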
The next two patches will take advantage of this change; the last patch
in this series will better distribute folios across MAX_NR_GENS.

Reported-by: David Stevens
Signed-off-by: Yu Zhao
Tested-by: Kalesh Singh
---
 include/linux/mmzone.h |   6 +-
 mm/vmscan.c            | 217 +++++++++++++++++------------------
 2 files changed, 91 insertions(+), 132 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index b36124145a16..b998ccc5c341 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -446,8 +446,8 @@ struct lru_gen_folio {
 	unsigned long avg_refaulted[ANON_AND_FILE][MAX_NR_TIERS];
 	/* the exponential moving average of evicted+protected */
 	unsigned long avg_total[ANON_AND_FILE][MAX_NR_TIERS];
-	/* the first tier doesn't need protection, hence the minus one */
-	unsigned long protected[NR_HIST_GENS][ANON_AND_FILE][MAX_NR_TIERS - 1];
+	/* can only be modified under the LRU lock */
+	unsigned long protected[NR_HIST_GENS][ANON_AND_FILE][MAX_NR_TIERS];
 	/* can be modified without holding the LRU lock */
 	atomic_long_t evicted[NR_HIST_GENS][ANON_AND_FILE][MAX_NR_TIERS];
 	atomic_long_t refaulted[NR_HIST_GENS][ANON_AND_FILE][MAX_NR_TIERS];
@@ -498,7 +498,7 @@ struct lru_gen_mm_walk {
 	int mm_stats[NR_MM_STATS];
 	/* total batched items */
 	int batched;
-	bool can_swap;
+	int swappiness;
 	bool force_scan;
 };

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 2a8db048d581..00a5aff3db42 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2623,11 +2623,17 @@ static bool should_clear_pmd_young(void)
 		READ_ONCE((lruvec)->lrugen.min_seq[LRU_GEN_FILE]),	\
 	}
 
+#define evictable_min_seq(min_seq, swappiness)				\
+	min((min_seq)[!(swappiness)], (min_seq)[(swappiness) != MAX_SWAPPINESS])
+
 #define for_each_gen_type_zone(gen, type, zone)				\
 	for ((gen) = 0; (gen) < MAX_NR_GENS; (gen)++)			\
 		for ((type) = 0; (type) < ANON_AND_FILE; (type)++)	\
 			for ((zone) = 0; (zone) < MAX_NR_ZONES; (zone)++)
 
+#define for_each_evictable_type(type, swappiness)			\
+	for ((type) = !(swappiness); (type) <= ((swappiness) != MAX_SWAPPINESS); (type)++)
+
 #define get_memcg_gen(seq)	((seq) % MEMCG_NR_GENS)
 #define get_memcg_bin(bin)	((bin) % MEMCG_NR_BINS)
 
@@ -2673,10 +2679,16 @@ static int get_nr_gens(struct lruvec *lruvec, int type)
 
 static bool __maybe_unused seq_is_valid(struct lruvec *lruvec)
 {
-	/* see the comment on lru_gen_folio */
-	return get_nr_gens(lruvec, LRU_GEN_FILE) >= MIN_NR_GENS &&
-	       get_nr_gens(lruvec, LRU_GEN_FILE) <= get_nr_gens(lruvec, LRU_GEN_ANON) &&
-	       get_nr_gens(lruvec, LRU_GEN_ANON) <= MAX_NR_GENS;
+	int type;
+
+	for (type = 0; type < ANON_AND_FILE; type++) {
+		int n = get_nr_gens(lruvec, type);
+
+		if (n < MIN_NR_GENS || n > MAX_NR_GENS)
+			return false;
+	}
+
+	return true;
 }
 
 /******************************************************************************
@@ -3083,9 +3095,8 @@ static void read_ctrl_pos(struct lruvec *lruvec, int type, int tier, int gain,
 	pos->refaulted = lrugen->avg_refaulted[type][tier] +
 			 atomic_long_read(&lrugen->refaulted[hist][type][tier]);
 	pos->total = lrugen->avg_total[type][tier] +
+		     lrugen->protected[hist][type][tier] +
 		     atomic_long_read(&lrugen->evicted[hist][type][tier]);
-	if (tier)
-		pos->total += lrugen->protected[hist][type][tier - 1];
 	pos->gain = gain;
 }
 
@@ -3112,17 +3123,15 @@ static void reset_ctrl_pos(struct lruvec *lruvec, int type, bool carryover)
 			WRITE_ONCE(lrugen->avg_refaulted[type][tier], sum / 2);
 
 			sum = lrugen->avg_total[type][tier] +
+			      lrugen->protected[hist][type][tier] +
 			      atomic_long_read(&lrugen->evicted[hist][type][tier]);
-			if (tier)
-				sum += lrugen->protected[hist][type][tier - 1];
 			WRITE_ONCE(lrugen->avg_total[type][tier], sum / 2);
 		}
 
 		if (clear) {
 			atomic_long_set(&lrugen->refaulted[hist][type][tier], 0);
 			atomic_long_set(&lrugen->evicted[hist][type][tier], 0);
-			if (tier)
-				WRITE_ONCE(lrugen->protected[hist][type][tier - 1], 0);
+			WRITE_ONCE(lrugen->protected[hist][type][tier], 0);
 		}
 	}
 }
 
@@ -3257,7 +3266,7 @@ static int should_skip_vma(unsigned long start, unsigned long end, struct mm_wal
 		return true;
 
 	if (vma_is_anonymous(vma))
-		return !walk->can_swap;
+		return !walk->swappiness;
 
 	if (WARN_ON_ONCE(!vma->vm_file || !vma->vm_file->f_mapping))
 		return true;
 
@@ -3267,7 +3276,10 @@ static int should_skip_vma(unsigned long start, unsigned long end, struct mm_wal
 		return true;
 
 	if (shmem_mapping(mapping))
-		return !walk->can_swap;
+		return !walk->swappiness;
+
+	if (walk->swappiness == MAX_SWAPPINESS)
+		return true;
 
 	/* to exclude special mappings like dax, etc. */
 	return !mapping->a_ops->read_folio;
 
@@ -3355,7 +3367,7 @@ static unsigned long get_pmd_pfn(pmd_t pmd, struct vm_area_struct *vma, unsigned
 }
 
 static struct folio *get_pfn_folio(unsigned long pfn, struct mem_cgroup *memcg,
-				   struct pglist_data *pgdat, bool can_swap)
+				   struct pglist_data *pgdat)
 {
 	struct folio *folio;
 
@@ -3366,10 +3378,6 @@ static struct folio *get_pfn_folio(unsigned long pfn, struct mem_cgroup *memcg,
 	if (folio_memcg(folio) != memcg)
 		return NULL;
 
-	/* file VMAs can contain anon pages from COW */
-	if (!folio_is_file_lru(folio) && !can_swap)
-		return NULL;
-
 	return folio;
 }
 
@@ -3425,7 +3433,7 @@ static bool walk_pte_range(pmd_t *pmd, unsigned long start, unsigned long end,
 		if (pfn == -1)
 			continue;
 
-		folio = get_pfn_folio(pfn, memcg, pgdat, walk->can_swap);
+		folio = get_pfn_folio(pfn, memcg, pgdat);
 		if (!folio)
 			continue;
 
@@ -3510,7 +3518,7 @@ static void walk_pmd_range_locked(pud_t *pud, unsigned long addr, struct vm_area
 		if (pfn == -1)
 			goto next;
 
-		folio = get_pfn_folio(pfn, memcg, pgdat, walk->can_swap);
+		folio = get_pfn_folio(pfn, memcg, pgdat);
 		if (!folio)
 			goto next;
 
@@ -3722,22 +3730,26 @@ static void clear_mm_walk(void)
 	kfree(walk);
 }
 
-static bool inc_min_seq(struct lruvec *lruvec, int type, bool can_swap)
+static bool inc_min_seq(struct lruvec *lruvec, int type, int swappiness)
 {
 	int zone;
 	int remaining = MAX_LRU_BATCH;
 	struct lru_gen_folio *lrugen = &lruvec->lrugen;
+	int hist = lru_hist_from_seq(lrugen->min_seq[type]);
 	int new_gen, old_gen = lru_gen_from_seq(lrugen->min_seq[type]);
- if (type == LRU_GEN_ANON && !can_swap) + if (type ? swappiness == MAX_SWAPPINESS : !swappiness) goto done; - /* prevent cold/hot inversion if force_scan is true */ + /* prevent cold/hot inversion if the type is evictable */ for (zone = 0; zone < MAX_NR_ZONES; zone++) { struct list_head *head = &lrugen->folios[old_gen][type][zone]; while (!list_empty(head)) { struct folio *folio = lru_to_folio(head); + int refs = folio_lru_refs(folio); + int tier = lru_tier_from_refs(refs); + int delta = folio_nr_pages(folio); VM_WARN_ON_ONCE_FOLIO(folio_test_unevictable(folio), folio); VM_WARN_ON_ONCE_FOLIO(folio_test_active(folio), folio); @@ -3747,6 +3759,9 @@ static bool inc_min_seq(struct lruvec *lruvec, int type, bool can_swap) new_gen = folio_inc_gen(lruvec, folio, false); list_move_tail(&folio->lru, &lrugen->folios[new_gen][type][zone]); + WRITE_ONCE(lrugen->protected[hist][type][tier], + lrugen->protected[hist][type][tier] + delta); + if (!--remaining) return false; } @@ -3758,51 +3773,37 @@ static bool inc_min_seq(struct lruvec *lruvec, int type, bool can_swap) return true; } -static bool try_to_inc_min_seq(struct lruvec *lruvec, bool can_swap) +static bool try_to_inc_min_seq(struct lruvec *lruvec, int swappiness) { int gen, type, zone; bool success = false; struct lru_gen_folio *lrugen = &lruvec->lrugen; - DEFINE_MIN_SEQ(lruvec); VM_WARN_ON_ONCE(!seq_is_valid(lruvec)); - /* find the oldest populated generation */ - for (type = !can_swap; type < ANON_AND_FILE; type++) { - while (min_seq[type] + MIN_NR_GENS <= lrugen->max_seq) { - gen = lru_gen_from_seq(min_seq[type]); + for_each_evictable_type(type, swappiness) { + unsigned long seq; + + for (seq = lrugen->min_seq[type]; seq + MIN_NR_GENS <= lrugen->max_seq; seq++) { + gen = lru_gen_from_seq(seq); for (zone = 0; zone < MAX_NR_ZONES; zone++) { if (!list_empty(&lrugen->folios[gen][type][zone])) goto next; } - - min_seq[type]++; } next: - ; - } - - /* see the comment on lru_gen_folio */ - if (can_swap) { - 
min_seq[LRU_GEN_ANON] = min(min_seq[LRU_GEN_ANON], min_seq[LRU_GEN_FILE]); - min_seq[LRU_GEN_FILE] = max(min_seq[LRU_GEN_ANON], lrugen->min_seq[LRU_GEN_FILE]); - } - - for (type = !can_swap; type < ANON_AND_FILE; type++) { - if (min_seq[type] == lrugen->min_seq[type]) - continue; - - reset_ctrl_pos(lruvec, type, true); - WRITE_ONCE(lrugen->min_seq[type], min_seq[type]); - success = true; + if (seq != lrugen->min_seq[type]) { + reset_ctrl_pos(lruvec, type, true); + WRITE_ONCE(lrugen->min_seq[type], seq); + success = true; + } } return success; } -static bool inc_max_seq(struct lruvec *lruvec, unsigned long seq, - bool can_swap, bool force_scan) +static bool inc_max_seq(struct lruvec *lruvec, unsigned long seq, int swappiness) { bool success; int prev, next; @@ -3820,13 +3821,11 @@ static bool inc_max_seq(struct lruvec *lruvec, unsigned long seq, if (!success) goto unlock; - for (type = ANON_AND_FILE - 1; type >= 0; type--) { + for (type = 0; type < ANON_AND_FILE; type++) { if (get_nr_gens(lruvec, type) != MAX_NR_GENS) continue; - VM_WARN_ON_ONCE(!force_scan && (type == LRU_GEN_FILE || can_swap)); - - if (inc_min_seq(lruvec, type, can_swap)) + if (inc_min_seq(lruvec, type, swappiness)) continue; spin_unlock_irq(&lruvec->lru_lock); @@ -3870,7 +3869,7 @@ static bool inc_max_seq(struct lruvec *lruvec, unsigned long seq, } static bool try_to_inc_max_seq(struct lruvec *lruvec, unsigned long seq, - bool can_swap, bool force_scan) + int swappiness, bool force_scan) { bool success; struct lru_gen_mm_walk *walk; @@ -3881,7 +3880,7 @@ static bool try_to_inc_max_seq(struct lruvec *lruvec, unsigned long seq, VM_WARN_ON_ONCE(seq > READ_ONCE(lrugen->max_seq)); if (!mm_state) - return inc_max_seq(lruvec, seq, can_swap, force_scan); + return inc_max_seq(lruvec, seq, swappiness); /* see the comment in iterate_mm_list() */ if (seq <= READ_ONCE(mm_state->seq)) @@ -3906,7 +3905,7 @@ static bool try_to_inc_max_seq(struct lruvec *lruvec, unsigned long seq, walk->lruvec = lruvec; walk->seq 
= seq; - walk->can_swap = can_swap; + walk->swappiness = swappiness; walk->force_scan = force_scan; do { @@ -3916,7 +3915,7 @@ static bool try_to_inc_max_seq(struct lruvec *lruvec, unsigned long seq, } while (mm); done: if (success) { - success = inc_max_seq(lruvec, seq, can_swap, force_scan); + success = inc_max_seq(lruvec, seq, swappiness); WARN_ON_ONCE(!success); } @@ -3957,13 +3956,13 @@ static bool lruvec_is_sizable(struct lruvec *lruvec, struct scan_control *sc) { int gen, type, zone; unsigned long total = 0; - bool can_swap = get_swappiness(lruvec, sc); + int swappiness = get_swappiness(lruvec, sc); struct lru_gen_folio *lrugen = &lruvec->lrugen; struct mem_cgroup *memcg = lruvec_memcg(lruvec); DEFINE_MAX_SEQ(lruvec); DEFINE_MIN_SEQ(lruvec); - for (type = !can_swap; type < ANON_AND_FILE; type++) { + for_each_evictable_type(type, swappiness) { unsigned long seq; for (seq = min_seq[type]; seq <= max_seq; seq++) { @@ -3983,6 +3982,7 @@ static bool lruvec_is_reclaimable(struct lruvec *lruvec, struct scan_control *sc { int gen; unsigned long birth; + int swappiness = get_swappiness(lruvec, sc); struct mem_cgroup *memcg = lruvec_memcg(lruvec); DEFINE_MIN_SEQ(lruvec); @@ -3992,8 +3992,7 @@ static bool lruvec_is_reclaimable(struct lruvec *lruvec, struct scan_control *sc if (!lruvec_is_sizable(lruvec, sc)) return false; - /* see the comment on lru_gen_folio */ - gen = lru_gen_from_seq(min_seq[LRU_GEN_FILE]); + gen = lru_gen_from_seq(evictable_min_seq(min_seq, swappiness)); birth = READ_ONCE(lruvec->lrugen.timestamps[gen]); return time_is_before_jiffies(birth + min_ttl); @@ -4060,7 +4059,6 @@ bool lru_gen_look_around(struct page_vma_mapped_walk *pvmw) unsigned long addr = pvmw->address; struct vm_area_struct *vma = pvmw->vma; struct folio *folio = pfn_folio(pvmw->pfn); - bool can_swap = !folio_is_file_lru(folio); struct mem_cgroup *memcg = folio_memcg(folio); struct pglist_data *pgdat = folio_pgdat(folio); struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat); @@ 
-4113,7 +4111,7 @@ bool lru_gen_look_around(struct page_vma_mapped_walk *pvmw) if (pfn == -1) continue; - folio = get_pfn_folio(pfn, memcg, pgdat, can_swap); + folio = get_pfn_folio(pfn, memcg, pgdat); if (!folio) continue; @@ -4329,8 +4327,8 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c gen = folio_inc_gen(lruvec, folio, false); list_move_tail(&folio->lru, &lrugen->folios[gen][type][zone]); - WRITE_ONCE(lrugen->protected[hist][type][tier - 1], - lrugen->protected[hist][type][tier - 1] + delta); + WRITE_ONCE(lrugen->protected[hist][type][tier], + lrugen->protected[hist][type][tier] + delta); return true; } @@ -4529,7 +4527,6 @@ static int isolate_folios(struct lruvec *lruvec, struct scan_control *sc, int sw { int i; int type; - int scanned; int tier = -1; DEFINE_MIN_SEQ(lruvec); @@ -4554,21 +4551,23 @@ static int isolate_folios(struct lruvec *lruvec, struct scan_control *sc, int sw else type = get_type_to_scan(lruvec, swappiness, &tier); - for (i = !swappiness; i < ANON_AND_FILE; i++) { + for_each_evictable_type(i, swappiness) { + int scanned; + if (tier < 0) tier = get_tier_idx(lruvec, type); + *type_scanned = type; + scanned = scan_folios(lruvec, sc, type, tier, list); if (scanned) - break; + return scanned; type = !type; tier = -1; } - *type_scanned = type; - - return scanned; + return 0; } static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swappiness) @@ -4584,6 +4583,7 @@ static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swap struct reclaim_stat stat; struct lru_gen_mm_walk *walk; bool skip_retry = false; + struct lru_gen_folio *lrugen = &lruvec->lrugen; struct mem_cgroup *memcg = lruvec_memcg(lruvec); struct pglist_data *pgdat = lruvec_pgdat(lruvec); @@ -4593,7 +4593,7 @@ static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swap scanned += try_to_inc_min_seq(lruvec, swappiness); - if (get_nr_gens(lruvec, !swappiness) == MIN_NR_GENS) + if 
(evictable_min_seq(lrugen->min_seq, swappiness) + MIN_NR_GENS > lrugen->max_seq) scanned = 0; spin_unlock_irq(&lruvec->lru_lock); @@ -4665,63 +4665,32 @@ static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swap } static bool should_run_aging(struct lruvec *lruvec, unsigned long max_seq, - bool can_swap, unsigned long *nr_to_scan) + int swappiness, unsigned long *nr_to_scan) { int gen, type, zone; - unsigned long old = 0; - unsigned long young = 0; - unsigned long total = 0; + unsigned long size = 0; struct lru_gen_folio *lrugen = &lruvec->lrugen; DEFINE_MIN_SEQ(lruvec); - /* whether this lruvec is completely out of cold folios */ - if (min_seq[!can_swap] + MIN_NR_GENS > max_seq) { - *nr_to_scan = 0; + *nr_to_scan = 0; + /* have to run aging, since eviction is not possible anymore */ + if (evictable_min_seq(min_seq, swappiness) + MIN_NR_GENS > max_seq) return true; - } - for (type = !can_swap; type < ANON_AND_FILE; type++) { + for_each_evictable_type(type, swappiness) { unsigned long seq; for (seq = min_seq[type]; seq <= max_seq; seq++) { - unsigned long size = 0; - gen = lru_gen_from_seq(seq); for (zone = 0; zone < MAX_NR_ZONES; zone++) size += max(READ_ONCE(lrugen->nr_pages[gen][type][zone]), 0L); - - total += size; - if (seq == max_seq) - young += size; - else if (seq + MIN_NR_GENS == max_seq) - old += size; } } - *nr_to_scan = total; - - /* - * The aging tries to be lazy to reduce the overhead, while the eviction - * stalls when the number of generations reaches MIN_NR_GENS. Hence, the - * ideal number of generations is MIN_NR_GENS+1. - */ - if (min_seq[!can_swap] + MIN_NR_GENS < max_seq) - return false; - - /* - * It's also ideal to spread pages out evenly, i.e., 1/(MIN_NR_GENS+1) - * of the total number of pages for each generation. A reasonable range - * for this average portion is [1/MIN_NR_GENS, 1/(MIN_NR_GENS+2)]. 
The - * aging cares about the upper bound of hot pages, while the eviction - * cares about the lower bound of cold pages. - */ - if (young * MIN_NR_GENS > total) - return true; - if (old * (MIN_NR_GENS + 2) < total) - return true; - - return false; + *nr_to_scan = size; + /* better to run aging even though eviction is still possible */ + return evictable_min_seq(min_seq, swappiness) + MIN_NR_GENS == max_seq; } /* @@ -4729,7 +4698,7 @@ static bool should_run_aging(struct lruvec *lruvec, unsigned long max_seq, * 1. Defer try_to_inc_max_seq() to workqueues to reduce latency for memcg * reclaim. */ -static long get_nr_to_scan(struct lruvec *lruvec, struct scan_control *sc, bool can_swap) +static long get_nr_to_scan(struct lruvec *lruvec, struct scan_control *sc, int swappiness) { bool success; unsigned long nr_to_scan; @@ -4739,7 +4708,7 @@ static long get_nr_to_scan(struct lruvec *lruvec, struct scan_control *sc, bool if (mem_cgroup_below_min(sc->target_mem_cgroup, memcg)) return -1; - success = should_run_aging(lruvec, max_seq, can_swap, &nr_to_scan); + success = should_run_aging(lruvec, max_seq, swappiness, &nr_to_scan); /* try to scrape all its memory if this memcg was deleted */ if (nr_to_scan && !mem_cgroup_online(memcg)) @@ -4750,7 +4719,7 @@ static long get_nr_to_scan(struct lruvec *lruvec, struct scan_control *sc, bool return nr_to_scan >> sc->priority; /* stop scanning this lruvec as it's low on cold folios */ - return try_to_inc_max_seq(lruvec, max_seq, can_swap, false) ? -1 : 0; + return try_to_inc_max_seq(lruvec, max_seq, swappiness, false) ? 
-1 : 0; } static bool should_abort_scan(struct lruvec *lruvec, struct scan_control *sc) @@ -5294,8 +5263,7 @@ static void lru_gen_seq_show_full(struct seq_file *m, struct lruvec *lruvec, s = "rep"; n[0] = atomic_long_read(&lrugen->refaulted[hist][type][tier]); n[1] = atomic_long_read(&lrugen->evicted[hist][type][tier]); - if (tier) - n[2] = READ_ONCE(lrugen->protected[hist][type][tier - 1]); + n[2] = READ_ONCE(lrugen->protected[hist][type][tier]); } for (i = 0; i < 3; i++) @@ -5350,7 +5318,7 @@ static int lru_gen_seq_show(struct seq_file *m, void *v) seq_printf(m, " node %5d\n", nid); if (!full) - seq = min_seq[LRU_GEN_ANON]; + seq = evictable_min_seq(min_seq, MAX_SWAPPINESS / 2); else if (max_seq >= MAX_NR_GENS) seq = max_seq - MAX_NR_GENS + 1; else @@ -5390,23 +5358,14 @@ static const struct seq_operations lru_gen_seq_ops = { }; static int run_aging(struct lruvec *lruvec, unsigned long seq, - bool can_swap, bool force_scan) + int swappiness, bool force_scan) { DEFINE_MAX_SEQ(lruvec); - DEFINE_MIN_SEQ(lruvec); - - if (seq < max_seq) - return 0; if (seq > max_seq) return -EINVAL; - if (!force_scan && min_seq[!can_swap] + MAX_NR_GENS - 1 <= max_seq) - return -ERANGE; - - try_to_inc_max_seq(lruvec, max_seq, can_swap, force_scan); - - return 0; + return try_to_inc_max_seq(lruvec, max_seq, swappiness, force_scan) ? 
0 : -EEXIST; } static int run_eviction(struct lruvec *lruvec, unsigned long seq, struct scan_control *sc, @@ -5422,7 +5381,7 @@ static int run_eviction(struct lruvec *lruvec, unsigned long seq, struct scan_co while (!signal_pending(current)) { DEFINE_MIN_SEQ(lruvec); - if (seq < min_seq[!swappiness]) + if (seq < evictable_min_seq(min_seq, swappiness)) return 0; if (sc->nr_reclaimed >= nr_to_reclaim

From patchwork Sat Dec 7 22:15:20 2024
X-Patchwork-Submitter: Yu Zhao
X-Patchwork-Id: 13898425
Date: Sat, 7 Dec 2024 15:15:20 -0700
In-Reply-To: <20241207221522.2250311-1-yuzhao@google.com>
Mime-Version: 1.0
References: <20241207221522.2250311-1-yuzhao@google.com>
Message-ID: <20241207221522.2250311-5-yuzhao@google.com>
Subject: [PATCH mm-unstable v3 4/6] mm/mglru: rework type selection
From: Yu Zhao
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Yu Zhao , David
Stevens , Kalesh Singh

With anon and file min_seq being able to move
independently, rework type selection so that it is based on the total refaults from all tiers of each type. Also allow a type to be selected until that type reaches MIN_NR_GENS, and therefore abs_diff(min_seq[0], min_seq[1]) can now be 2 (MAX_NR_GENS - MIN_NR_GENS) instead of 1.

Since some tiers of a selected type can have higher refaults than the first tier of the other type, use a less disparate gain factor (2:3 instead of 1:2), so that those tiers in the selected type are better protected.

As an intermediate step to the final optimization, this change by itself should not have userspace-visible effects beyond performance.

Reported-by: David Stevens
Signed-off-by: Yu Zhao
Tested-by: Kalesh Singh
---
 mm/vmscan.c | 82 +++++++++++++++++------------------------------
 1 file changed, 26 insertions(+), 56 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c index 00a5aff3db42..02b01ae2bdbb 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -3089,15 +3089,20 @@ struct ctrl_pos { static void read_ctrl_pos(struct lruvec *lruvec, int type, int tier, int gain, struct ctrl_pos *pos) { + int i; struct lru_gen_folio *lrugen = &lruvec->lrugen; int hist = lru_hist_from_seq(lrugen->min_seq[type]); - pos->refaulted = lrugen->avg_refaulted[type][tier] + - atomic_long_read(&lrugen->refaulted[hist][type][tier]); - pos->total = lrugen->avg_total[type][tier] + - lrugen->protected[hist][type][tier] + - atomic_long_read(&lrugen->evicted[hist][type][tier]); pos->gain = gain; + pos->refaulted = pos->total = 0; + + for (i = tier % MAX_NR_TIERS; i <= min(tier, MAX_NR_TIERS - 1); i++) { + pos->refaulted += lrugen->avg_refaulted[type][i] + + atomic_long_read(&lrugen->refaulted[hist][type][i]); + pos->total += lrugen->avg_total[type][i] + + lrugen->protected[hist][type][i] + + atomic_long_read(&lrugen->evicted[hist][type][i]); + } } static void reset_ctrl_pos(struct lruvec *lruvec, int type, bool carryover) @@ -4480,13 +4485,13 @@ static int get_tier_idx(struct lruvec *lruvec, int type) struct 
ctrl_pos sp, pv; /* - * To leave a margin for fluctuations, use a larger gain factor (1:2). + * To leave a margin for fluctuations, use a larger gain factor (2:3). * This value is chosen because any other tier would have at least twice * as many refaults as the first tier. */ - read_ctrl_pos(lruvec, type, 0, 1, &sp); + read_ctrl_pos(lruvec, type, 0, 2, &sp); for (tier = 1; tier < MAX_NR_TIERS; tier++) { - read_ctrl_pos(lruvec, type, tier, 2, &pv); + read_ctrl_pos(lruvec, type, tier, 3, &pv); if (!positive_ctrl_err(&sp, &pv)) break; } @@ -4494,68 +4499,34 @@ static int get_tier_idx(struct lruvec *lruvec, int type) return tier - 1; } -static int get_type_to_scan(struct lruvec *lruvec, int swappiness, int *tier_idx) +static int get_type_to_scan(struct lruvec *lruvec, int swappiness) { - int type, tier; struct ctrl_pos sp, pv; - int gain[ANON_AND_FILE] = { swappiness, MAX_SWAPPINESS - swappiness }; + if (!swappiness) + return LRU_GEN_FILE; + + if (swappiness == MAX_SWAPPINESS) + return LRU_GEN_ANON; /* - * Compare the first tier of anon with that of file to determine which - * type to scan. Also need to compare other tiers of the selected type - * with the first tier of the other type to determine the last tier (of - * the selected type) to evict. + * Compare the sum of all tiers of anon with that of file to determine + * which type to scan. 
*/ - read_ctrl_pos(lruvec, LRU_GEN_ANON, 0, gain[LRU_GEN_ANON], &sp); - read_ctrl_pos(lruvec, LRU_GEN_FILE, 0, gain[LRU_GEN_FILE], &pv); - type = positive_ctrl_err(&sp, &pv); + read_ctrl_pos(lruvec, LRU_GEN_ANON, MAX_NR_TIERS, swappiness, &sp); + read_ctrl_pos(lruvec, LRU_GEN_FILE, MAX_NR_TIERS, MAX_SWAPPINESS - swappiness, &pv); - read_ctrl_pos(lruvec, !type, 0, gain[!type], &sp); - for (tier = 1; tier < MAX_NR_TIERS; tier++) { - read_ctrl_pos(lruvec, type, tier, gain[type], &pv); - if (!positive_ctrl_err(&sp, &pv)) - break; - } - - *tier_idx = tier - 1; - - return type; + return positive_ctrl_err(&sp, &pv); } static int isolate_folios(struct lruvec *lruvec, struct scan_control *sc, int swappiness, int *type_scanned, struct list_head *list) { int i; - int type; - int tier = -1; - DEFINE_MIN_SEQ(lruvec); - - /* - * Try to make the obvious choice first, and if anon and file are both - * available from the same generation, - * 1. Interpret swappiness 1 as file first and MAX_SWAPPINESS as anon - * first. - * 2. If !__GFP_IO, file first since clean pagecache is more likely to - * exist than clean swapcache. 
- */ - if (!swappiness) - type = LRU_GEN_FILE; - else if (min_seq[LRU_GEN_ANON] < min_seq[LRU_GEN_FILE]) - type = LRU_GEN_ANON; - else if (swappiness == 1) - type = LRU_GEN_FILE; - else if (swappiness == MAX_SWAPPINESS) - type = LRU_GEN_ANON; - else if (!(sc->gfp_mask & __GFP_IO)) - type = LRU_GEN_FILE; - else - type = get_type_to_scan(lruvec, swappiness, &tier); + int type = get_type_to_scan(lruvec, swappiness); for_each_evictable_type(i, swappiness) { int scanned; - - if (tier < 0) - tier = get_tier_idx(lruvec, type); + int tier = get_tier_idx(lruvec, type); *type_scanned = type; @@ -4564,7 +4535,6 @@ static int isolate_folios(struct lruvec *lruvec, struct scan_control *sc, int sw return scanned; type = !type; - tier = -1; } return 0;

From patchwork Sat Dec 7 22:15:21 2024
X-Patchwork-Submitter: Yu Zhao
X-Patchwork-Id: 13898426
Dec 2024 14:15:40 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1733609739; x=1734214539; darn=kvack.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=KW2cgcwHB51Y5IYZaisd+Bp0HxAyG3VEQ4AUms4DlWs=; b=nOzZE1Oe/Us/vhLafHntA9ArD8CDnRotNoYUx2H67jT9Klf3Ef+6WW1jIrzlsA+zNv HeG+bN7LqPyoXcJeuG8enDfZhFQFAGy4uRO1vChcXjBT2e31jE6LNjPD7LDViC7d3XuZ +OkdwyU8Q5ddPIVlnKc3aT7nCtYCaV3/1hLWYwHe9Rj5LINZBCNikzO2NT4TkniNvwJX PzKdZKaPSvlkjJaZqmSEnOMmKxsS7migtVT07nIfOSkar6eFde3C+aRAQR36Fh0JS4QB ku3hJOAVuG6rxo17j7kmJlbkw1DXJIlQ/nddyYEuu5VH/MvTyxlyOLTDcj9G37gVW4Sz +b7g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1733609739; x=1734214539; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=KW2cgcwHB51Y5IYZaisd+Bp0HxAyG3VEQ4AUms4DlWs=; b=WbR8IB1l9EUHF/78r43Zm89vrxHcmcfPxx3XLvFSaCMXZ7JC4L0DPM4j86MKA2IX9I ySlBSrXCKQW33eDZgfnntBvvk+SBUH2ilfM59NW3kk4A7afuGqKUarFkJrAaMqRk/v5i gvzfLDpaJ15PkiKx+RPt1N/AYF7rjqrnkIUQ8PERAGoGkCUTJjFU2EVNEEn2Dv3MuAwU rHM02g74iJfswEu8/GFn21V0Kew9iExXzWidXEu4nrt69yNVZAyKGYtk7yvg61EOOkEx WdqF2KpadvvLmvD9g4EXucEe8N+nQBMlqsE81+6gYSIfCTnfy0ekh4GP/yyqSHUbnjxT 51Hg== X-Gm-Message-State: AOJu0Yz1rJyJ0o82JD0IXfO7tlN8NsUinuScukAoMe5wpNWJa2+9lbuh t5Jmx2/gWJkEFZbIfO2z6lZPO28LwRgQAHM9i+QmotvU/fZVE4nO5aVQ+C7ZO/K4VvnfIJGpZVu Ipw== X-Google-Smtp-Source: AGHT+IGLJjHyW+Ykmm2+aL9lWknHp1gQ6xAIWeFiqeIX5Kn/EtyOEYWZv/R88rJ8RijCnornkXxmr9Is4Ew= X-Received: from pjboi8.prod.google.com ([2002:a17:90b:3a08:b0:2ea:4a74:ac2]) (user=yuzhao job=prod-delivery.src-stubby-dispatcher) by 2002:a17:90b:2d05:b0:2ee:7c02:8f08 with SMTP id 98e67ed59e1d1-2ef6ab29d43mr12407603a91.37.1733609739414; Sat, 07 Dec 2024 14:15:39 -0800 (PST) Date: Sat, 7 Dec 2024 15:15:21 -0700 In-Reply-To: <20241207221522.2250311-1-yuzhao@google.com> Mime-Version: 1.0 References: 
<20241207221522.2250311-1-yuzhao@google.com> X-Mailer: git-send-email 2.47.0.338.g60cca15819-goog Message-ID: <20241207221522.2250311-6-yuzhao@google.com> Subject: [PATCH mm-unstable v3 5/6] mm/mglru: rework refault detection From: Yu Zhao To: Andrew Morton Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Yu Zhao , Kairui Song , Kalesh Singh X-Rspamd-Queue-Id: A221FC0010 X-Rspamd-Server: rspam12 X-Stat-Signature: iydzedepnh74jxmbu6q9g65an97a1hnr X-Rspam-User: X-HE-Tag: 1733609720-386006 X-HE-Meta: U2FsdGVkX19Z6oYhfbwg0ppJBYaWJVCFZMmKKux508bhxdhY5G7cdj/Ptek0v100WON1THl2fRYTtciA0rv9rBVHlRMnL/EHF0p03qrBzt0oSu1i9YyX8taq3gEQuo1vDK8PUj5Bs6vlp7SqF0pb+9O1F2DGQ5o2yqu2fbgoFhzDnxiag4Rd6VYOSeMIfRAC7g1Q/T1FwfNPC+JNH9JIodtzoOOFNfov+NuhVauUMN4v0DVC6anVLhvt/eH3EdVoipWeJeI6pqUICCtrna5Yle0bPZG62K5JVx2kRG4XfFQXsVe9yp91+MiaZZdPGpuuicCKVMqniTBwTnmjncL8Wvs/6nA4ZopKYXl7Irn3eRqe8gCdhc8slgd0lJIYdAyh3CL2PUzZvrfAQ4wIaJe9F+15dq1iQagVD4OQKoG3BTHKgfx+r0yS3RmuC98xzFAVObXn7dXRmxNPaL0uIyiMJLQiro8hhyS6Tq+eEhnxFK2D9YHkITZ5SFnhTUzK/wNpKYILEfJ5xBl4jvuT9fz5YJsvM5/vQhdIRPIXoZwNHiyTlFSu6uc8n7aCq8W2eedQTuaiLvTqrMA5ZsGJ3dkArCxE1tMgxOkFg0TOUBczlQgbalESEiDuGvWzfFGmTdrbl2NJL7uDxIl69MWJETUjrpdTRO55/vqDzJ5aBb+IHBQY7pW7Zr+EFtkFFyspvPmeWUATnzvY1QWFd/nIB9X+gDnvGo5EEoecI8H8uFm8Q+MQCVO3NE40HQdRsI8zXoyOPkDP5/w263kca592O5+VaNO0tu4pgUzv4FQr210S/NTZlYUnjHwMi9lGgP91p2UUYt7AyHU0qvpFKKpdC5YIPvJ5K6fdmpLgysjPAkioDA8Z0F0wqfo9Xp59AnTNScXt13xELuB3W1GvD8HEmFUV2BbWBdzVe6etyAHNqgGSSNa4950slfYdejQPR4HuWsJ+S6cJl/FU1/EoyuW1ftu MGoj9wxw 
With anon and file min_seq being able to move independently, rework
workingset protection as well so that the comparison of refaults between
anon and file is always on an equal footing.

Specifically, make lru_gen_test_recent() return true for refaults
happening within the distance of MAX_NR_GENS. For example, if min_seq of
a type is max_seq-MIN_NR_GENS, refaults from min_seq-1, i.e.,
max_seq-MIN_NR_GENS-1, are also considered recent, since the distance
max_seq-(max_seq-MIN_NR_GENS-1), i.e., MIN_NR_GENS+1, is less than
MAX_NR_GENS.

As an intermediate step to the final optimization, this change by itself
should not have userspace-visible effects beyond performance.
Reported-by: Kairui Song
Closes: https://lore.kernel.org/CAOUHufahuWcKf5f1Sg3emnqX+cODuR=2TQo7T4Gr-QYLujn4RA@mail.gmail.com/
Signed-off-by: Yu Zhao
Tested-by: Kalesh Singh
---
 mm/workingset.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/mm/workingset.c b/mm/workingset.c
index ad181d1b8cf1..2c310c29f51e 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -260,11 +260,11 @@ static void *lru_gen_eviction(struct folio *folio)
  * Tests if the shadow entry is for a folio that was recently evicted.
  * Fills in @lruvec, @token, @workingset with the values unpacked from shadow.
  */
-static bool lru_gen_test_recent(void *shadow, bool file, struct lruvec **lruvec,
+static bool lru_gen_test_recent(void *shadow, struct lruvec **lruvec,
 				unsigned long *token, bool *workingset)
 {
 	int memcg_id;
-	unsigned long min_seq;
+	unsigned long max_seq;
 	struct mem_cgroup *memcg;
 	struct pglist_data *pgdat;
 
@@ -273,8 +273,10 @@ static bool lru_gen_test_recent(void *shadow, bool file, struct lruvec **lruvec,
 	memcg = mem_cgroup_from_id(memcg_id);
 	*lruvec = mem_cgroup_lruvec(memcg, pgdat);
 
-	min_seq = READ_ONCE((*lruvec)->lrugen.min_seq[file]);
-	return (*token >> LRU_REFS_WIDTH) == (min_seq & (EVICTION_MASK >> LRU_REFS_WIDTH));
+	max_seq = READ_ONCE((*lruvec)->lrugen.max_seq);
+	max_seq &= EVICTION_MASK >> LRU_REFS_WIDTH;
+
+	return abs_diff(max_seq, *token >> LRU_REFS_WIDTH) < MAX_NR_GENS;
 }
 
 static void lru_gen_refault(struct folio *folio, void *shadow)
@@ -290,7 +292,7 @@ static void lru_gen_refault(struct folio *folio, void *shadow)
 
 	rcu_read_lock();
 
-	recent = lru_gen_test_recent(shadow, type, &lruvec, &token, &workingset);
+	recent = lru_gen_test_recent(shadow, &lruvec, &token, &workingset);
 	if (lruvec != folio_lruvec(folio))
 		goto unlock;
 
@@ -331,7 +333,7 @@ static void *lru_gen_eviction(struct folio *folio)
 	return NULL;
 }
 
-static bool lru_gen_test_recent(void *shadow, bool file, struct lruvec **lruvec,
+static bool lru_gen_test_recent(void *shadow, struct lruvec **lruvec,
 				unsigned long *token, bool *workingset)
 {
 	return false;
@@ -432,8 +434,7 @@ bool workingset_test_recent(void *shadow, bool file, bool *workingset,
 	bool recent;
 
 	rcu_read_lock();
-	recent = lru_gen_test_recent(shadow, file, &eviction_lruvec,
-				     &eviction, workingset);
+	recent = lru_gen_test_recent(shadow, &eviction_lruvec, &eviction, workingset);
 	rcu_read_unlock();
 	return recent;
 }

From patchwork Sat Dec 7 22:15:22 2024
Date: Sat, 7 Dec 2024 15:15:22 -0700
In-Reply-To: <20241207221522.2250311-1-yuzhao@google.com>
References: <20241207221522.2250311-1-yuzhao@google.com>
Message-ID: <20241207221522.2250311-7-yuzhao@google.com>
Subject: [PATCH mm-unstable v3 6/6] mm/mglru: rework workingset protection
From: Yu Zhao
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Yu Zhao, Kairui Song, Kalesh Singh

With
the aging feedback no longer considering the distribution of folios
in each generation, rework workingset protection to better distribute
folios across MAX_NR_GENS. This is achieved by reusing PG_workingset and
PG_referenced/LRU_REFS_FLAGS in a slightly different way.

For folios accessed multiple times through file descriptors, make
lru_gen_inc_refs() set additional bits of LRU_REFS_WIDTH in folio->flags
after PG_referenced, then PG_workingset after LRU_REFS_WIDTH. After all
its bits are set, i.e., LRU_REFS_FLAGS|BIT(PG_workingset), a folio is
lazily promoted into the second oldest generation in the eviction path.
And when folio_inc_gen() does that, it clears LRU_REFS_FLAGS so that
lru_gen_inc_refs() can start over. For this case, LRU_REFS_MASK is only
valid when PG_referenced is set.

For folios accessed multiple times through page tables, folio_update_gen()
from a page table walk or lru_gen_set_refs() from a rmap walk sets
PG_referenced after the accessed bit is cleared for the first time.
Thereafter, those two paths set PG_workingset and promote folios to the
youngest generation. Like folio_inc_gen(), when folio_update_gen() does
that, it also clears PG_referenced. For this case, LRU_REFS_MASK is not
used.

In both cases, after PG_workingset is set on a folio, it remains until
the folio is either reclaimed, or "deactivated" by lru_gen_clear_refs().
It can be set again if lru_gen_test_recent() returns true upon a refault.
When adding folios to the LRU lists, lru_gen_distance() distributes
them as follows:

+---------------------------------+---------------------------------+
| Accessed thru page tables       | Accessed thru file descriptors  |
+---------------------------------+---------------------------------+
| PG_active (set while isolated)  |                                 |
+----------------+----------------+----------------+----------------+
| PG_workingset  | PG_referenced  | PG_workingset  | LRU_REFS_FLAGS |
+---------------------------------+---------------------------------+
|<--------- MIN_NR_GENS --------->|                                 |
|<-------------------------- MAX_NR_GENS -------------------------->|

After this patch, some typical client and server workloads showed
improvements under heavy memory pressure. For example, Python TPC-C,
which was used to benchmark a different approach [1] to better detect
refault distances, showed a significant decrease in total refaults:

                            Before      After      Change
  Time (seconds)            10801       10801      0%
  Executed (transactions)   41472       43663      +5%
  workingset_nodes          109070      120244     +10%
  workingset_refault_anon   5019627     7281831    +45%
  workingset_refault_file   1294678786  554855564  -57%
  workingset_refault_total  1299698413  562137395  -57%

[1] https://lore.kernel.org/20230920190244.16839-1-ryncsn@gmail.com/

Reported-by: Kairui Song
Closes: https://lore.kernel.org/CAOUHufahuWcKf5f1Sg3emnqX+cODuR=2TQo7T4Gr-QYLujn4RA@mail.gmail.com/
Signed-off-by: Yu Zhao
Tested-by: Kalesh Singh
Tested-by: kernel test robot
---
 include/linux/mm_inline.h |  94 ++++++++++++------------
 include/linux/mmzone.h    |  82 +++++++++++++--------
 mm/swap.c                 |  23 +++---
 mm/vmscan.c               | 145 ++++++++++++++++++++++----------------
 mm/workingset.c           |  29 ++++----
 5 files changed, 208 insertions(+), 165 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index 34e5097182a0..3fcf5fa797fe 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -133,31 +133,25 @@ static inline int lru_hist_from_seq(unsigned long seq)
 	return seq %
NR_HIST_GENS; } -static inline int lru_tier_from_refs(int refs) +static inline int lru_tier_from_refs(int refs, bool workingset) { VM_WARN_ON_ONCE(refs > BIT(LRU_REFS_WIDTH)); - /* see the comment in folio_lru_refs() */ - return order_base_2(refs + 1); + /* see the comment on MAX_NR_TIERS */ + return workingset ? MAX_NR_TIERS - 1 : order_base_2(refs); } static inline int folio_lru_refs(struct folio *folio) { unsigned long flags = READ_ONCE(folio->flags); - bool workingset = flags & BIT(PG_workingset); + if (!(flags & BIT(PG_referenced))) + return 0; /* - * Return the number of accesses beyond PG_referenced, i.e., N-1 if the - * total number of accesses is N>1, since N=0,1 both map to the first - * tier. lru_tier_from_refs() will account for this off-by-one. Also see - * the comment on MAX_NR_TIERS. + * Return the total number of accesses including PG_referenced. Also see + * the comment on LRU_REFS_FLAGS. */ - return ((flags & LRU_REFS_MASK) >> LRU_REFS_PGOFF) + workingset; -} - -static inline void folio_clear_lru_refs(struct folio *folio) -{ - set_mask_bits(&folio->flags, LRU_REFS_MASK | LRU_REFS_FLAGS, 0); + return ((flags & LRU_REFS_MASK) >> LRU_REFS_PGOFF) + 1; } static inline int folio_lru_gen(struct folio *folio) @@ -223,11 +217,46 @@ static inline void lru_gen_update_size(struct lruvec *lruvec, struct folio *foli VM_WARN_ON_ONCE(lru_gen_is_active(lruvec, old_gen) && !lru_gen_is_active(lruvec, new_gen)); } +static inline int lru_gen_distance(struct folio *folio, bool reclaiming) +{ + /* + * Distance until eviction (larger values provide stronger protection): + * +-------------------------------------+-------------------------------------+ + * | Accessed through page tables and | Accessed through file descriptors | + * | promoted by folio_update_gen() | and protected by folio_inc_gen() | + * +-------------------------------------+-------------------------------------+ + * | PG_active (only set while isolated) | | + * 
+------------------+------------------+------------------+------------------+ + * | PG_workingset | PG_referenced | PG_workingset | LRU_REFS_FLAGS | + * +-------------------------------------+-------------------------------------+ + * | 3 | 2 | 1 | 0 | + * +-------------------------------------+-------------------------------------+ + * |<----------- MIN_NR_GENS ----------->| | + * |<------------------------------ MAX_NR_GENS ------------------------------>| + */ + if (reclaiming) + return 0; + + if (folio_test_active(folio)) + return MIN_NR_GENS + folio_test_workingset(folio); + + if (folio_test_workingset(folio)) + return MIN_NR_GENS - 1; + + if (!folio_is_file_lru(folio) && !folio_test_swapcache(folio)) + return MIN_NR_GENS - 1; + + if (folio_test_reclaim(folio) && (folio_test_dirty(folio) || folio_test_writeback(folio))) + return MIN_NR_GENS - 1; + + return 0; +} + static inline bool lru_gen_add_folio(struct lruvec *lruvec, struct folio *folio, bool reclaiming) { + int dist; unsigned long seq; unsigned long flags; - unsigned long mask; int gen = folio_lru_gen(folio); int type = folio_is_file_lru(folio); int zone = folio_zonenum(folio); @@ -237,40 +266,17 @@ static inline bool lru_gen_add_folio(struct lruvec *lruvec, struct folio *folio, if (folio_test_unevictable(folio) || !lrugen->enabled) return false; - /* - * There are four common cases for this page: - * 1. If it's hot, i.e., freshly faulted in, add it to the youngest - * generation, and it's protected over the rest below. - * 2. If it can't be evicted immediately, i.e., a dirty page pending - * writeback, add it to the second youngest generation. - * 3. If it should be evicted first, e.g., cold and clean from - * folio_rotate_reclaimable(), add it to the oldest generation. - * 4. Everything else falls between 2 & 3 above and is added to the - * second oldest generation if it's considered inactive, or the - * oldest generation otherwise. See lru_gen_is_active(). 
- */ - if (folio_test_active(folio)) - seq = lrugen->max_seq; - else if ((type == LRU_GEN_ANON && !folio_test_swapcache(folio)) || - (folio_test_reclaim(folio) && - (folio_test_dirty(folio) || folio_test_writeback(folio)))) - seq = lrugen->max_seq - 1; - else if (reclaiming || lrugen->min_seq[type] + MIN_NR_GENS >= lrugen->max_seq) - seq = lrugen->min_seq[type]; + + dist = lru_gen_distance(folio, reclaiming); + if (dist < MIN_NR_GENS) + seq = lrugen->min_seq[type] + dist; else - seq = lrugen->min_seq[type] + 1; + seq = lrugen->max_seq + dist - MIN_NR_GENS - 1; gen = lru_gen_from_seq(seq); flags = (gen + 1UL) << LRU_GEN_PGOFF; /* see the comment on MIN_NR_GENS about PG_active */ - mask = LRU_GEN_MASK; - /* - * Don't clear PG_workingset here because it can affect PSI accounting - * if the activation is due to workingset refault. - */ - if (folio_test_active(folio)) - mask |= LRU_REFS_MASK | BIT(PG_referenced) | BIT(PG_active); - set_mask_bits(&folio->flags, mask, flags); + set_mask_bits(&folio->flags, LRU_GEN_MASK | BIT(PG_active), flags); lru_gen_update_size(lruvec, folio, -1, gen); /* for folio_rotate_reclaimable() */ diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h index b998ccc5c341..c7ad4d6e1618 100644 --- a/include/linux/mmzone.h +++ b/include/linux/mmzone.h @@ -332,66 +332,88 @@ enum lruvec_flags { #endif /* !__GENERATING_BOUNDS_H */ /* - * Evictable pages are divided into multiple generations. The youngest and the + * Evictable folios are divided into multiple generations. The youngest and the * oldest generation numbers, max_seq and min_seq, are monotonically increasing. * They form a sliding window of a variable size [MIN_NR_GENS, MAX_NR_GENS]. An * offset within MAX_NR_GENS, i.e., gen, indexes the LRU list of the * corresponding generation. The gen counter in folio->flags stores gen+1 while - * a page is on one of lrugen->folios[]. Otherwise it stores 0. + * a folio is on one of lrugen->folios[]. Otherwise it stores 0. 
* - * A page is added to the youngest generation on faulting. The aging needs to - * check the accessed bit at least twice before handing this page over to the - * eviction. The first check takes care of the accessed bit set on the initial - * fault; the second check makes sure this page hasn't been used since then. - * This process, AKA second chance, requires a minimum of two generations, - * hence MIN_NR_GENS. And to maintain ABI compatibility with the active/inactive - * LRU, e.g., /proc/vmstat, these two generations are considered active; the - * rest of generations, if they exist, are considered inactive. See - * lru_gen_is_active(). + * After a folio is faulted in, the aging needs to check the accessed bit at + * least twice before handing this folio over to the eviction. The first check + * clears the accessed bit from the initial fault; the second check makes sure + * this folio hasn't been used since then. This process, AKA second chance, + * requires a minimum of two generations, hence MIN_NR_GENS. And to maintain ABI + * compatibility with the active/inactive LRU, e.g., /proc/vmstat, these two + * generations are considered active; the rest of generations, if they exist, + * are considered inactive. See lru_gen_is_active(). * - * PG_active is always cleared while a page is on one of lrugen->folios[] so - * that the aging needs not to worry about it. And it's set again when a page - * considered active is isolated for non-reclaiming purposes, e.g., migration. - * See lru_gen_add_folio() and lru_gen_del_folio(). + * PG_active is always cleared while a folio is on one of lrugen->folios[] so + * that the sliding window needs not to worry about it. And it's set again when + * a folio considered active is isolated for non-reclaiming purposes, e.g., + * migration. See lru_gen_add_folio() and lru_gen_del_folio(). 
* * MAX_NR_GENS is set to 4 so that the multi-gen LRU can support twice the * number of categories of the active/inactive LRU when keeping track of * accesses through page tables. This requires order_base_2(MAX_NR_GENS+1) bits - * in folio->flags. + * in folio->flags, masked by LRU_GEN_MASK. */ #define MIN_NR_GENS 2U #define MAX_NR_GENS 4U /* - * Each generation is divided into multiple tiers. A page accessed N times - * through file descriptors is in tier order_base_2(N). A page in the first tier - * (N=0,1) is marked by PG_referenced unless it was faulted in through page - * tables or read ahead. A page in any other tier (N>1) is marked by - * PG_referenced and PG_workingset. This implies a minimum of two tiers is - * supported without using additional bits in folio->flags. + * Each generation is divided into multiple tiers. A folio accessed N times + * through file descriptors is in tier order_base_2(N). A folio in the first + * tier (N=0,1) is marked by PG_referenced unless it was faulted in through page + * tables or read ahead. A folio in the last tier (MAX_NR_TIERS-1) is marked by + * PG_workingset. A folio in any other tier (1flags. * * In contrast to moving across generations which requires the LRU lock, moving * across tiers only involves atomic operations on folio->flags and therefore * has a negligible cost in the buffered access path. In the eviction path, - * comparisons of refaulted/(evicted+protected) from the first tier and the - * rest infer whether pages accessed multiple times through file descriptors - * are statistically hot and thus worth protecting. + * comparisons of refaulted/(evicted+protected) from the first tier and the rest + * infer whether folios accessed multiple times through file descriptors are + * statistically hot and thus worth protecting. * * MAX_NR_TIERS is set to 4 so that the multi-gen LRU can support twice the * number of categories of the active/inactive LRU when keeping track of * accesses through file descriptors. 
This uses MAX_NR_TIERS-2 spare bits in - * folio->flags. + * folio->flags, masked by LRU_REFS_MASK. */ #define MAX_NR_TIERS 4U #ifndef __GENERATING_BOUNDS_H -struct lruvec; -struct page_vma_mapped_walk; - #define LRU_GEN_MASK ((BIT(LRU_GEN_WIDTH) - 1) << LRU_GEN_PGOFF) #define LRU_REFS_MASK ((BIT(LRU_REFS_WIDTH) - 1) << LRU_REFS_PGOFF) +/* + * For folios accessed multiple times through file descriptors, + * lru_gen_inc_refs() sets additional bits of LRU_REFS_WIDTH in folio->flags + * after PG_referenced, then PG_workingset after LRU_REFS_WIDTH. After all its + * bits are set, i.e., LRU_REFS_FLAGS|BIT(PG_workingset), a folio is lazily + * promoted into the second oldest generation in the eviction path. And when + * folio_inc_gen() does that, it clears LRU_REFS_FLAGS so that + * lru_gen_inc_refs() can start over. Note that for this case, LRU_REFS_MASK is + * only valid when PG_referenced is set. + * + * For folios accessed multiple times through page tables, folio_update_gen() + * from a page table walk or lru_gen_set_refs() from a rmap walk sets + * PG_referenced after the accessed bit is cleared for the first time. + * Thereafter, those two paths set PG_workingset and promote folios to the + * youngest generation. Like folio_inc_gen(), folio_update_gen() also clears + * PG_referenced. Note that for this case, LRU_REFS_MASK is not used. + * + * For both cases above, after PG_workingset is set on a folio, it remains until + * this folio is either reclaimed, or "deactivated" by lru_gen_clear_refs(). It + * can be set again if lru_gen_test_recent() returns true upon a refault. 
+ */
+#define LRU_REFS_FLAGS	(LRU_REFS_MASK | BIT(PG_referenced))
+
+struct lruvec;
+struct page_vma_mapped_walk;
+
 #ifdef CONFIG_LRU_GEN
 
 enum {
@@ -406,8 +428,6 @@ enum {
 	NR_LRU_GEN_CAPS
 };
 
-#define LRU_REFS_FLAGS	(BIT(PG_referenced) | BIT(PG_workingset))
-
 #define MIN_LRU_BATCH		BITS_PER_LONG
 #define MAX_LRU_BATCH		(MIN_LRU_BATCH * 64)
 
diff --git a/mm/swap.c b/mm/swap.c
index 756b6c5b9af7..062c8565b899 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -387,24 +387,19 @@ static void lru_gen_inc_refs(struct folio *folio)
 	if (!folio_test_lru(folio) || folio_test_unevictable(folio))
 		return;
 
+	/* see the comment on LRU_REFS_FLAGS */
 	if (!folio_test_referenced(folio)) {
-		folio_set_referenced(folio);
+		set_mask_bits(&folio->flags, LRU_REFS_MASK, BIT(PG_referenced));
 		return;
 	}
 
-	if (!folio_test_workingset(folio)) {
-		folio_set_workingset(folio);
-		return;
-	}
-
-	/* see the comment on MAX_NR_TIERS */
 	do {
-		new_flags = old_flags & LRU_REFS_MASK;
-		if (new_flags == LRU_REFS_MASK)
-			break;
+		if ((old_flags & LRU_REFS_MASK) == LRU_REFS_MASK) {
+			folio_set_workingset(folio);
+			return;
+		}
 
-		new_flags += BIT(LRU_REFS_PGOFF);
-		new_flags |= old_flags & ~LRU_REFS_MASK;
+		new_flags = old_flags + BIT(LRU_REFS_PGOFF);
 	} while (!try_cmpxchg(&folio->flags, &old_flags, new_flags));
 }
 
@@ -416,7 +411,7 @@ static bool lru_gen_clear_refs(struct folio *folio)
 	if (!folio_test_lru(folio) || folio_test_unevictable(folio))
 		return true;
 
-	set_mask_bits(&folio->flags, LRU_REFS_MASK | LRU_REFS_FLAGS, 0);
+	set_mask_bits(&folio->flags, LRU_REFS_FLAGS | BIT(PG_workingset), 0);
 
 	lrugen = &folio_lruvec(folio)->lrugen;
 	/* whether can do without shuffling under the LRU lock */
@@ -498,7 +493,7 @@ void folio_add_lru(struct folio *folio)
 			folio_test_unevictable(folio), folio);
 	VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
 
-	/* see the comment in lru_gen_add_folio() */
+	/* see the comment in lru_gen_distance() */
 	if (lru_gen_enabled() && !folio_test_unevictable(folio) &&
 	    lru_gen_in_fault() && !(current->flags & PF_MEMALLOC))
 		folio_set_active(folio);
 
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 02b01ae2bdbb..5e03a61c894f 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -862,6 +862,31 @@ enum folio_references {
 	FOLIOREF_ACTIVATE,
 };
 
+#ifdef CONFIG_LRU_GEN
+/*
+ * Only used on a mapped folio in the eviction (rmap walk) path, where promotion
+ * needs to be done by taking the folio off the LRU list and then adding it back
+ * with PG_active set. In contrast, the aging (page table walk) path uses
+ * folio_update_gen().
+ */
+static bool lru_gen_set_refs(struct folio *folio)
+{
+	/* see the comment on LRU_REFS_FLAGS */
+	if (!folio_test_referenced(folio) && !folio_test_workingset(folio)) {
+		set_mask_bits(&folio->flags, LRU_REFS_MASK, BIT(PG_referenced));
+		return false;
+	}
+
+	set_mask_bits(&folio->flags, LRU_REFS_FLAGS, BIT(PG_workingset));
+	return true;
+}
+#else
+static bool lru_gen_set_refs(struct folio *folio)
+{
+	return false;
+}
+#endif /* CONFIG_LRU_GEN */
+
 static enum folio_references folio_check_references(struct folio *folio,
 						   struct scan_control *sc)
 {
@@ -870,7 +895,6 @@ static enum folio_references folio_check_references(struct folio *folio,
 
 	referenced_ptes = folio_referenced(folio, 1, sc->target_mem_cgroup,
 					   &vm_flags);
-	referenced_folio = folio_test_clear_referenced(folio);
 
 	/*
 	 * The supposedly reclaimable folio was found to be in a VM_LOCKED vma.
@@ -888,6 +912,15 @@ static enum folio_references folio_check_references(struct folio *folio,
 	if (referenced_ptes == -1)
 		return FOLIOREF_KEEP;
 
+	if (lru_gen_enabled()) {
+		if (!referenced_ptes)
+			return FOLIOREF_RECLAIM;
+
+		return lru_gen_set_refs(folio) ? FOLIOREF_ACTIVATE : FOLIOREF_KEEP;
+	}
+
+	referenced_folio = folio_test_clear_referenced(folio);
+
 	if (referenced_ptes) {
 		/*
 		 * All mapped folios start out with page table
@@ -1092,11 +1125,6 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 		if (!sc->may_unmap && folio_mapped(folio))
 			goto keep_locked;
 
-		/* folio_update_gen() tried to promote this page? */
-		if (lru_gen_enabled() && !ignore_references &&
-		    folio_mapped(folio) && folio_test_referenced(folio))
-			goto keep_locked;
-
 		/*
 		 * The number of dirty pages determines if a node is marked
 		 * reclaim_congested. kswapd will stall and start writing
@@ -3163,16 +3191,19 @@ static int folio_update_gen(struct folio *folio, int gen)
 
 	VM_WARN_ON_ONCE(gen >= MAX_NR_GENS);
 
+	/* see the comment on LRU_REFS_FLAGS */
+	if (!folio_test_referenced(folio) && !folio_test_workingset(folio)) {
+		set_mask_bits(&folio->flags, LRU_REFS_MASK, BIT(PG_referenced));
+		return -1;
+	}
+
 	do {
 		/* lru_gen_del_folio() has isolated this page? */
-		if (!(old_flags & LRU_GEN_MASK)) {
-			/* for shrink_folio_list() */
-			new_flags = old_flags | BIT(PG_referenced);
-			continue;
-		}
+		if (!(old_flags & LRU_GEN_MASK))
+			return -1;
 
-		new_flags = old_flags & ~(LRU_GEN_MASK | LRU_REFS_MASK | LRU_REFS_FLAGS);
-		new_flags |= (gen + 1UL) << LRU_GEN_PGOFF;
+		new_flags = old_flags & ~(LRU_GEN_MASK | LRU_REFS_FLAGS);
+		new_flags |= ((gen + 1UL) << LRU_GEN_PGOFF) | BIT(PG_workingset);
 	} while (!try_cmpxchg(&folio->flags, &old_flags, new_flags));
 
 	return ((old_flags & LRU_GEN_MASK) >> LRU_GEN_PGOFF) - 1;
@@ -3196,7 +3227,7 @@ static int folio_inc_gen(struct lruvec *lruvec, struct folio *folio, bool reclai
 
 	new_gen = (old_gen + 1) % MAX_NR_GENS;
 
-	new_flags = old_flags & ~(LRU_GEN_MASK | LRU_REFS_MASK | LRU_REFS_FLAGS);
+	new_flags = old_flags & ~(LRU_GEN_MASK | LRU_REFS_FLAGS);
 	new_flags |= (new_gen + 1UL) << LRU_GEN_PGOFF;
 	/* for folio_end_writeback() */
 	if (reclaiming)
@@ -3374,9 +3405,11 @@ static unsigned long get_pmd_pfn(pmd_t pmd, struct vm_area_struct *vma, unsigned
 static struct folio *get_pfn_folio(unsigned long pfn, struct mem_cgroup *memcg,
 				   struct pglist_data *pgdat)
 {
-	struct folio *folio;
+	struct folio *folio = pfn_folio(pfn);
+
+	if (folio_lru_gen(folio) < 0)
+		return NULL;
 
-	folio = pfn_folio(pfn);
 	if (folio_nid(folio) != pgdat->node_id)
 		return NULL;
 
@@ -3753,8 +3786,7 @@ static bool inc_min_seq(struct lruvec *lruvec, int type, int swappiness)
 		while (!list_empty(head)) {
 			struct folio *folio = lru_to_folio(head);
 			int refs = folio_lru_refs(folio);
-			int tier = lru_tier_from_refs(refs);
-			int delta = folio_nr_pages(folio);
+			bool workingset = folio_test_workingset(folio);
 
 			VM_WARN_ON_ONCE_FOLIO(folio_test_unevictable(folio), folio);
 			VM_WARN_ON_ONCE_FOLIO(folio_test_active(folio), folio);
@@ -3764,8 +3796,14 @@ static bool inc_min_seq(struct lruvec *lruvec, int type, int swappiness)
 			new_gen = folio_inc_gen(lruvec, folio, false);
 			list_move_tail(&folio->lru, &lrugen->folios[new_gen][type][zone]);
 
-			WRITE_ONCE(lrugen->protected[hist][type][tier],
-				   lrugen->protected[hist][type][tier] + delta);
+			/* don't count the workingset being lazily promoted */
+			if (refs + workingset != BIT(LRU_REFS_WIDTH) + 1) {
+				int tier = lru_tier_from_refs(refs, workingset);
+				int delta = folio_nr_pages(folio);
+
+				WRITE_ONCE(lrugen->protected[hist][type][tier],
+					   lrugen->protected[hist][type][tier] + delta);
+			}
 
 			if (!--remaining)
 				return false;
@@ -4134,16 +4172,10 @@ bool lru_gen_look_around(struct page_vma_mapped_walk *pvmw)
 			old_gen = folio_update_gen(folio, new_gen);
 			if (old_gen >= 0 && old_gen != new_gen)
 				update_batch_size(walk, folio, old_gen, new_gen);
-
-			continue;
-		}
-
-		old_gen = folio_lru_gen(folio);
-		if (old_gen < 0)
-			folio_set_referenced(folio);
-		else if (old_gen != new_gen) {
-			folio_clear_lru_refs(folio);
-			folio_activate(folio);
+		} else if (lru_gen_set_refs(folio)) {
+			old_gen = folio_lru_gen(folio);
+			if (old_gen >= 0 && old_gen != new_gen)
+				folio_activate(folio);
 		}
 	}
 
@@ -4304,7 +4336,8 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
 	int zone = folio_zonenum(folio);
 	int delta = folio_nr_pages(folio);
 	int refs = folio_lru_refs(folio);
-	int tier = lru_tier_from_refs(refs);
+	bool workingset = folio_test_workingset(folio);
+	int tier = lru_tier_from_refs(refs, workingset);
 	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 
 	VM_WARN_ON_ONCE_FOLIO(gen >= MAX_NR_GENS, folio);
@@ -4326,14 +4359,17 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
 	}
 
 	/* protected */
-	if (tier > tier_idx || refs == BIT(LRU_REFS_WIDTH)) {
-		int hist = lru_hist_from_seq(lrugen->min_seq[type]);
-
+	if (tier > tier_idx || refs + workingset == BIT(LRU_REFS_WIDTH) + 1) {
 		gen = folio_inc_gen(lruvec, folio, false);
-		list_move_tail(&folio->lru, &lrugen->folios[gen][type][zone]);
+		list_move(&folio->lru, &lrugen->folios[gen][type][zone]);
 
-		WRITE_ONCE(lrugen->protected[hist][type][tier],
-			   lrugen->protected[hist][type][tier] + delta);
+		/* don't count the workingset being lazily promoted */
+		if (refs + workingset != BIT(LRU_REFS_WIDTH) + 1) {
+			int hist = lru_hist_from_seq(lrugen->min_seq[type]);
+
+			WRITE_ONCE(lrugen->protected[hist][type][tier],
+				   lrugen->protected[hist][type][tier] + delta);
+		}
 		return true;
 	}
 
@@ -4353,8 +4389,7 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
 	}
 
 	/* waiting for writeback */
-	if (folio_test_locked(folio) || writeback ||
-	    (type == LRU_GEN_FILE && dirty)) {
+	if (writeback || (type == LRU_GEN_FILE && dirty)) {
 		gen = folio_inc_gen(lruvec, folio, true);
 		list_move(&folio->lru, &lrugen->folios[gen][type][zone]);
 		return true;
@@ -4383,13 +4418,12 @@ static bool isolate_folio(struct lruvec *lruvec, struct folio *folio, struct sca
 		return false;
 	}
 
-	/* see the comment on MAX_NR_TIERS */
+	/* see the comment on LRU_REFS_FLAGS */
 	if (!folio_test_referenced(folio))
-		folio_clear_lru_refs(folio);
+		set_mask_bits(&folio->flags, LRU_REFS_MASK, 0);
 
 	/* for shrink_folio_list() */
 	folio_clear_reclaim(folio);
-	folio_clear_referenced(folio);
 
 	success = lru_gen_del_folio(lruvec, folio, true);
 	VM_WARN_ON_ONCE_FOLIO(!success, folio);
@@ -4585,25 +4619,16 @@ static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swap
 			continue;
 		}
 
-		if (folio_test_reclaim(folio) &&
-		    (folio_test_dirty(folio) || folio_test_writeback(folio))) {
-			/* restore LRU_REFS_FLAGS cleared by isolate_folio() */
-			if (folio_test_workingset(folio))
-				folio_set_referenced(folio);
-			continue;
-		}
-
-		if (skip_retry || folio_test_active(folio) || folio_test_referenced(folio) ||
-		    folio_mapped(folio) || folio_test_locked(folio) ||
-		    folio_test_dirty(folio) || folio_test_writeback(folio)) {
-			/* don't add rejected folios to the oldest generation */
-			set_mask_bits(&folio->flags, LRU_REFS_MASK | LRU_REFS_FLAGS,
-				      BIT(PG_active));
-			continue;
-		}
-
 		/* retry folios that may have missed folio_rotate_reclaimable() */
-		list_move(&folio->lru, &clean);
+		if (!skip_retry && !folio_test_active(folio) && !folio_mapped(folio) &&
+		    !folio_test_dirty(folio) && !folio_test_writeback(folio)) {
+			list_move(&folio->lru, &clean);
+			continue;
+		}
+
+		/* don't add rejected folios to the oldest generation */
+		if (!lru_gen_distance(folio, false))
+			set_mask_bits(&folio->flags, LRU_REFS_FLAGS, BIT(PG_active));
 	}
 
 	spin_lock_irq(&lruvec->lru_lock);
diff --git a/mm/workingset.c b/mm/workingset.c
index 2c310c29f51e..3662c0def77a 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -239,7 +239,8 @@ static void *lru_gen_eviction(struct folio *folio)
 	int type = folio_is_file_lru(folio);
 	int delta = folio_nr_pages(folio);
 	int refs = folio_lru_refs(folio);
-	int tier = lru_tier_from_refs(refs);
+	bool workingset = folio_test_workingset(folio);
+	int tier = lru_tier_from_refs(refs, workingset);
 	struct mem_cgroup *memcg = folio_memcg(folio);
 	struct pglist_data *pgdat = folio_pgdat(folio);
 
@@ -253,7 +254,7 @@ static void *lru_gen_eviction(struct folio *folio)
 	hist = lru_hist_from_seq(min_seq);
 	atomic_long_add(delta, &lrugen->evicted[hist][type][tier]);
 
-	return pack_shadow(mem_cgroup_id(memcg), pgdat, token, refs);
+	return pack_shadow(mem_cgroup_id(memcg), pgdat, token, workingset);
 }
 
 /*
@@ -304,24 +305,20 @@ static void lru_gen_refault(struct folio *folio, void *shadow)
 	lrugen = &lruvec->lrugen;
 	hist = lru_hist_from_seq(READ_ONCE(lrugen->min_seq[type]));
 
-	/* see the comment in folio_lru_refs() */
-	refs = (token & (BIT(LRU_REFS_WIDTH) - 1)) + workingset;
-	tier = lru_tier_from_refs(refs);
+	refs = (token & (BIT(LRU_REFS_WIDTH) - 1)) + 1;
+	tier = lru_tier_from_refs(refs, workingset);
 
 	atomic_long_add(delta, &lrugen->refaulted[hist][type][tier]);
-	mod_lruvec_state(lruvec, WORKINGSET_ACTIVATE_BASE + type, delta);
 
-	/*
-	 * Count the following two cases as stalls:
-	 * 1. For pages accessed through page tables, hotter pages pushed out
-	 *    hot pages which refaulted immediately.
-	 * 2. For pages accessed multiple times through file descriptors,
-	 *    they would have been protected by sort_folio().
-	 */
-	if (lru_gen_in_fault() || refs >= BIT(LRU_REFS_WIDTH) - 1) {
-		set_mask_bits(&folio->flags, 0, LRU_REFS_MASK | BIT(PG_workingset));
+	/* see folio_add_lru() where folio_set_active() happens */
+	if (lru_gen_in_fault())
+		mod_lruvec_state(lruvec, WORKINGSET_ACTIVATE_BASE + type, delta);
+
+	if (workingset) {
+		folio_set_workingset(folio);
 		mod_lruvec_state(lruvec, WORKINGSET_RESTORE_BASE + type, delta);
-	}
+	} else
+		set_mask_bits(&folio->flags, LRU_REFS_MASK, (refs - 1UL) << LRU_REFS_PGOFF);
unlock:
 	rcu_read_unlock();
}