From patchwork Wed Nov 29 03:21:49 2023
X-Patchwork-Submitter: Yosry Ahmed
X-Patchwork-Id: 13472171
Date: Wed, 29 Nov 2023 03:21:49 +0000
In-Reply-To: <20231129032154.3710765-1-yosryahmed@google.com>
References: <20231129032154.3710765-1-yosryahmed@google.com>
Message-ID: <20231129032154.3710765-2-yosryahmed@google.com>
Subject: [mm-unstable v4 1/5] mm: memcg: change flush_next_time to flush_last_time
From: Yosry Ahmed
To: Andrew Morton
Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Shakeel Butt,
 Muchun Song, Ivan Babrou, Tejun Heo, Michal Koutný, Waiman Long,
 kernel-team@cloudflare.com, Wei Xu, Greg Thelen, Domenico Cerasuolo,
 linux-mm@kvack.org, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
 Yosry Ahmed, Chris Li
flush_next_time is an inaccurate name. It is not the next time that
periodic flushing will happen; it is rather the next time that
ratelimited flushing can happen if the periodic flusher is late.

Simplify its semantics by just storing the timestamp of the last flush
instead, flush_last_time. Move the 2*FLUSH_TIME addition to
mem_cgroup_flush_stats_ratelimited(), and add a comment explaining it.
This way, all of the ratelimiting semantics live in one place.

No functional change intended.

Signed-off-by: Yosry Ahmed
Tested-by: Domenico Cerasuolo
Acked-by: Shakeel Butt
Acked-by: Chris Li (Google)
---
 mm/memcontrol.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index f88c8fd036897..61435bd037cb4 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -593,7 +593,7 @@ static DECLARE_DEFERRABLE_WORK(stats_flush_dwork, flush_memcg_stats_dwork);
 static DEFINE_PER_CPU(unsigned int, stats_updates);
 static atomic_t stats_flush_ongoing = ATOMIC_INIT(0);
 static atomic_t stats_flush_threshold = ATOMIC_INIT(0);
-static u64 flush_next_time;
+static u64 flush_last_time;
 
 #define FLUSH_TIME (2UL*HZ)
 
@@ -653,7 +653,7 @@ static void do_flush_stats(void)
 	    atomic_xchg(&stats_flush_ongoing, 1))
 		return;
 
-	WRITE_ONCE(flush_next_time, jiffies_64 + 2*FLUSH_TIME);
+	WRITE_ONCE(flush_last_time, jiffies_64);
 
 	cgroup_rstat_flush(root_mem_cgroup->css.cgroup);
 
@@ -669,7 +669,8 @@ void mem_cgroup_flush_stats(void)
 
 void mem_cgroup_flush_stats_ratelimited(void)
 {
-	if (time_after64(jiffies_64, READ_ONCE(flush_next_time)))
+	/* Only flush if the periodic flusher is one full cycle late */
+	if (time_after64(jiffies_64, READ_ONCE(flush_last_time) + 2*FLUSH_TIME))
 		mem_cgroup_flush_stats();
 }
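To make the equivalence of the two formulations concrete, here is a
small standalone userspace C sketch (not kernel code; the names,
FLUSH_TIME value, and clock are simplified stand-ins for jiffies_64 and
2*HZ):

#include <stdbool.h>
#include <stdint.h>

#define FLUSH_TIME 2	/* stand-in for the kernel's 2*HZ */

static uint64_t jiffies_now;	/* stand-in for jiffies_64 */

/* Old scheme: a precomputed deadline is stored at flush time. */
static bool should_flush_old(uint64_t flush_next_time)
{
	/* flush_next_time was written as last_flush + 2*FLUSH_TIME */
	return jiffies_now > flush_next_time;
}

/* New scheme: only the last flush time is stored; derive the deadline here. */
static bool should_flush_new(uint64_t flush_last_time)
{
	/* Only flush if the periodic flusher is one full cycle late. */
	return jiffies_now > flush_last_time + 2 * FLUSH_TIME;
}

Both checks fire at the same instant; the new form just keeps the
"one full cycle late" arithmetic next to the check that uses it.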
From patchwork Wed Nov 29 03:21:50 2023
X-Patchwork-Submitter: Yosry Ahmed
X-Patchwork-Id: 13472158
Date: Wed, 29 Nov 2023 03:21:50 +0000
In-Reply-To: <20231129032154.3710765-1-yosryahmed@google.com>
References: <20231129032154.3710765-1-yosryahmed@google.com>
Message-ID: <20231129032154.3710765-3-yosryahmed@google.com>
Subject: [mm-unstable v4 2/5] mm: memcg: move vmstats structs definition above flushing code
From: Yosry Ahmed
To: Andrew Morton
Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Shakeel Butt,
 Muchun Song, Ivan Babrou, Tejun Heo, Michal Koutný, Waiman Long,
 kernel-team@cloudflare.com, Wei Xu, Greg Thelen, Domenico Cerasuolo,
 linux-mm@kvack.org, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
 Yosry Ahmed
The following patch will make use of those structs in the flushing
code, so move their definitions (and a few other dependencies) a little
bit up to reduce the diff noise in the following patch.

No functional change intended.

Signed-off-by: Yosry Ahmed
Tested-by: Domenico Cerasuolo
Acked-by: Shakeel Butt
---
 mm/memcontrol.c | 148 ++++++++++++++++++++++++------------------------
 1 file changed, 74 insertions(+), 74 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 61435bd037cb4..cf05b97c1e824 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -573,6 +573,80 @@ mem_cgroup_largest_soft_limit_node(struct mem_cgroup_tree_per_node *mctz)
 	return mz;
 }
 
+/* Subset of vm_event_item to report for memcg event stats */
+static const unsigned int memcg_vm_event_stat[] = {
+	PGPGIN,
+	PGPGOUT,
+	PGSCAN_KSWAPD,
+	PGSCAN_DIRECT,
+	PGSCAN_KHUGEPAGED,
+	PGSTEAL_KSWAPD,
+	PGSTEAL_DIRECT,
+	PGSTEAL_KHUGEPAGED,
+	PGFAULT,
+	PGMAJFAULT,
+	PGREFILL,
+	PGACTIVATE,
+	PGDEACTIVATE,
+	PGLAZYFREE,
+	PGLAZYFREED,
+#if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
+	ZSWPIN,
+	ZSWPOUT,
+	ZSWP_WB,
+#endif
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	THP_FAULT_ALLOC,
+	THP_COLLAPSE_ALLOC,
+	THP_SWPOUT,
+	THP_SWPOUT_FALLBACK,
+#endif
+};
+
+#define NR_MEMCG_EVENTS ARRAY_SIZE(memcg_vm_event_stat)
+static int mem_cgroup_events_index[NR_VM_EVENT_ITEMS] __read_mostly;
+
+static void init_memcg_events(void)
+{
+	int i;
+
+	for (i = 0; i < NR_MEMCG_EVENTS; ++i)
+		mem_cgroup_events_index[memcg_vm_event_stat[i]] = i + 1;
+}
+
+static inline int memcg_events_index(enum vm_event_item idx)
+{
+	return mem_cgroup_events_index[idx] - 1;
+}
+
+struct memcg_vmstats_percpu {
+	/* Local (CPU and cgroup) page state & events */
+	long state[MEMCG_NR_STAT];
+	unsigned long events[NR_MEMCG_EVENTS];
+
+	/* Delta calculation for lockless upward propagation */
+	long state_prev[MEMCG_NR_STAT];
+	unsigned long events_prev[NR_MEMCG_EVENTS];
+
+	/* Cgroup1: threshold notifications & softlimit tree updates */
+	unsigned long nr_page_events;
+	unsigned long targets[MEM_CGROUP_NTARGETS];
+};
+
+struct memcg_vmstats {
+	/* Aggregated (CPU and subtree) page state & events */
+	long state[MEMCG_NR_STAT];
+	unsigned long events[NR_MEMCG_EVENTS];
+
+	/* Non-hierarchical (CPU aggregated) page state & events */
+	long state_local[MEMCG_NR_STAT];
+	unsigned long events_local[NR_MEMCG_EVENTS];
+
+	/* Pending child counts during tree propagation */
+	long state_pending[MEMCG_NR_STAT];
+	unsigned long events_pending[NR_MEMCG_EVENTS];
+};
+
 /*
  * memcg and lruvec stats flushing
  *
@@ -684,80 +758,6 @@ static void flush_memcg_stats_dwork(struct work_struct *w)
 	queue_delayed_work(system_unbound_wq, &stats_flush_dwork, FLUSH_TIME);
 }
 
-/* Subset of vm_event_item to report for memcg event stats */
-static const unsigned int memcg_vm_event_stat[] = {
-	PGPGIN,
-	PGPGOUT,
-	PGSCAN_KSWAPD,
-	PGSCAN_DIRECT,
-	PGSCAN_KHUGEPAGED,
-	PGSTEAL_KSWAPD,
-	PGSTEAL_DIRECT,
-	PGSTEAL_KHUGEPAGED,
-	PGFAULT,
-	PGMAJFAULT,
-	PGREFILL,
-	PGACTIVATE,
-	PGDEACTIVATE,
-	PGLAZYFREE,
-	PGLAZYFREED,
-#if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
-	ZSWPIN,
-	ZSWPOUT,
-	ZSWP_WB,
-#endif
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-	THP_FAULT_ALLOC,
-	THP_COLLAPSE_ALLOC,
-	THP_SWPOUT,
-	THP_SWPOUT_FALLBACK,
-#endif
-};
-
-#define NR_MEMCG_EVENTS ARRAY_SIZE(memcg_vm_event_stat)
-static int mem_cgroup_events_index[NR_VM_EVENT_ITEMS] __read_mostly;
-
-static void init_memcg_events(void)
-{
-	int i;
-
-	for (i = 0; i < NR_MEMCG_EVENTS; ++i)
-		mem_cgroup_events_index[memcg_vm_event_stat[i]] = i + 1;
-}
-
-static inline int memcg_events_index(enum vm_event_item idx)
-{
-	return mem_cgroup_events_index[idx] - 1;
-}
-
-struct memcg_vmstats_percpu {
-	/* Local (CPU and cgroup) page state & events */
-	long state[MEMCG_NR_STAT];
-	unsigned long events[NR_MEMCG_EVENTS];
-
-	/* Delta calculation for lockless upward propagation */
-	long state_prev[MEMCG_NR_STAT];
-	unsigned long events_prev[NR_MEMCG_EVENTS];
-
-	/* Cgroup1: threshold notifications & softlimit tree updates */
-	unsigned long nr_page_events;
-	unsigned long targets[MEM_CGROUP_NTARGETS];
-};
-
-struct memcg_vmstats {
-	/* Aggregated (CPU and subtree) page state & events */
-	long state[MEMCG_NR_STAT];
-	unsigned long events[NR_MEMCG_EVENTS];
-
-	/* Non-hierarchical (CPU aggregated) page state & events */
-	long state_local[MEMCG_NR_STAT];
-	unsigned long events_local[NR_MEMCG_EVENTS];
-
-	/* Pending child counts during tree propagation */
-	long state_pending[MEMCG_NR_STAT];
-	unsigned long events_pending[NR_MEMCG_EVENTS];
-};
-
 unsigned long memcg_page_state(struct mem_cgroup *memcg, int idx)
 {
 	long x = READ_ONCE(memcg->vmstats->state[idx]);
From patchwork Wed Nov 29 03:21:51 2023
X-Patchwork-Submitter: Yosry Ahmed
X-Patchwork-Id: 13472159
Date: Wed, 29 Nov 2023 03:21:51 +0000
In-Reply-To: <20231129032154.3710765-1-yosryahmed@google.com>
References: <20231129032154.3710765-1-yosryahmed@google.com>
Message-ID: <20231129032154.3710765-4-yosryahmed@google.com>
Subject: [mm-unstable v4 3/5] mm: memcg: make stats flushing threshold per-memcg
From: Yosry Ahmed
To: Andrew Morton
Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Shakeel Butt,
 Muchun Song, Ivan Babrou, Tejun Heo, Michal Koutný, Waiman Long,
 kernel-team@cloudflare.com, Wei Xu, Greg Thelen, Domenico Cerasuolo,
 linux-mm@kvack.org, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
 Yosry Ahmed

A global counter for the magnitude of memcg stats updates is maintained
on the memcg side to avoid invoking rstat flushes when the pending
updates are not significant. This avoids unnecessary flushes, which are
not very cheap even if there isn't a lot of stats to flush. It also
avoids unnecessary lock contention on the underlying global rstat lock.

Make this threshold per-memcg. The same scheme is used: percpu (now
also per-memcg) counters are incremented in the update path, and only
propagated to per-memcg atomics when they exceed a certain threshold (a
simplified model follows the cost list below).

This provides two benefits:
(a) On large machines with a lot of memcgs, the global threshold can be
reached relatively fast, so guarding the underlying lock becomes less
effective. Making the threshold per-memcg avoids this.

(b) Having a global threshold makes it hard to do subtree flushes, as
we cannot reset the global counter except for a full flush. Per-memcg
counters remove this as a blocker from doing subtree flushes, which
helps avoid unnecessary work when the stats of a small subtree are
needed.

Nothing is free, of course. This comes at a cost:
(a) A new per-cpu counter per memcg, consuming NR_CPUS * NR_MEMCGS * 4
bytes. The extra memory usage is insignificant.

(b) More work on the update side, although in the common case it will
only be percpu counter updates. The amount of work scales with the
number of ancestors (i.e. tree depth). This is not a new concept;
adding a cgroup to the rstat tree involves a parent loop, and so does
charging. Testing results below show no significant regressions.

(c) The error margin in the stats for the system as a whole increases
from NR_CPUS * MEMCG_CHARGE_BATCH to NR_CPUS * MEMCG_CHARGE_BATCH *
NR_MEMCGS. This is probably fine because we have a similar per-memcg
error in charges coming from percpu stocks, and the periodic flusher
makes sure we always flush all the stats every 2s anyway.
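As a simplified model of the scheme referenced above, here is a
minimal single-threaded userspace sketch (all names and the BATCH
value are stand-ins; the kernel uses per-CPU counters, an atomic64_t,
and MEMCG_CHARGE_BATCH, and additionally skips the atomic add once a
memcg is already flushable):

#include <stdlib.h>

#define BATCH 64	/* stand-in for MEMCG_CHARGE_BATCH */

struct memcg {
	struct memcg *parent;
	long percpu_pending;	/* models this CPU's stats_updates counter */
	long atomic_pending;	/* models the shared atomic64_t stats_updates */
};

/* Model of memcg_rstat_updated(): charge |val| to @memcg and all ancestors. */
static void rstat_updated(struct memcg *memcg, int val)
{
	for (; memcg; memcg = memcg->parent) {
		memcg->percpu_pending += labs(val);
		if (memcg->percpu_pending < BATCH)
			continue;	/* cheap, CPU-local fast path */
		/* Batch exceeded: fold into the shared (atomic) counter. */
		memcg->atomic_pending += memcg->percpu_pending;
		memcg->percpu_pending = 0;
	}
}

/* Model of memcg_should_flush_stats(). */
static int should_flush(const struct memcg *memcg, int nr_cpus)
{
	return memcg->atomic_pending > (long)BATCH * nr_cpus;
}

In the worst case, every CPU holds just under one full batch of
unflushed updates for every memcg, which is where the NR_CPUS *
MEMCG_CHARGE_BATCH * NR_MEMCGS system-wide error bound in cost (c)
comes from.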
This patch was tested to make sure no significant regressions are
introduced on the update path as follows. The following benchmarks were
run in a cgroup that is 2 levels deep (/sys/fs/cgroup/a/b/):

(1) Running 22 instances of netperf on a 44 cpu machine with
hyperthreading disabled. All instances are run in a level 2 cgroup, as
well as netserver:
  # netserver -6
  # netperf -6 -H ::1 -l 60 -t TCP_SENDFILE -- -m 10K

Averaging 20 runs, the numbers are as follows:
  Base:    40198.0 mbps
  Patched: 38629.7 mbps (-3.9%)

The regression is minimal, especially for 22 instances in the same
cgroup sharing all ancestors (so updating the same atomics).

(2) will-it-scale page_fault tests. These tests (specifically
per_process_ops in the page_fault3 test) previously detected a 25.9%
regression for a change in the stats update path [1]. These are the
numbers from 10 runs (+ is good) on a machine with 256 cpus:

  LABEL                       |     MEAN    |    MEDIAN   |   STDDEV    |
------------------------------+-------------+-------------+-------------+
  page_fault1_per_process_ops |             |             |             |
  (A) base                    |  270249.164 |  265437.000 |  13451.836  |
  (B) patched                 |  261368.709 |  255725.000 |  13394.767  |
                              |   -3.29%    |   -3.66%    |             |
  page_fault1_per_thread_ops  |             |             |             |
  (A) base                    |  242111.345 |  239737.000 |  10026.031  |
  (B) patched                 |  237057.109 |  235305.000 |   9769.687  |
                              |   -2.09%    |   -1.85%    |             |
  page_fault1_scalability     |             |             |             |
  (A) base                    |    0.034387 |    0.035168 |   0.0018283 |
  (B) patched                 |    0.033988 |    0.034573 |   0.0018056 |
                              |   -1.16%    |   -1.69%    |             |
  page_fault2_per_process_ops |             |             |             |
  (A) base                    |  203561.836 |  203301.000 |   2550.764  |
  (B) patched                 |  197195.945 |  197746.000 |   2264.263  |
                              |   -3.13%    |   -2.73%    |             |
  page_fault2_per_thread_ops  |             |             |             |
  (A) base                    |  171046.473 |  170776.000 |   1509.679  |
  (B) patched                 |  166626.327 |  166406.000 |    768.753  |
                              |   -2.58%    |   -2.56%    |             |
  page_fault2_scalability     |             |             |             |
  (A) base                    |    0.054026 |    0.053821 |  0.00062121 |
  (B) patched                 |    0.053329 |    0.05306  |  0.00048394 |
                              |   -1.29%    |   -1.41%    |             |
  page_fault3_per_process_ops |             |             |             |
  (A) base                    | 1295807.782 | 1297550.000 |   5907.585  |
  (B) patched                 | 1275579.873 | 1273359.000 |   8759.160  |
                              |   -1.56%    |   -1.86%    |             |
  page_fault3_per_thread_ops  |             |             |             |
  (A) base                    |  391234.164 |  390860.000 |   1760.720  |
  (B) patched                 |  377231.273 |  376369.000 |   1874.971  |
                              |   -3.58%    |   -3.71%    |             |
  page_fault3_scalability     |             |             |             |
  (A) base                    |    0.60369  |    0.60072  |   0.0083029 |
  (B) patched                 |    0.61733  |    0.61544  |   0.009855  |
                              |   +2.26%    |   +2.45%    |             |

All regressions seem to be minimal and within the normal variance for
the benchmark (the fix for [1] assumed that 3% is noise, and there were
no further practical complaints), so hopefully this means that such
variations in these microbenchmarks do not reflect on practical
workloads.

(3) I also ran stress-ng in a nested cgroup and did not observe any
obvious regressions.
[1] https://lore.kernel.org/all/20190520063534.GB19312@shao2-debian/

Suggested-by: Johannes Weiner
Signed-off-by: Yosry Ahmed
Tested-by: Domenico Cerasuolo
Acked-by: Shakeel Butt
---
 mm/memcontrol.c | 50 +++++++++++++++++++++++++++++++++----------------
 1 file changed, 34 insertions(+), 16 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index cf05b97c1e824..93b483b379aa1 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -631,6 +631,9 @@ struct memcg_vmstats_percpu {
 	/* Cgroup1: threshold notifications & softlimit tree updates */
 	unsigned long nr_page_events;
 	unsigned long targets[MEM_CGROUP_NTARGETS];
+
+	/* Stats updates since the last flush */
+	unsigned int stats_updates;
 };
 
 struct memcg_vmstats {
@@ -645,6 +648,9 @@ struct memcg_vmstats {
 	/* Pending child counts during tree propagation */
 	long state_pending[MEMCG_NR_STAT];
 	unsigned long events_pending[NR_MEMCG_EVENTS];
+
+	/* Stats updates since the last flush */
+	atomic64_t stats_updates;
 };
 
 /*
@@ -664,9 +670,7 @@ struct memcg_vmstats {
  */
 static void flush_memcg_stats_dwork(struct work_struct *w);
 static DECLARE_DEFERRABLE_WORK(stats_flush_dwork, flush_memcg_stats_dwork);
-static DEFINE_PER_CPU(unsigned int, stats_updates);
 static atomic_t stats_flush_ongoing = ATOMIC_INIT(0);
-static atomic_t stats_flush_threshold = ATOMIC_INIT(0);
 static u64 flush_last_time;
 
 #define FLUSH_TIME (2UL*HZ)
@@ -693,26 +697,37 @@ static void memcg_stats_unlock(void)
 	preempt_enable_nested();
 }
 
+static bool memcg_should_flush_stats(struct mem_cgroup *memcg)
+{
+	return atomic64_read(&memcg->vmstats->stats_updates) >
+		MEMCG_CHARGE_BATCH * num_online_cpus();
+}
+
 static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
 {
+	int cpu = smp_processor_id();
 	unsigned int x;
 
 	if (!val)
 		return;
 
-	cgroup_rstat_updated(memcg->css.cgroup, smp_processor_id());
+	cgroup_rstat_updated(memcg->css.cgroup, cpu);
+
+	for (; memcg; memcg = parent_mem_cgroup(memcg)) {
+		x = __this_cpu_add_return(memcg->vmstats_percpu->stats_updates,
+					  abs(val));
+
+		if (x < MEMCG_CHARGE_BATCH)
+			continue;
 
-	x = __this_cpu_add_return(stats_updates, abs(val));
-	if (x > MEMCG_CHARGE_BATCH) {
 		/*
-		 * If stats_flush_threshold exceeds the threshold
-		 * (>num_online_cpus()), cgroup stats update will be triggered
-		 * in __mem_cgroup_flush_stats(). Increasing this var further
-		 * is redundant and simply adds overhead in atomic update.
+		 * If @memcg is already flush-able, increasing stats_updates is
+		 * redundant. Avoid the overhead of the atomic update.
 		 */
-		if (atomic_read(&stats_flush_threshold) <= num_online_cpus())
-			atomic_add(x / MEMCG_CHARGE_BATCH, &stats_flush_threshold);
-		__this_cpu_write(stats_updates, 0);
+		if (!memcg_should_flush_stats(memcg))
+			atomic64_add(x, &memcg->vmstats->stats_updates);
+		__this_cpu_write(memcg->vmstats_percpu->stats_updates, 0);
 	}
 }
 
@@ -731,13 +746,12 @@ static void do_flush_stats(void)
 
 	cgroup_rstat_flush(root_mem_cgroup->css.cgroup);
 
-	atomic_set(&stats_flush_threshold, 0);
 	atomic_set(&stats_flush_ongoing, 0);
 }
 
 void mem_cgroup_flush_stats(void)
 {
-	if (atomic_read(&stats_flush_threshold) > num_online_cpus())
+	if (memcg_should_flush_stats(root_mem_cgroup))
 		do_flush_stats();
 }
 
@@ -751,8 +765,8 @@ void mem_cgroup_flush_stats_ratelimited(void)
 static void flush_memcg_stats_dwork(struct work_struct *w)
 {
 	/*
-	 * Always flush here so that flushing in latency-sensitive paths is
-	 * as cheap as possible.
+	 * Deliberately ignore memcg_should_flush_stats() here so that flushing
+	 * in latency-sensitive paths is as cheap as possible.
 	 */
 	do_flush_stats();
 	queue_delayed_work(system_unbound_wq, &stats_flush_dwork, FLUSH_TIME);
@@ -5809,6 +5823,10 @@ static void mem_cgroup_css_rstat_flush(struct cgroup_subsys_state *css, int cpu)
 			}
 		}
 	}
+	statc->stats_updates = 0;
+	/* We are in a per-cpu loop here, only do the atomic write once */
+	if (atomic64_read(&memcg->vmstats->stats_updates))
+		atomic64_set(&memcg->vmstats->stats_updates, 0);
 }
 
 #ifdef CONFIG_MMU

From patchwork Wed Nov 29 03:21:52 2023
X-Patchwork-Submitter: Yosry Ahmed
X-Patchwork-Id: 13472160
Date: Wed, 29 Nov 2023 03:21:52 +0000
In-Reply-To: <20231129032154.3710765-1-yosryahmed@google.com>
References: <20231129032154.3710765-1-yosryahmed@google.com>
Message-ID: <20231129032154.3710765-5-yosryahmed@google.com>
Subject: [mm-unstable v4 4/5] mm: workingset: move the stats flush into workingset_test_recent()
From: Yosry Ahmed
To: Andrew Morton
Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Shakeel Butt,
 Muchun Song, Ivan Babrou, Tejun Heo, Michal Koutný, Waiman Long,
 kernel-team@cloudflare.com, Wei Xu, Greg Thelen, Domenico Cerasuolo,
 linux-mm@kvack.org, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
 Yosry Ahmed
The workingset code flushes the stats in workingset_refault() to get
accurate stats of the eviction memcg. In preparation for more scoped
flushing and passing the eviction memcg to the flush call, move the
call to workingset_test_recent(), where we have a pointer to the
eviction memcg.

The flush call is sleepable and cannot be made in an rcu read section.
Hence, minimize the rcu read section by also moving it into
workingset_test_recent(). Furthermore, instead of holding the rcu read
lock throughout workingset_test_recent(), only hold it briefly to get a
ref on the eviction memcg. This allows us to make the flush call after
we get the eviction memcg.

As for workingset_refault(), nothing else there appears to be protected
by rcu. The memcg of the faulted folio (which is not necessarily the
same as the eviction memcg) is protected by the folio lock, which is
held from all callsites. Add a VM_BUG_ON() to make sure this doesn't
change from under us.

No functional change intended.
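The locking change follows the common RCU lookup-then-pin pattern. As a
brief kernel-style sketch of the resulting shape (illustrative only;
lookup_eviction_memcg() is a hypothetical stand-in for the
mem_cgroup_from_id() lookup and its surrounding checks):

static bool test_recent_sketch(int memcgid)
{
	struct mem_cgroup *memcg;

	rcu_read_lock();
	memcg = lookup_eviction_memcg(memcgid);	/* RCU-protected lookup */
	if (!memcg || !mem_cgroup_tryget(memcg)) {
		rcu_read_unlock();
		return false;
	}
	rcu_read_unlock();			/* memcg is now pinned */

	mem_cgroup_flush_stats_ratelimited();	/* may sleep; safe here */

	/* ... compute the refault distance using memcg ... */

	mem_cgroup_put(memcg);			/* drop the pin */
	return true;
}

The reference taken under the lock is what makes it safe to drop the
rcu read lock before the sleepable flush call.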
Signed-off-by: Yosry Ahmed
Tested-by: Domenico Cerasuolo
Acked-by: Shakeel Butt
---
 mm/workingset.c | 36 ++++++++++++++++++++++++------------
 1 file changed, 24 insertions(+), 12 deletions(-)

diff --git a/mm/workingset.c b/mm/workingset.c
index c17d45c6f29b0..dce41577a49d2 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -425,8 +425,16 @@ bool workingset_test_recent(void *shadow, bool file, bool *workingset)
 	struct pglist_data *pgdat;
 	unsigned long eviction;
 
-	if (lru_gen_enabled())
-		return lru_gen_test_recent(shadow, file, &eviction_lruvec, &eviction, workingset);
+	rcu_read_lock();
+
+	if (lru_gen_enabled()) {
+		bool recent = lru_gen_test_recent(shadow, file,
+				&eviction_lruvec, &eviction, workingset);
+
+		rcu_read_unlock();
+		return recent;
+	}
+
 	unpack_shadow(shadow, &memcgid, &pgdat, &eviction, workingset);
 	eviction <<= bucket_order;
 
@@ -448,8 +456,16 @@ bool workingset_test_recent(void *shadow, bool file, bool *workingset)
 	 * configurations instead.
 	 */
 	eviction_memcg = mem_cgroup_from_id(memcgid);
-	if (!mem_cgroup_disabled() && !eviction_memcg)
+	if (!mem_cgroup_disabled() &&
+	    (!eviction_memcg || !mem_cgroup_tryget(eviction_memcg))) {
+		rcu_read_unlock();
 		return false;
+	}
+
+	rcu_read_unlock();
+
+	/* Flush stats (and potentially sleep) outside the RCU read section */
+	mem_cgroup_flush_stats_ratelimited();
 
 	eviction_lruvec = mem_cgroup_lruvec(eviction_memcg, pgdat);
 	refault = atomic_long_read(&eviction_lruvec->nonresident_age);
@@ -493,6 +509,7 @@ bool workingset_test_recent(void *shadow, bool file, bool *workingset)
 		}
 	}
 
+	mem_cgroup_put(eviction_memcg);
 	return refault_distance <= workingset_size;
 }
 
@@ -519,19 +536,16 @@ void workingset_refault(struct folio *folio, void *shadow)
 		return;
 	}
 
-	/* Flush stats (and potentially sleep) before holding RCU read lock */
-	mem_cgroup_flush_stats_ratelimited();
-
-	rcu_read_lock();
-
 	/*
 	 * The activation decision for this folio is made at the level
 	 * where the eviction occurred, as that is where the LRU order
 	 * during folio reclaim is being determined.
 	 *
 	 * However, the cgroup that will own the folio is the one that
-	 * is actually experiencing the refault event.
+	 * is actually experiencing the refault event. Make sure the folio is
+	 * locked to guarantee folio_memcg() stability throughout.
 	 */
+	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
 	nr = folio_nr_pages(folio);
 	memcg = folio_memcg(folio);
 	pgdat = folio_pgdat(folio);
@@ -540,7 +554,7 @@ void workingset_refault(struct folio *folio, void *shadow)
 	mod_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + file, nr);
 
 	if (!workingset_test_recent(shadow, file, &workingset))
-		goto out;
+		return;
 
 	folio_set_active(folio);
 	workingset_age_nonresident(lruvec, nr);
@@ -556,8 +570,6 @@ void workingset_refault(struct folio *folio, void *shadow)
 		lru_note_cost_refault(folio);
 		mod_lruvec_state(lruvec, WORKINGSET_RESTORE_BASE + file, nr);
 	}
-out:
-	rcu_read_unlock();
 }
 
 /**

From patchwork Wed Nov 29 03:21:53 2023
X-Patchwork-Submitter: Yosry Ahmed
X-Patchwork-Id: 13472161
Date: Wed, 29 Nov 2023 03:21:53 +0000
In-Reply-To: <20231129032154.3710765-1-yosryahmed@google.com>
References: <20231129032154.3710765-1-yosryahmed@google.com>
Message-ID: <20231129032154.3710765-6-yosryahmed@google.com>
Subject: [mm-unstable v4 5/5] mm: memcg: restore subtree stats flushing
From: Yosry Ahmed
To: Andrew Morton
Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Shakeel Butt,
 Muchun Song, Ivan Babrou, Tejun Heo, Michal Koutný, Waiman Long,
 kernel-team@cloudflare.com, Wei Xu, Greg Thelen, Domenico Cerasuolo,
 linux-mm@kvack.org, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
 Yosry Ahmed
Stats flushing for memcg currently follows these rules:
- Always flush the entire memcg hierarchy (i.e. flush the root).
- Only one flusher is allowed at a time. If someone else tries to flush
  concurrently, they skip and return immediately.
- A periodic flusher flushes all the stats every 2 seconds.

This approach is used because all flushes are serialized by a global
rstat spinlock. On the memcg side, flushing is invoked from userspace
reads as well as in-kernel flushers (e.g. reclaim, refault, etc). This
approach aims to avoid serializing all flushers on the global lock,
which can cause a significant performance hit under high concurrency.

This approach has the following problems:
- Occasionally a userspace read of the stats of a non-root cgroup will
  be too expensive as it has to flush the entire hierarchy [1].
- Sometimes the stats accuracy is compromised if there is an ongoing
  flush, and we skip and return before the subtree of interest is
  actually flushed, yielding stale stats (by up to 2s due to periodic
  flushing). This is more visible when reading stats from userspace,
  but can also affect in-kernel flushers.

The latter problem is particularly a concern when userspace reads stats
after an event occurs, but gets stats from before the event. Examples:
- When memory usage / pressure spikes, a userspace OOM handler may look
  at the stats of different memcgs to select a victim based on various
  heuristics (e.g. how much private memory will be freed by killing
  this). Reading stale stats from before the usage spike in this case
  may cause a wrongful OOM kill.
- A proactive reclaimer may read the stats after writing to
  memory.reclaim to measure the success of the reclaim operation. Stale
  stats from before reclaim may give a false negative.
- Reading the stats of a parent and a child memcg may be inconsistent
  (child larger than parent) if the flush doesn't happen when the
  parent is read, but happens when the child is read.

As for in-kernel flushers, they will occasionally get stale stats. No
regressions are currently known from this, but if there are
regressions, they would be very difficult to debug and link to the
source of the problem.

This patch aims to fix these problems by restoring subtree flushing,
and removing the unified/coalesced flushing logic that skips flushing
if there is an ongoing flush. On its own, this change would introduce a
significant regression with global stats flushing thresholds; with
per-memcg stats flushing thresholds, it performs well. The thresholds
protect the underlying lock from unnecessary contention.

Add a mutex to protect the underlying rstat lock from excessive memcg
flushing. The thresholds are re-checked after the mutex is grabbed to
make sure that a concurrent flush did not already get the subtree we
are trying to flush (see the sketch after the test results below). A
call to cgroup_rstat_flush() is not cheap, even if there are no pending
updates.

This patch was tested in two ways to ensure the latency of flushing is
up to par, on a machine with 384 cpus:
- A synthetic test with 5000 concurrent workers in 500 cgroups doing
  allocations and reclaim, as well as 1000 readers for memory.stat
  (variation of [2]). No regressions were noticed in the total runtime.
  Note that significant regressions in this test are observed with
  global stats thresholds, but not with per-memcg thresholds.
- A synthetic stress test for concurrently reading memcg stats while
  memory allocation/freeing workers are running in the background,
  provided by Wei Xu [3]. With 250k threads reading the stats every
  100ms in 50k cgroups, 99.9% of reads take <= 50us. Less than 0.01% of
  reads take more than 1ms, and no reads take more than 100ms.
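To make the locking scheme concrete, here is a minimal userspace C
sketch of the double-checked flush pattern this patch adopts. It is an
illustration only, not the kernel code: the names (subtree_stats,
should_flush, expensive_flush, flush_stats), the pthread mutex, and the
fixed FLUSH_THRESHOLD are all assumptions standing in for the real
memcg_should_flush_stats() / do_flush_stats() machinery shown in the
diff below.

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical per-subtree state; the kernel tracks a per-memcg
 * update counter instead. */
struct subtree_stats {
	atomic_long pending_updates;	/* stat deltas since last flush */
};

#define FLUSH_THRESHOLD 1024	/* illustrative; not the kernel's value */

static pthread_mutex_t flush_mutex = PTHREAD_MUTEX_INITIALIZER;

static bool should_flush(struct subtree_stats *s)
{
	return atomic_load(&s->pending_updates) > FLUSH_THRESHOLD;
}

/* Expensive serialized work, analogous to cgroup_rstat_flush(). */
static void expensive_flush(struct subtree_stats *s)
{
	/* ... aggregate per-CPU deltas into subtree totals ... */
	atomic_store(&s->pending_updates, 0);
}

/* Cheap threshold test first, then a re-check under the mutex, so a
 * flush that raced ahead of us turns this call into a no-op. */
static void flush_stats(struct subtree_stats *s)
{
	if (!should_flush(s))
		return;
	pthread_mutex_lock(&flush_mutex);
	if (should_flush(s))
		expensive_flush(s);
	pthread_mutex_unlock(&flush_mutex);
}

Unlike the removed skip-if-ongoing scheme, a caller here never returns
while its subtree still has a meaningful backlog: it either finds the
updates already flushed once it acquires the mutex, or it performs the
flush itself.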
[1] https://lore.kernel.org/lkml/CABWYdi0c6__rh-K7dcM_pkf9BJdTRtAU08M43KO9ME4-dsgfoQ@mail.gmail.com/
[2] https://lore.kernel.org/lkml/CAJD7tka13M-zVZTyQJYL1iUAYvuQ1fcHbCjcOBZcz6POYTV-4g@mail.gmail.com/
[3] https://lore.kernel.org/lkml/CAAPL-u9D2b=iF5Lf_cRnKxUfkiEe0AMDTu6yhrUAzX0b6a6rDg@mail.gmail.com/

Signed-off-by: Yosry Ahmed
Tested-by: Domenico Cerasuolo
Acked-by: Shakeel Butt
---
 include/linux/memcontrol.h |  8 ++--
 mm/memcontrol.c            | 75 +++++++++++++++++++++++---------------
 mm/vmscan.c                |  2 +-
 mm/workingset.c            | 10 +++--
 4 files changed, 58 insertions(+), 37 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index a568f70a26774..8673140683e6e 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1050,8 +1050,8 @@ static inline unsigned long lruvec_page_state_local(struct lruvec *lruvec,
 	return x;
 }
 
-void mem_cgroup_flush_stats(void);
-void mem_cgroup_flush_stats_ratelimited(void);
+void mem_cgroup_flush_stats(struct mem_cgroup *memcg);
+void mem_cgroup_flush_stats_ratelimited(struct mem_cgroup *memcg);
 
 void __mod_memcg_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
 			      int val);
@@ -1566,11 +1566,11 @@ static inline unsigned long lruvec_page_state_local(struct lruvec *lruvec,
 	return node_page_state(lruvec_pgdat(lruvec), idx);
 }
 
-static inline void mem_cgroup_flush_stats(void)
+static inline void mem_cgroup_flush_stats(struct mem_cgroup *memcg)
 {
 }
 
-static inline void mem_cgroup_flush_stats_ratelimited(void)
+static inline void mem_cgroup_flush_stats_ratelimited(struct mem_cgroup *memcg)
 {
 }
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 93b483b379aa1..5d300318bf18a 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -670,7 +670,6 @@ struct memcg_vmstats {
  */
 static void flush_memcg_stats_dwork(struct work_struct *w);
 static DECLARE_DEFERRABLE_WORK(stats_flush_dwork, flush_memcg_stats_dwork);
-static atomic_t stats_flush_ongoing = ATOMIC_INIT(0);
 static u64 flush_last_time;
 
 #define FLUSH_TIME (2UL*HZ)
@@ -731,35 +730,47 @@ static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
 	}
 }
 
-static void do_flush_stats(void)
+static void do_flush_stats(struct mem_cgroup *memcg)
 {
-	/*
-	 * We always flush the entire tree, so concurrent flushers can just
-	 * skip. This avoids a thundering herd problem on the rstat global lock
-	 * from memcg flushers (e.g. reclaim, refault, etc).
-	 */
-	if (atomic_read(&stats_flush_ongoing) ||
-	    atomic_xchg(&stats_flush_ongoing, 1))
-		return;
-
-	WRITE_ONCE(flush_last_time, jiffies_64);
-
-	cgroup_rstat_flush(root_mem_cgroup->css.cgroup);
+	if (mem_cgroup_is_root(memcg))
+		WRITE_ONCE(flush_last_time, jiffies_64);
 
-	atomic_set(&stats_flush_ongoing, 0);
+	cgroup_rstat_flush(memcg->css.cgroup);
 }
 
-void mem_cgroup_flush_stats(void)
+/*
+ * mem_cgroup_flush_stats - flush the stats of a memory cgroup subtree
+ * @memcg: root of the subtree to flush
+ *
+ * Flushing is serialized by the underlying global rstat lock. There is also a
+ * minimum amount of work to be done even if there are no stat updates to flush.
+ * Hence, we only flush the stats if the updates delta exceeds a threshold. This
+ * avoids unnecessary work and contention on the underlying lock.
+ */
+void mem_cgroup_flush_stats(struct mem_cgroup *memcg)
 {
-	if (memcg_should_flush_stats(root_mem_cgroup))
-		do_flush_stats();
+	static DEFINE_MUTEX(memcg_stats_flush_mutex);
+
+	if (mem_cgroup_disabled())
+		return;
+
+	if (!memcg)
+		memcg = root_mem_cgroup;
+
+	if (memcg_should_flush_stats(memcg)) {
+		mutex_lock(&memcg_stats_flush_mutex);
+		/* Check again after locking, another flush may have occurred */
+		if (memcg_should_flush_stats(memcg))
+			do_flush_stats(memcg);
+		mutex_unlock(&memcg_stats_flush_mutex);
+	}
 }
 
-void mem_cgroup_flush_stats_ratelimited(void)
+void mem_cgroup_flush_stats_ratelimited(struct mem_cgroup *memcg)
 {
 	/* Only flush if the periodic flusher is one full cycle late */
 	if (time_after64(jiffies_64, READ_ONCE(flush_last_time) + 2*FLUSH_TIME))
-		mem_cgroup_flush_stats();
+		mem_cgroup_flush_stats(memcg);
 }
 
 static void flush_memcg_stats_dwork(struct work_struct *w)
@@ -768,7 +779,7 @@ static void flush_memcg_stats_dwork(struct work_struct *w)
 	 * Deliberately ignore memcg_should_flush_stats() here so that flushing
 	 * in latency-sensitive paths is as cheap as possible.
 	 */
-	do_flush_stats();
+	do_flush_stats(root_mem_cgroup);
 	queue_delayed_work(system_unbound_wq, &stats_flush_dwork, FLUSH_TIME);
 }
 
@@ -1664,7 +1675,7 @@ static void memcg_stat_format(struct mem_cgroup *memcg, struct seq_buf *s)
 	 *
 	 * Current memory state:
 	 */
-	mem_cgroup_flush_stats();
+	mem_cgroup_flush_stats(memcg);
 
 	for (i = 0; i < ARRAY_SIZE(memory_stats); i++) {
 		u64 size;
@@ -4214,7 +4225,7 @@ static int memcg_numa_stat_show(struct seq_file *m, void *v)
 	int nid;
 	struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
 
-	mem_cgroup_flush_stats();
+	mem_cgroup_flush_stats(memcg);
 
 	for (stat = stats; stat < stats + ARRAY_SIZE(stats); stat++) {
 		seq_printf(m, "%s=%lu", stat->name,
@@ -4295,7 +4306,7 @@ static void memcg1_stat_format(struct mem_cgroup *memcg, struct seq_buf *s)
 
 	BUILD_BUG_ON(ARRAY_SIZE(memcg1_stat_names) != ARRAY_SIZE(memcg1_stats));
 
-	mem_cgroup_flush_stats();
+	mem_cgroup_flush_stats(memcg);
 
 	for (i = 0; i < ARRAY_SIZE(memcg1_stats); i++) {
 		unsigned long nr;
@@ -4791,7 +4802,7 @@ void mem_cgroup_wb_stats(struct bdi_writeback *wb, unsigned long *pfilepages,
 	struct mem_cgroup *memcg = mem_cgroup_from_css(wb->memcg_css);
 	struct mem_cgroup *parent;
 
-	mem_cgroup_flush_stats();
+	mem_cgroup_flush_stats(memcg);
 
 	*pdirty = memcg_page_state(memcg, NR_FILE_DIRTY);
 	*pwriteback = memcg_page_state(memcg, NR_WRITEBACK);
@@ -6886,7 +6897,7 @@ static int memory_numa_stat_show(struct seq_file *m, void *v)
 	int i;
 	struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
 
-	mem_cgroup_flush_stats();
+	mem_cgroup_flush_stats(memcg);
 
 	for (i = 0; i < ARRAY_SIZE(memory_stats); i++) {
 		int nid;
@@ -8125,7 +8136,11 @@ bool obj_cgroup_may_zswap(struct obj_cgroup *objcg)
 			break;
 		}
 
-		cgroup_rstat_flush(memcg->css.cgroup);
+		/*
+		 * mem_cgroup_flush_stats() ignores small changes. Use
+		 * do_flush_stats() directly to get accurate stats for charging.
+		 */
+		do_flush_stats(memcg);
 		pages = memcg_page_state(memcg, MEMCG_ZSWAP_B) / PAGE_SIZE;
 		if (pages < max)
 			continue;
@@ -8190,8 +8205,10 @@ void obj_cgroup_uncharge_zswap(struct obj_cgroup *objcg, size_t size)
 static u64 zswap_current_read(struct cgroup_subsys_state *css,
 			      struct cftype *cft)
 {
-	cgroup_rstat_flush(css->cgroup);
-	return memcg_page_state(mem_cgroup_from_css(css), MEMCG_ZSWAP_B);
+	struct mem_cgroup *memcg = mem_cgroup_from_css(css);
+
+	mem_cgroup_flush_stats(memcg);
+	return memcg_page_state(memcg, MEMCG_ZSWAP_B);
 }
 
 static int zswap_max_show(struct seq_file *m, void *v)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index d8c3338fee0fb..0b8a0107d58d8 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2250,7 +2250,7 @@ static void prepare_scan_control(pg_data_t *pgdat, struct scan_control *sc)
 	 * Flush the memory cgroup stats, so that we read accurate per-memcg
 	 * lruvec stats for heuristics.
 	 */
-	mem_cgroup_flush_stats();
+	mem_cgroup_flush_stats(sc->target_mem_cgroup);
 
 	/*
 	 * Determine the scan balance between anon and file LRUs.
diff --git a/mm/workingset.c b/mm/workingset.c
index dce41577a49d2..7d3dacab8451a 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -464,8 +464,12 @@ bool workingset_test_recent(void *shadow, bool file, bool *workingset)
 
 	rcu_read_unlock();
 
-	/* Flush stats (and potentially sleep) outside the RCU read section */
-	mem_cgroup_flush_stats_ratelimited();
+	/*
+	 * Flush stats (and potentially sleep) outside the RCU read section.
+	 * XXX: With per-memcg flushing and thresholding, is ratelimiting
+	 * still needed here?
+	 */
+	mem_cgroup_flush_stats_ratelimited(eviction_memcg);
 
 	eviction_lruvec = mem_cgroup_lruvec(eviction_memcg, pgdat);
 	refault = atomic_long_read(&eviction_lruvec->nonresident_age);
@@ -676,7 +680,7 @@ static unsigned long count_shadow_nodes(struct shrinker *shrinker,
 	struct lruvec *lruvec;
 	int i;
 
-	mem_cgroup_flush_stats();
+	mem_cgroup_flush_stats(sc->memcg);
 
 	lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid));
 	for (pages = 0, i = 0; i < NR_LRU_LISTS; i++)
 		pages += lruvec_page_state_local(lruvec,