From patchwork Thu Nov 18 06:53:50 2021
X-Patchwork-Submitter: Shakeel Butt
X-Patchwork-Id: 12626059
Date: Wed, 17 Nov 2021 22:53:50 -0800
Message-Id: <20211118065350.697046-1-shakeelb@google.com>
Mime-Version: 1.0
X-Mailer: git-send-email 2.34.0.rc1.387.gb447b232ab-goog
Subject: [PATCH] memcg: better bounds on the memcg stats updates
From: Shakeel Butt
To: Johannes Weiner, Michal Hocko, Michal Koutný
Cc: Andrew Morton, cgroups@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Shakeel Butt

Commit 11192d9c124d ("memcg: flush stats only if updated") added
tracking of memcg stats updates, which readers use to flush the stats
only once the number of updates crosses a certain threshold. However,
each individual update can correspond to a large change in the value
of a given stat. For example, adding or removing a hugepage from an
LRU changes the stat by thp_nr_pages (512 on x86_64). Treating such a
THP update as a single event can, in theory, leave the stat off by up
to (thp_nr_pages * nr_cpus * MEMCG_CHARGE_BATCH) before a flush
happens.

To handle such scenarios, make the update tracking take the magnitude
of the stat change into account instead of just counting update
events. In addition, let the async flusher flush the stats
unconditionally, which puts a time bound on the stats skew and should
leave far fewer readers needing to flush themselves.

Signed-off-by: Shakeel Butt
---
 mm/memcontrol.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)
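Note for reviewers: the standalone userspace sketch below is not
kernel code. The per-CPU counter and the global atomic are modeled
with a plain array and a plain int for a single-threaded demo, and the
*_model names are made up. It mirrors the accounting that the new
memcg_rstat_updated() performs and shows why magnitude matters: 31
single-page updates earn no flush credit, while one THP-sized update
of 512 pages immediately earns 512 / MEMCG_CHARGE_BATCH batches of
credit, where the old scheme would have counted it as just one event.

#include <stdio.h>
#include <stdlib.h>

#define MEMCG_CHARGE_BATCH 32	/* same batch size the kernel uses */
#define NR_CPUS 4

static unsigned int stats_updates[NR_CPUS];	/* models the per-CPU counter */
static int stats_flush_threshold;		/* models the global atomic */

/* New scheme: accumulate the magnitude of each update, not a count. */
static void rstat_updated_model(int cpu, int val)
{
	unsigned int x = stats_updates[cpu] + abs(val);

	if (x > MEMCG_CHARGE_BATCH) {
		/* Convert whole batches into flush-threshold credit. */
		stats_flush_threshold += x / MEMCG_CHARGE_BATCH;
		x = 0;
	}
	stats_updates[cpu] = x;
}

int main(void)
{
	int i;

	/* 31 single-page updates: still below one batch, no credit. */
	for (i = 0; i < 31; i++)
		rstat_updated_model(0, 1);
	printf("after 31 x +1 : credit = %d\n", stats_flush_threshold);

	/* One THP-sized update (512 pages) adds 16 batches at once. */
	rstat_updated_model(1, 512);
	printf("after 1 x +512: credit = %d\n", stats_flush_threshold);
	return 0;
}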
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 781605e92015..a8f07540d4b8 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -629,11 +629,17 @@ static DEFINE_SPINLOCK(stats_flush_lock);
 static DEFINE_PER_CPU(unsigned int, stats_updates);
 static atomic_t stats_flush_threshold = ATOMIC_INIT(0);
 
-static inline void memcg_rstat_updated(struct mem_cgroup *memcg)
+static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
 {
+	unsigned int x;
+
 	cgroup_rstat_updated(memcg->css.cgroup, smp_processor_id());
-	if (!(__this_cpu_inc_return(stats_updates) % MEMCG_CHARGE_BATCH))
-		atomic_inc(&stats_flush_threshold);
+
+	x = __this_cpu_add_return(stats_updates, abs(val));
+	if (x > MEMCG_CHARGE_BATCH) {
+		atomic_add(x / MEMCG_CHARGE_BATCH, &stats_flush_threshold);
+		__this_cpu_write(stats_updates, 0);
+	}
 }
 
 static void __mem_cgroup_flush_stats(void)
@@ -656,7 +662,7 @@ void mem_cgroup_flush_stats(void)
 
 static void flush_memcg_stats_dwork(struct work_struct *w)
 {
-	mem_cgroup_flush_stats();
+	__mem_cgroup_flush_stats();
 	queue_delayed_work(system_unbound_wq, &stats_flush_dwork, 2UL*HZ);
 }
 
@@ -672,7 +678,7 @@ void __mod_memcg_state(struct mem_cgroup *memcg, int idx, int val)
 		return;
 
 	__this_cpu_add(memcg->vmstats_percpu->state[idx], val);
-	memcg_rstat_updated(memcg);
+	memcg_rstat_updated(memcg, val);
 }
 
 /* idx can be of type enum memcg_stat_item or node_stat_item. */
@@ -705,7 +711,7 @@ void __mod_memcg_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
 	/* Update lruvec */
 	__this_cpu_add(pn->lruvec_stats_percpu->state[idx], val);
 
-	memcg_rstat_updated(memcg);
+	memcg_rstat_updated(memcg, val);
 }
 
 /**
@@ -807,7 +813,7 @@ void __count_memcg_events(struct mem_cgroup *memcg, enum vm_event_item idx,
 		return;
 
 	__this_cpu_add(memcg->vmstats_percpu->events[idx], count);
-	memcg_rstat_updated(memcg);
+	memcg_rstat_updated(memcg, count);
 }
 
 static unsigned long memcg_events(struct mem_cgroup *memcg, int event)
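
For concreteness, here are the worked numbers behind the worst-case
bound in the commit message, again as standalone userspace C (the
64-CPU machine is only an example, and a 4 KiB page size is assumed):

#include <stdio.h>

int main(void)
{
	long thp_nr_pages = 512;	/* THP size in pages on x86_64 */
	long nr_cpus = 64;		/* hypothetical example machine */
	long batch = 32;		/* MEMCG_CHARGE_BATCH */

	/* Old scheme: each THP update counted as one event, so the
	 * unflushed skew could reach 512 * 64 * 32 pages = 4 GiB. */
	long old_skew = thp_nr_pages * nr_cpus * batch;

	/* New scheme: skew is tracked in pages, so it stays around
	 * nr_cpus * batch pages (ignoring per-CPU remainders). */
	long new_skew = nr_cpus * batch;

	printf("old: %ld pages (%ld MiB)\n", old_skew, old_skew * 4 / 1024);
	printf("new: %ld pages (%ld MiB)\n", new_skew, new_skew * 4 / 1024);
	return 0;
}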