From patchwork Thu Apr 10 02:57:52 2025
X-Patchwork-Submitter: Shakeel Butt
X-Patchwork-Id: 14045824
From: Shakeel Butt
To: Andrew Morton
Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Muchun Song,
	Yosry Ahmed, Waiman Long, linux-mm@kvack.org,
	cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
	Meta kernel team
Subject: [PATCH v2] memcg: optimize memcg_rstat_updated
Date: Wed, 9 Apr 2025 19:57:52 -0700
Message-ID: <20250410025752.92159-1-shakeel.butt@linux.dev>

Currently the kernel maintains per-memcg counters of stats updates, which
are needed to implement the stats flushing threshold. On the update side,
the update is added to the per-cpu per-memcg counter of the given memcg
and all of its ancestors. However, if the given memcg has already passed
the flushing threshold, all of its ancestors must have passed it as well,
because every update charged to a memcg is also charged to each of its
ancestors. In that case there is no need to traverse further up the memcg
tree.

Perf profiles collected from our fleet show that memcg_rstat_updated() is
one of the most expensive memcg functions, i.e. a lot of cumulative CPU
time is spent in it, so even small micro-optimizations matter. This patch
was microbenchmarked with multiple instances of netperf on a single
machine with a locally running netserver, showing a couple of percentage
points of improvement.
Signed-off-by: Shakeel Butt
Acked-by: Roman Gushchin
---
Changes since v1:
- Fix the condition (Longman)
- Ran netperf

 mm/memcontrol.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 421740f1bcdc..3035c1595b32 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -585,18 +585,20 @@ static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
 	cgroup_rstat_updated(memcg->css.cgroup, cpu);
 	statc = this_cpu_ptr(memcg->vmstats_percpu);
 	for (; statc; statc = statc->parent) {
+		/*
+		 * If @memcg is already flushable then all its ancestors are
+		 * flushable as well and also there is no need to increase
+		 * stats_updates.
+		 */
+		if (memcg_vmstats_needs_flush(statc->vmstats))
+			break;
+
 		stats_updates = READ_ONCE(statc->stats_updates) + abs(val);
 		WRITE_ONCE(statc->stats_updates, stats_updates);
 		if (stats_updates < MEMCG_CHARGE_BATCH)
 			continue;
 
-		/*
-		 * If @memcg is already flush-able, increasing stats_updates is
-		 * redundant. Avoid the overhead of the atomic update.
-		 */
-		if (!memcg_vmstats_needs_flush(statc->vmstats))
-			atomic64_add(stats_updates,
-				     &statc->vmstats->stats_updates);
+		atomic64_add(stats_updates, &statc->vmstats->stats_updates);
 		WRITE_ONCE(statc->stats_updates, 0);
 	}
 }
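
For illustration, below is a minimal user-space sketch of the invariant the
early break relies on: every update charged to a node is also charged to all
of its ancestors, so an ancestor's pending total is always at least any
descendant's, and the walk can stop at the first flushable level. The struct
layout, the needs_flush() helper, and the threshold handling here are
simplified stand-ins (the kernel's memcg_vmstats_needs_flush() scales the
batch by the number of online CPUs, and the real counters are per-cpu and
atomic); this is not the kernel implementation.

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define MEMCG_CHARGE_BATCH 64	/* same constant name as the kernel */

/* stand-in for struct memcg_vmstats_percpu + struct memcg_vmstats */
struct statc {
	struct statc *parent;
	long stats_updates;	/* pending updates, below the batch size */
	long flushed_updates;	/* stands in for vmstats->stats_updates */
};

static bool needs_flush(struct statc *s)
{
	return s->flushed_updates > MEMCG_CHARGE_BATCH;
}

static void rstat_updated(struct statc *statc, int val)
{
	for (; statc; statc = statc->parent) {
		/*
		 * If this level is already flushable, every ancestor is
		 * flushable too, since it saw at least the same updates.
		 */
		if (needs_flush(statc))
			break;

		statc->stats_updates += abs(val);
		if (statc->stats_updates < MEMCG_CHARGE_BATCH)
			continue;

		statc->flushed_updates += statc->stats_updates;
		statc->stats_updates = 0;
	}
}

int main(void)
{
	struct statc root = { 0 }, child = { .parent = &root };

	for (int i = 0; i < 200; i++)
		rstat_updated(&child, 1);

	/* Once the child is flushable, the root is necessarily too. */
	printf("child flushable: %d, root flushable: %d\n",
	       needs_flush(&child), needs_flush(&root));
	return 0;
}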