From patchwork Wed Jul 24 20:20:59 2024
X-Patchwork-Submitter: Roman Gushchin <roman.gushchin@linux.dev>
X-Patchwork-Id: 13741325
From: Roman Gushchin <roman.gushchin@linux.dev>
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 Johannes Weiner, Michal Hocko, Shakeel Butt, Muchun Song,
 Roman Gushchin
Subject: [PATCH v2 1/5] mm: memcg: don't call propagate_protected_usage() needlessly
Date: Wed, 24 Jul 2024 20:20:59 +0000
Message-ID: <20240724202103.1210065-2-roman.gushchin@linux.dev>
In-Reply-To: <20240724202103.1210065-1-roman.gushchin@linux.dev>
References: <20240724202103.1210065-1-roman.gushchin@linux.dev>
MIME-Version: 1.0
Memory protection (min/low) requires constant tracking of protected
memory usage. propagate_protected_usage() is called on each page counter
update and performs a number of operations even in cases when the actual
memory protection functionality is not supported (e.g. hugetlb cgroups
or memcg swap counters).

It's obviously inefficient and leads to a waste of CPU cycles. It can be
addressed by calling propagate_protected_usage() only for the counters
which do support memory guarantees. As of now it's only memcg->memory -
the unified memory memcg counter.
Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev>
Acked-by: Shakeel Butt
---
 include/linux/page_counter.h |  8 +++++++-
 mm/hugetlb_cgroup.c          |  4 ++--
 mm/memcontrol.c              | 16 ++++++++--------
 mm/page_counter.c            | 16 +++++++++++++---
 4 files changed, 30 insertions(+), 14 deletions(-)

diff --git a/include/linux/page_counter.h b/include/linux/page_counter.h
index 860f313182e7..b31fd5b208aa 100644
--- a/include/linux/page_counter.h
+++ b/include/linux/page_counter.h
@@ -32,6 +32,7 @@ struct page_counter {
 	/* Keep all the read most fields in a separete cacheline. */
 	CACHELINE_PADDING(_pad2_);
 
+	bool protection_support;
 	unsigned long min;
 	unsigned long low;
 	unsigned long high;
@@ -45,12 +46,17 @@ struct page_counter {
 #define PAGE_COUNTER_MAX (LONG_MAX / PAGE_SIZE)
 #endif
 
+/*
+ * Protection is supported only for the first counter (with id 0).
+ */
 static inline void page_counter_init(struct page_counter *counter,
-				     struct page_counter *parent)
+				     struct page_counter *parent,
+				     bool protection_support)
 {
 	atomic_long_set(&counter->usage, 0);
 	counter->max = PAGE_COUNTER_MAX;
 	counter->parent = parent;
+	counter->protection_support = protection_support;
 }
 
 static inline unsigned long page_counter_read(struct page_counter *counter)
diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
index f443a56409a9..d8d0e665caed 100644
--- a/mm/hugetlb_cgroup.c
+++ b/mm/hugetlb_cgroup.c
@@ -114,10 +114,10 @@ static void hugetlb_cgroup_init(struct hugetlb_cgroup *h_cgroup,
 	}
 
 	page_counter_init(hugetlb_cgroup_counter_from_cgroup(h_cgroup, idx),
-			  fault_parent);
+			  fault_parent, false);
 	page_counter_init(
 		hugetlb_cgroup_counter_from_cgroup_rsvd(h_cgroup, idx),
-		rsvd_parent);
+		rsvd_parent, false);
 
 	limit = round_down(PAGE_COUNTER_MAX,
 			   pages_per_huge_page(&hstates[idx]));
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 87fa448b731f..45c0f816a974 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3579,21 +3579,21 @@ mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
 	if (parent) {
 		WRITE_ONCE(memcg->swappiness, mem_cgroup_swappiness(parent));
 
-		page_counter_init(&memcg->memory, &parent->memory);
-		page_counter_init(&memcg->swap, &parent->swap);
+		page_counter_init(&memcg->memory, &parent->memory, true);
+		page_counter_init(&memcg->swap, &parent->swap, false);
 #ifdef CONFIG_MEMCG_V1
 		WRITE_ONCE(memcg->oom_kill_disable,
			   READ_ONCE(parent->oom_kill_disable));
-		page_counter_init(&memcg->kmem, &parent->kmem);
-		page_counter_init(&memcg->tcpmem, &parent->tcpmem);
+		page_counter_init(&memcg->kmem, &parent->kmem, false);
+		page_counter_init(&memcg->tcpmem, &parent->tcpmem, false);
 #endif
 	} else {
 		init_memcg_stats();
 		init_memcg_events();
-		page_counter_init(&memcg->memory, NULL);
-		page_counter_init(&memcg->swap, NULL);
+		page_counter_init(&memcg->memory, NULL, true);
+		page_counter_init(&memcg->swap, NULL, false);
 #ifdef CONFIG_MEMCG_V1
-		page_counter_init(&memcg->kmem, NULL);
-		page_counter_init(&memcg->tcpmem, NULL);
+		page_counter_init(&memcg->kmem, NULL, false);
+		page_counter_init(&memcg->tcpmem, NULL, false);
 #endif
 		root_mem_cgroup = memcg;
 		return &memcg->css;
diff --git a/mm/page_counter.c b/mm/page_counter.c
index ad9bdde5d5d2..a54382a58ace 100644
--- a/mm/page_counter.c
+++ b/mm/page_counter.c
@@ -13,6 +13,11 @@
 #include <linux/bug.h>
 #include <asm/page.h>
 
+static bool track_protection(struct page_counter *c)
+{
+	return c->protection_support;
+}
+
 static void propagate_protected_usage(struct page_counter *c,
 				      unsigned long usage)
 {
@@ -57,7 +62,8 @@ void page_counter_cancel(struct page_counter *counter, unsigned long nr_pages)
 		new = 0;
 		atomic_long_set(&counter->usage, new);
 	}
-	propagate_protected_usage(counter, new);
+	if (track_protection(counter))
+		propagate_protected_usage(counter, new);
 }
 
 /**
@@ -70,12 +76,14 @@ void page_counter_cancel(struct page_counter *counter, unsigned long nr_pages)
 void page_counter_charge(struct page_counter *counter, unsigned long nr_pages)
 {
 	struct page_counter *c;
+	bool protection = track_protection(counter);
 
 	for (c = counter; c; c = c->parent) {
 		long new;
 
 		new = atomic_long_add_return(nr_pages, &c->usage);
-		propagate_protected_usage(c, new);
+		if (protection)
+			propagate_protected_usage(c, new);
 		/*
 		 * This is indeed racy, but we can live with some
 		 * inaccuracy in the watermark.
@@ -112,6 +120,7 @@ bool page_counter_try_charge(struct page_counter *counter,
 			     struct page_counter **fail)
 {
 	struct page_counter *c;
+	bool protection = track_protection(counter);
 
 	for (c = counter; c; c = c->parent) {
 		long new;
@@ -141,7 +150,8 @@ bool page_counter_try_charge(struct page_counter *counter,
 			*fail = c;
 			goto failed;
 		}
-		propagate_protected_usage(c, new);
+		if (protection)
+			propagate_protected_usage(c, new);
 
 		/* see comment on page_counter_charge */
 		if (new > READ_ONCE(c->local_watermark)) {
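For readers outside the kernel tree, the gating pattern in the patch can be modeled in a few lines of user-space C. This is a sketch with hypothetical names that merely mirror struct page_counter and page_counter_charge(); it is not kernel code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy model of a hierarchical page counter with an init-time
 * protection flag, mirroring the patch's protection_support field. */
struct counter {
	long usage;
	struct counter *parent;
	bool protection_support;	/* set once at init, read-only after */
};

static int propagate_calls;		/* counts propagation work performed */

static void propagate_protected_usage(struct counter *c, long usage)
{
	/* Stand-in for the real min/low protection accounting. */
	(void)usage;
	propagate_calls++;
}

static void counter_charge(struct counter *counter, long nr_pages)
{
	/* Read the flag once for the whole hierarchy walk, as the patch
	 * does: either every counter in the chain tracks protection, or
	 * none of them does. */
	bool protection = counter->protection_support;

	for (struct counter *c = counter; c != NULL; c = c->parent) {
		c->usage += nr_pages;
		if (protection)
			propagate_protected_usage(c, c->usage);
	}
}
```

Charging a hierarchy initialized with protection support (memcg->memory in the patch) still pays for propagation at every level, while a hierarchy initialized with false (swap, kmem, tcpmem, hugetlb) skips the calls entirely, which is the whole saving.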