From patchwork Thu Aug 15 05:04:50 2024
X-Patchwork-Submitter: Shakeel Butt <shakeel.butt@linux.dev>
X-Patchwork-Id: 13764400
From: Shakeel Butt <shakeel.butt@linux.dev>
To: Andrew Morton
Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Muchun Song,
 "T . J . Mercier", linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 Meta kernel team, cgroups@vger.kernel.org
Subject: [PATCH 4/7] memcg: move v1 events and statistics code to v1 file
Date: Wed, 14 Aug 2024 22:04:50 -0700
Message-ID: <20240815050453.1298138-5-shakeel.butt@linux.dev>
In-Reply-To: <20240815050453.1298138-1-shakeel.butt@linux.dev>
References: <20240815050453.1298138-1-shakeel.butt@linux.dev>
MIME-Version: 1.0

Currently the common code paths for charge commit, swapout and batched
uncharge execute v1-only code that is useless for v2 deployments where
CONFIG_MEMCG_V1 is disabled. In addition, that code disables and
re-enables IRQs, which can be slow on some architectures. Move all of
this code into the v1-only file so that it is compiled out of v2-only
deployments.
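
For illustration only (not part of the patch): below is a minimal,
standalone user-space sketch of the config-stub pattern this change
relies on. When CONFIG_MEMCG_V1 is not defined, the memcg1_* helpers in
memcontrol-v1.h become empty static inlines, so the common call sites
do no v1 bookkeeping and never touch IRQs. The name
memcg1_commit_charge_sketch() and its printf body are invented for the
example; only the #ifdef shape mirrors the memcontrol-v1.h hunk below.

  #include <stdio.h>

  #ifdef CONFIG_MEMCG_V1
  /* v1 build: stand-in for the real IRQ-off statistics/event update */
  static void memcg1_commit_charge_sketch(int nr_pages)
  {
          printf("v1 bookkeeping for %d page(s); IRQs would be disabled here\n",
                 nr_pages);
  }
  #else
  /* v2-only build: empty stub, so the call site compiles to nothing */
  static inline void memcg1_commit_charge_sketch(int nr_pages) { (void)nr_pages; }
  #endif

  int main(void)
  {
          /*
           * The common path calls the helper unconditionally, the way
           * mem_cgroup_commit_charge() does after this patch.
           */
          memcg1_commit_charge_sketch(1);
          return 0;
  }

Building it with "gcc -DCONFIG_MEMCG_V1 sketch.c" versus plain
"gcc sketch.c" shows the two configurations side by side.
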
Signed-off-by: Shakeel Butt <shakeel.butt@linux.dev>
---
 mm/memcontrol-v1.c | 37 +++++++++++++++++++++++++++++++++++++
 mm/memcontrol-v1.h | 14 ++++++++++++++
 mm/memcontrol.c    | 33 ++++-----------------------------
 3 files changed, 55 insertions(+), 29 deletions(-)

diff --git a/mm/memcontrol-v1.c b/mm/memcontrol-v1.c
index 73587e6417c5..ffb7246b3f35 100644
--- a/mm/memcontrol-v1.c
+++ b/mm/memcontrol-v1.c
@@ -1502,6 +1502,43 @@ void memcg1_check_events(struct mem_cgroup *memcg, int nid)
 	}
 }
 
+void memcg1_commit_charge(struct folio *folio, struct mem_cgroup *memcg)
+{
+	unsigned long flags;
+
+	local_irq_save(flags);
+	memcg1_charge_statistics(memcg, folio_nr_pages(folio));
+	memcg1_check_events(memcg, folio_nid(folio));
+	local_irq_restore(flags);
+}
+
+void memcg1_swapout(struct folio *folio, struct mem_cgroup *memcg)
+{
+	/*
+	 * Interrupts should be disabled here because the caller holds the
+	 * i_pages lock which is taken with interrupts-off. It is
+	 * important here to have the interrupts disabled because it is the
+	 * only synchronisation we have for updating the per-CPU variables.
+	 */
+	preempt_disable_nested();
+	VM_WARN_ON_IRQS_ENABLED();
+	memcg1_charge_statistics(memcg, -folio_nr_pages(folio));
+	preempt_enable_nested();
+	memcg1_check_events(memcg, folio_nid(folio));
+}
+
+void memcg1_uncharge_batch(struct mem_cgroup *memcg, unsigned long pgpgout,
+			   unsigned long nr_memory, int nid)
+{
+	unsigned long flags;
+
+	local_irq_save(flags);
+	__count_memcg_events(memcg, PGPGOUT, pgpgout);
+	__this_cpu_add(memcg->events_percpu->nr_page_events, nr_memory);
+	memcg1_check_events(memcg, nid);
+	local_irq_restore(flags);
+}
+
 static int compare_thresholds(const void *a, const void *b)
 {
 	const struct mem_cgroup_threshold *_a = a;
diff --git a/mm/memcontrol-v1.h b/mm/memcontrol-v1.h
index ef72d0b7c5c6..376d021a2bf4 100644
--- a/mm/memcontrol-v1.h
+++ b/mm/memcontrol-v1.h
@@ -118,6 +118,11 @@ void memcg1_oom_recover(struct mem_cgroup *memcg);
 void memcg1_charge_statistics(struct mem_cgroup *memcg, int nr_pages);
 void memcg1_check_events(struct mem_cgroup *memcg, int nid);
 
+void memcg1_commit_charge(struct folio *folio, struct mem_cgroup *memcg);
+void memcg1_swapout(struct folio *folio, struct mem_cgroup *memcg);
+void memcg1_uncharge_batch(struct mem_cgroup *memcg, unsigned long pgpgout,
+			   unsigned long nr_memory, int nid);
+
 void memcg1_stat_format(struct mem_cgroup *memcg, struct seq_buf *s);
 
 void memcg1_account_kmem(struct mem_cgroup *memcg, int nr_pages);
@@ -150,6 +155,15 @@ static inline void memcg1_oom_recover(struct mem_cgroup *memcg) {}
 static inline void memcg1_charge_statistics(struct mem_cgroup *memcg, int nr_pages) {}
 static inline void memcg1_check_events(struct mem_cgroup *memcg, int nid) {}
 
+static inline void memcg1_commit_charge(struct folio *folio,
+					struct mem_cgroup *memcg) {}
+
+static inline void memcg1_swapout(struct folio *folio, struct mem_cgroup *memcg) {}
+
+static inline void memcg1_uncharge_batch(struct mem_cgroup *memcg,
+					 unsigned long pgpgout,
+					 unsigned long nr_memory, int nid) {}
+
 static inline void memcg1_stat_format(struct mem_cgroup *memcg, struct seq_buf *s) {}
 
 static inline void memcg1_account_kmem(struct mem_cgroup *memcg, int nr_pages) {}
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index f8db9924d5dc..c4b06f26ccfd 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2383,11 +2383,7 @@ void mem_cgroup_commit_charge(struct folio *folio, struct mem_cgroup *memcg)
 {
 	css_get(&memcg->css);
 	commit_charge(folio, memcg);
-
-	local_irq_disable();
-	memcg1_charge_statistics(memcg, folio_nr_pages(folio));
-	memcg1_check_events(memcg, folio_nid(folio));
-	local_irq_enable();
+	memcg1_commit_charge(folio, memcg);
 }
 
 static inline void __mod_objcg_mlstate(struct obj_cgroup *objcg,
@@ -4608,8 +4604,6 @@ static inline void uncharge_gather_clear(struct uncharge_gather *ug)
 
 static void uncharge_batch(const struct uncharge_gather *ug)
 {
-	unsigned long flags;
-
 	if (ug->nr_memory) {
 		page_counter_uncharge(&ug->memcg->memory, ug->nr_memory);
 		if (do_memsw_account())
@@ -4621,11 +4615,7 @@ static void uncharge_batch(const struct uncharge_gather *ug)
 		memcg1_oom_recover(ug->memcg);
 	}
 
-	local_irq_save(flags);
-	__count_memcg_events(ug->memcg, PGPGOUT, ug->pgpgout);
-	__this_cpu_add(ug->memcg->events_percpu->nr_page_events, ug->nr_memory);
-	memcg1_check_events(ug->memcg, ug->nid);
-	local_irq_restore(flags);
+	memcg1_uncharge_batch(ug->memcg, ug->pgpgout, ug->nr_memory, ug->nid);
 
 	/* drop reference from uncharge_folio */
 	css_put(&ug->memcg->css);
@@ -4732,7 +4722,6 @@ void mem_cgroup_replace_folio(struct folio *old, struct folio *new)
 {
 	struct mem_cgroup *memcg;
 	long nr_pages = folio_nr_pages(new);
-	unsigned long flags;
 
 	VM_BUG_ON_FOLIO(!folio_test_locked(old), old);
 	VM_BUG_ON_FOLIO(!folio_test_locked(new), new);
@@ -4760,11 +4749,7 @@ void mem_cgroup_replace_folio(struct folio *old, struct folio *new)
 
 	css_get(&memcg->css);
 	commit_charge(new, memcg);
-
-	local_irq_save(flags);
-	memcg1_charge_statistics(memcg, nr_pages);
-	memcg1_check_events(memcg, folio_nid(new));
-	local_irq_restore(flags);
+	memcg1_commit_charge(new, memcg);
 }
 
 /**
@@ -5000,17 +4985,7 @@ void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry)
 			page_counter_uncharge(&memcg->memsw, nr_entries);
 	}
 
-	/*
-	 * Interrupts should be disabled here because the caller holds the
-	 * i_pages lock which is taken with interrupts-off. It is
-	 * important here to have the interrupts disabled because it is the
-	 * only synchronisation we have for updating the per-CPU variables.
-	 */
-	memcg_stats_lock();
-	memcg1_charge_statistics(memcg, -nr_entries);
-	memcg_stats_unlock();
-	memcg1_check_events(memcg, folio_nid(folio));
-
+	memcg1_swapout(folio, memcg);
 	css_put(&memcg->css);
 }