From patchwork Tue Feb 1 20:08:23 2022
X-Patchwork-Submitter: Yosry Ahmed
X-Patchwork-Id: 12732210
Date: Tue, 1 Feb 2022 20:08:23 +0000
Message-Id: <20220201200823.3283171-1-yosryahmed@google.com>
X-Mailer: git-send-email 2.35.0.rc2.247.g8bbb082509-goog
Subject: [PATCH] memcg: add per-memcg total kernel memory stat
From: Yosry Ahmed
To: Andrew Morton, Johannes Weiner, Michal Hocko, Muchun Song, Shakeel Butt
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Yosry Ahmed

Currently memcg stats show several types of kernel memory: kernel stack,
page tables, sock, vmalloc, and slab. However, there are other allocations
with __GFP_ACCOUNT (or supersets such as GFP_KERNEL_ACCOUNT) that are not
accounted in any of those stats. A few examples are:

- various kvm allocations (e.g. allocated pages to create vcpus)
- io_uring
- tmp_page in pipes during pipe_write()
- bpf ringbuffers
- unix sockets

Keeping track of the total kernel memory is essential for the ease of
migration from cgroup v1 to v2, as there are large discrepancies between
v1's kmem.usage_in_bytes and the sum of the available kernel memory stats
in v2. Adding separate memcg stats for all __GFP_ACCOUNT kernel
allocations is an impractical maintenance burden, as there are many of
those all over the kernel code, with more use cases likely to show up in
the future.

Therefore, add a "kernel" memcg stat that is analogous to the kmem page
counter, with added benefits such as using the rstat infrastructure,
which aggregates stats more efficiently. Additionally, this provides a
lighter alternative in case the legacy kmem counter is deprecated in the
future.

Signed-off-by: Yosry Ahmed
Acked-by: Shakeel Butt
Acked-by: Johannes Weiner
---
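Note (illustration only, not part of the patch): the hypothetical kernel
snippet below shows the kind of allocation the new stat is meant to
capture. Any allocation carrying __GFP_ACCOUNT is charged to the current
task's memcg, but if it is not a stack, page table, percpu, vmalloc or
slab allocation it previously showed up only in v1's kmem counter; with
this patch it is also reflected in the "kernel" entry. The function names
here are made up for the example.

#include <linux/gfp.h>

/* One page charged to the current memcg because of __GFP_ACCOUNT. */
static struct page *example_accounted_page(void)
{
	return alloc_pages(GFP_KERNEL_ACCOUNT, 0);
}

/* Freeing uncharges the page, decrementing the "kernel" stat again. */
static void example_accounted_page_free(struct page *page)
{
	__free_pages(page, 0);
}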

 Documentation/admin-guide/cgroup-v2.rst |  5 +++++
 include/linux/memcontrol.h              |  1 +
 mm/memcontrol.c                         | 24 ++++++++++++++++++------
 3 files changed, 24 insertions(+), 6 deletions(-)

diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
index 5aa368d165da..a0027d570a7f 100644
--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -1317,6 +1317,11 @@ PAGE_SIZE multiple when read back.
 	  vmalloc (npn)
 		Amount of memory used for vmap backed memory.
 
+	  kernel (npn)
+		Amount of total kernel memory, including
+		(kernel_stack, pagetables, percpu, vmalloc, slab) in
+		addition to other kernel memory use cases.
+
 	  shmem
 		Amount of cached filesystem data that is swap-backed,
 		such as tmpfs, shm segments, shared anonymous mmap()s
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index b72d75141e12..fa51986365a4 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -34,6 +34,7 @@ enum memcg_stat_item {
 	MEMCG_SOCK,
 	MEMCG_PERCPU_B,
 	MEMCG_VMALLOC,
+	MEMCG_KMEM,
 	MEMCG_NR_STAT,
 };
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 09d342c7cbd0..c55d7056ac98 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1376,6 +1376,7 @@ static const struct memory_stat memory_stats[] = {
 	{ "percpu",		MEMCG_PERCPU_B		},
 	{ "sock",		MEMCG_SOCK		},
 	{ "vmalloc",		MEMCG_VMALLOC		},
+	{ "kernel",		MEMCG_KMEM		},
 	{ "shmem",		NR_SHMEM		},
 	{ "file_mapped",	NR_FILE_MAPPED		},
 	{ "file_dirty",		NR_FILE_DIRTY		},
@@ -2979,6 +2980,19 @@ static void memcg_free_cache_id(int id)
 	ida_simple_remove(&memcg_cache_ida, id);
 }
 
+static void mem_cgroup_kmem_record(struct mem_cgroup *memcg,
+				   int nr_pages)
+{
+	mod_memcg_state(memcg, MEMCG_KMEM, nr_pages);
+	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys)) {
+		if (nr_pages > 0)
+			page_counter_charge(&memcg->kmem, nr_pages);
+		else
+			page_counter_uncharge(&memcg->kmem, -nr_pages);
+	}
+}
+
+
 /*
  * obj_cgroup_uncharge_pages: uncharge a number of kernel pages from a objcg
  * @objcg: object cgroup to uncharge
@@ -2991,8 +3005,7 @@ static void obj_cgroup_uncharge_pages(struct obj_cgroup *objcg,
 
 	memcg = get_mem_cgroup_from_objcg(objcg);
 
-	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
-		page_counter_uncharge(&memcg->kmem, nr_pages);
+	mem_cgroup_kmem_record(memcg, -nr_pages);
 	refill_stock(memcg, nr_pages);
 
 	css_put(&memcg->css);
@@ -3018,8 +3031,7 @@ static int obj_cgroup_charge_pages(struct obj_cgroup *objcg, gfp_t gfp,
 	if (ret)
 		goto out;
 
-	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
-		page_counter_charge(&memcg->kmem, nr_pages);
+	mem_cgroup_kmem_record(memcg, nr_pages);
 out:
 	css_put(&memcg->css);
 
@@ -6801,8 +6813,8 @@ static void uncharge_batch(const struct uncharge_gather *ug)
 		page_counter_uncharge(&ug->memcg->memory, ug->nr_memory);
 		if (do_memsw_account())
 			page_counter_uncharge(&ug->memcg->memsw, ug->nr_memory);
-		if (!cgroup_subsys_on_dfl(memory_cgrp_subsys) && ug->nr_kmem)
-			page_counter_uncharge(&ug->memcg->kmem, ug->nr_kmem);
+		if (ug->nr_kmem)
+			mem_cgroup_kmem_record(ug->memcg, -ug->nr_kmem);
 		memcg_oom_recover(ug->memcg);
 	}
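
For completeness, a userspace sketch of reading the new entry back on
cgroup v2. This is not part of the patch; it assumes cgroup v2 is mounted
at /sys/fs/cgroup and that a cgroup named "example" exists (both names are
illustrative). The "kernel" value is reported in bytes, as a PAGE_SIZE
multiple, like the other memory.stat entries.

#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[256];
	/* Illustrative path; substitute the cgroup of interest. */
	FILE *f = fopen("/sys/fs/cgroup/example/memory.stat", "r");

	if (!f) {
		perror("fopen");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		/* Match the "kernel" entry only, not e.g. "kernel_stack". */
		if (strncmp(line, "kernel ", 7) == 0)
			fputs(line, stdout);
	}
	fclose(f);
	return 0;
}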