From patchwork Fri Aug 3 17:57:43 2018
X-Patchwork-Submitter: Tejun Heo
X-Patchwork-Id: 10555407
Date: Fri, 3 Aug 2018 10:57:43 -0700
From: Tejun Heo
To: Andrew Morton, Roman Gushchin, Johannes Weiner, Michal Hocko, Vladimir Davydov
Cc: cgroups@vger.kernel.org, linux-mm@kvack.org, kernel-team@fb.com
Subject: [PATCH] mm: memcg: update memcg OOM messages on cgroup2
Message-ID: <20180803175743.GW1206094@devbig004.ftw2.facebook.com>

mem_cgroup_print_oom_info() currently prints the same info for cgroup1
and cgroup2 OOMs.  That doesn't make much sense on cgroup2, which uses
neither memsw nor separate kmem accounting - the information reported is
both superfluous and insufficient.  This patch updates the memcg OOM
messages on cgroup2 so that:

* It prints the memory and swap usages and limits used on cgroup2.

* It shows the same information as memory.stat.

I took out the recursive printing for cgroup2 because the amount of
output could be large and the benefits aren't clear.  What do you guys
think?
Signed-off-by: Tejun Heo
Acked-by: Johannes Weiner
---
 mm/memcontrol.c | 165 ++++++++++++++++++++++++++++++++------------------------
 1 file changed, 96 insertions(+), 69 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 8c0280b3143e..86133e50a0b2 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -177,6 +177,7 @@ struct mem_cgroup_event {
 
 static void mem_cgroup_threshold(struct mem_cgroup *memcg);
 static void mem_cgroup_oom_notify(struct mem_cgroup *memcg);
+static void __memory_stat_show(struct seq_file *m, struct mem_cgroup *memcg);
 
 /* Stuffs for move charges at task migration. */
 /*
@@ -1146,33 +1147,49 @@ void mem_cgroup_print_oom_info(struct mem_cgroup *memcg, struct task_struct *p)
 
 	rcu_read_unlock();
 
-	pr_info("memory: usage %llukB, limit %llukB, failcnt %lu\n",
-		K((u64)page_counter_read(&memcg->memory)),
-		K((u64)memcg->memory.max), memcg->memory.failcnt);
-	pr_info("memory+swap: usage %llukB, limit %llukB, failcnt %lu\n",
-		K((u64)page_counter_read(&memcg->memsw)),
-		K((u64)memcg->memsw.max), memcg->memsw.failcnt);
-	pr_info("kmem: usage %llukB, limit %llukB, failcnt %lu\n",
-		K((u64)page_counter_read(&memcg->kmem)),
-		K((u64)memcg->kmem.max), memcg->kmem.failcnt);
+	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys)) {
+		pr_info("memory: usage %llukB, limit %llukB, failcnt %lu\n",
+			K((u64)page_counter_read(&memcg->memory)),
+			K((u64)memcg->memory.max), memcg->memory.failcnt);
+		pr_info("memory+swap: usage %llukB, limit %llukB, failcnt %lu\n",
+			K((u64)page_counter_read(&memcg->memsw)),
+			K((u64)memcg->memsw.max), memcg->memsw.failcnt);
+		pr_info("kmem: usage %llukB, limit %llukB, failcnt %lu\n",
+			K((u64)page_counter_read(&memcg->kmem)),
+			K((u64)memcg->kmem.max), memcg->kmem.failcnt);
 
-	for_each_mem_cgroup_tree(iter, memcg) {
-		pr_info("Memory cgroup stats for ");
-		pr_cont_cgroup_path(iter->css.cgroup);
-		pr_cont(":");
+		for_each_mem_cgroup_tree(iter, memcg) {
+			pr_info("Memory cgroup stats for ");
+			pr_cont_cgroup_path(iter->css.cgroup);
+			pr_cont(":");
+
+			for (i = 0; i < ARRAY_SIZE(memcg1_stats); i++) {
+				if (memcg1_stats[i] == MEMCG_SWAP && !do_swap_account)
+					continue;
+				pr_cont(" %s:%luKB", memcg1_stat_names[i],
+					K(memcg_page_state(iter, memcg1_stats[i])));
+			}
 
-		for (i = 0; i < ARRAY_SIZE(memcg1_stats); i++) {
-			if (memcg1_stats[i] == MEMCG_SWAP && !do_swap_account)
-				continue;
-			pr_cont(" %s:%luKB", memcg1_stat_names[i],
-				K(memcg_page_state(iter, memcg1_stats[i])));
+			for (i = 0; i < NR_LRU_LISTS; i++)
+				pr_cont(" %s:%luKB", mem_cgroup_lru_names[i],
+					K(mem_cgroup_nr_lru_pages(iter, BIT(i))));
+
+			pr_cont("\n");
 		}
+	} else {
+		pr_info("memory %llu (max %llu)\n",
+			(u64)page_counter_read(&memcg->memory) * PAGE_SIZE,
+			(u64)memcg->memory.max * PAGE_SIZE);
 
-		for (i = 0; i < NR_LRU_LISTS; i++)
-			pr_cont(" %s:%luKB", mem_cgroup_lru_names[i],
-				K(mem_cgroup_nr_lru_pages(iter, BIT(i))));
+		if (memcg->swap.max == PAGE_COUNTER_MAX)
+			pr_info("swap %llu\n",
+				(u64)page_counter_read(&memcg->swap) * PAGE_SIZE);
+		else
+			pr_info("swap %llu (max %llu)\n",
+				(u64)page_counter_read(&memcg->swap) * PAGE_SIZE,
+				(u64)memcg->swap.max * PAGE_SIZE);
 
-		pr_cont("\n");
+		__memory_stat_show(NULL, memcg);
 	}
 }
 
@@ -5246,9 +5263,15 @@ static int memory_events_show(struct seq_file *m, void *v)
 	return 0;
 }
 
-static int memory_stat_show(struct seq_file *m, void *v)
+#define seq_pr_info(m, fmt, ...) do {					\
+	if ((m))							\
+		seq_printf(m, fmt, ##__VA_ARGS__);			\
+	else								\
+		printk(KERN_INFO fmt, ##__VA_ARGS__);			\
+} while (0)
+
+static void __memory_stat_show(struct seq_file *m, struct mem_cgroup *memcg)
 {
-	struct mem_cgroup *memcg = mem_cgroup_from_css(seq_css(m));
 	unsigned long stat[MEMCG_NR_STAT];
 	unsigned long events[NR_VM_EVENT_ITEMS];
 	int i;
@@ -5267,26 +5290,26 @@ static int memory_stat_show(struct seq_file *m, void *v)
 	tree_stat(memcg, stat);
 	tree_events(memcg, events);
 
-	seq_printf(m, "anon %llu\n",
-		   (u64)stat[MEMCG_RSS] * PAGE_SIZE);
-	seq_printf(m, "file %llu\n",
-		   (u64)stat[MEMCG_CACHE] * PAGE_SIZE);
-	seq_printf(m, "kernel_stack %llu\n",
-		   (u64)stat[MEMCG_KERNEL_STACK_KB] * 1024);
-	seq_printf(m, "slab %llu\n",
-		   (u64)(stat[NR_SLAB_RECLAIMABLE] +
-			 stat[NR_SLAB_UNRECLAIMABLE]) * PAGE_SIZE);
-	seq_printf(m, "sock %llu\n",
-		   (u64)stat[MEMCG_SOCK] * PAGE_SIZE);
-
-	seq_printf(m, "shmem %llu\n",
-		   (u64)stat[NR_SHMEM] * PAGE_SIZE);
-	seq_printf(m, "file_mapped %llu\n",
-		   (u64)stat[NR_FILE_MAPPED] * PAGE_SIZE);
-	seq_printf(m, "file_dirty %llu\n",
-		   (u64)stat[NR_FILE_DIRTY] * PAGE_SIZE);
-	seq_printf(m, "file_writeback %llu\n",
-		   (u64)stat[NR_WRITEBACK] * PAGE_SIZE);
+	seq_pr_info(m, "anon %llu\n",
+		    (u64)stat[MEMCG_RSS] * PAGE_SIZE);
+	seq_pr_info(m, "file %llu\n",
+		    (u64)stat[MEMCG_CACHE] * PAGE_SIZE);
+	seq_pr_info(m, "kernel_stack %llu\n",
+		    (u64)stat[MEMCG_KERNEL_STACK_KB] * 1024);
+	seq_pr_info(m, "slab %llu\n",
+		    (u64)(stat[NR_SLAB_RECLAIMABLE] +
+			  stat[NR_SLAB_UNRECLAIMABLE]) * PAGE_SIZE);
+	seq_pr_info(m, "sock %llu\n",
+		    (u64)stat[MEMCG_SOCK] * PAGE_SIZE);
+
+	seq_pr_info(m, "shmem %llu\n",
+		    (u64)stat[NR_SHMEM] * PAGE_SIZE);
+	seq_pr_info(m, "file_mapped %llu\n",
+		    (u64)stat[NR_FILE_MAPPED] * PAGE_SIZE);
+	seq_pr_info(m, "file_dirty %llu\n",
+		    (u64)stat[NR_FILE_DIRTY] * PAGE_SIZE);
+	seq_pr_info(m, "file_writeback %llu\n",
+		    (u64)stat[NR_WRITEBACK] * PAGE_SIZE);
 
 	for (i = 0; i < NR_LRU_LISTS; i++) {
 		struct mem_cgroup *mi;
@@ -5294,37 +5317,41 @@ static int memory_stat_show(struct seq_file *m, void *v)
 		for_each_mem_cgroup_tree(mi, memcg)
 			val += mem_cgroup_nr_lru_pages(mi, BIT(i));
-		seq_printf(m, "%s %llu\n",
-			   mem_cgroup_lru_names[i], (u64)val * PAGE_SIZE);
+		seq_pr_info(m, "%s %llu\n",
+			    mem_cgroup_lru_names[i], (u64)val * PAGE_SIZE);
 	}
 
-	seq_printf(m, "slab_reclaimable %llu\n",
-		   (u64)stat[NR_SLAB_RECLAIMABLE] * PAGE_SIZE);
-	seq_printf(m, "slab_unreclaimable %llu\n",
-		   (u64)stat[NR_SLAB_UNRECLAIMABLE] * PAGE_SIZE);
+	seq_pr_info(m, "slab_reclaimable %llu\n",
+		    (u64)stat[NR_SLAB_RECLAIMABLE] * PAGE_SIZE);
+	seq_pr_info(m, "slab_unreclaimable %llu\n",
+		    (u64)stat[NR_SLAB_UNRECLAIMABLE] * PAGE_SIZE);
 
 	/* Accumulated memory events */
 
-	seq_printf(m, "pgfault %lu\n", events[PGFAULT]);
-	seq_printf(m, "pgmajfault %lu\n", events[PGMAJFAULT]);
-
-	seq_printf(m, "pgrefill %lu\n", events[PGREFILL]);
-	seq_printf(m, "pgscan %lu\n", events[PGSCAN_KSWAPD] +
-		   events[PGSCAN_DIRECT]);
-	seq_printf(m, "pgsteal %lu\n", events[PGSTEAL_KSWAPD] +
-		   events[PGSTEAL_DIRECT]);
-	seq_printf(m, "pgactivate %lu\n", events[PGACTIVATE]);
-	seq_printf(m, "pgdeactivate %lu\n", events[PGDEACTIVATE]);
-	seq_printf(m, "pglazyfree %lu\n", events[PGLAZYFREE]);
-	seq_printf(m, "pglazyfreed %lu\n", events[PGLAZYFREED]);
-
-	seq_printf(m, "workingset_refault %lu\n",
-		   stat[WORKINGSET_REFAULT]);
-	seq_printf(m, "workingset_activate %lu\n",
-		   stat[WORKINGSET_ACTIVATE]);
-	seq_printf(m, "workingset_nodereclaim %lu\n",
-		   stat[WORKINGSET_NODERECLAIM]);
+	seq_pr_info(m, "pgfault %lu\n", events[PGFAULT]);
+	seq_pr_info(m, "pgmajfault %lu\n", events[PGMAJFAULT]);
+
+	seq_pr_info(m, "pgrefill %lu\n", events[PGREFILL]);
+	seq_pr_info(m, "pgscan %lu\n", events[PGSCAN_KSWAPD] +
+		    events[PGSCAN_DIRECT]);
+	seq_pr_info(m, "pgsteal %lu\n", events[PGSTEAL_KSWAPD] +
+		    events[PGSTEAL_DIRECT]);
+	seq_pr_info(m, "pgactivate %lu\n", events[PGACTIVATE]);
+	seq_pr_info(m, "pgdeactivate %lu\n", events[PGDEACTIVATE]);
+	seq_pr_info(m, "pglazyfree %lu\n", events[PGLAZYFREE]);
+	seq_pr_info(m, "pglazyfreed %lu\n", events[PGLAZYFREED]);
+	seq_pr_info(m, "workingset_refault %lu\n",
+		    stat[WORKINGSET_REFAULT]);
+	seq_pr_info(m, "workingset_activate %lu\n",
+		    stat[WORKINGSET_ACTIVATE]);
+	seq_pr_info(m, "workingset_nodereclaim %lu\n",
+		    stat[WORKINGSET_NODERECLAIM]);
+}
+
+static int memory_stat_show(struct seq_file *m, void *v)
+{
+	__memory_stat_show(m, mem_cgroup_from_css(seq_css(m)));
 	return 0;
 }