From patchwork Mon Sep 14 02:44:52 2020
X-Patchwork-Submitter: Waiman Long
X-Patchwork-Id: 11772661
From: Waiman Long
To: Johannes Weiner, Michal Hocko, Vladimir Davydov, Andrew Morton, Tejun Heo
Cc: linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-mm@kvack.org,
    Shakeel Butt, Chris Down, Roman Gushchin, Yafang Shao, Waiman Long
Subject: [PATCH v2 3/3] mm/memcg: Unify swap and memsw page counters
Date: Sun, 13 Sep 2020 22:44:52 -0400
Message-Id: <20200914024452.19167-4-longman@redhat.com>
In-Reply-To: <20200914024452.19167-1-longman@redhat.com>
References: <20200914024452.19167-1-longman@redhat.com>

The swap page counter is v2 only, while memsw is v1 only. As the v1 and v2
controllers cannot be active at the same time, there is no point in keeping
both the swap and memsw page counters in mem_cgroup. The previous patch has
made sure that the memsw page counter is updated and accessed only in v1
code paths, so it is now safe to alias the v1 memsw page counter to the v2
swap page counter. This shrinks mem_cgroup by 14 longs, a saving of 112
bytes on 64-bit architectures.

While at it, also document which page counters are used in v1 and/or v2.

Signed-off-by: Waiman Long
Acked-by: Michal Hocko
Reviewed-by: Shakeel Butt
---
 include/linux/memcontrol.h | 13 ++++++++-----
 mm/memcontrol.c            |  3 ---
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index d0b036123c6a..6ef4a552e09d 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -215,13 +215,16 @@ struct mem_cgroup {
 	struct mem_cgroup_id id;
 
 	/* Accounted resources */
-	struct page_counter memory;
-	struct page_counter swap;
+	struct page_counter memory;		/* Both v1 & v2 */
+
+	union {
+		struct page_counter swap;	/* v2 only */
+		struct page_counter memsw;	/* v1 only */
+	};
 
 	/* Legacy consumer-oriented counters */
-	struct page_counter memsw;
-	struct page_counter kmem;
-	struct page_counter tcpmem;
+	struct page_counter kmem;		/* v1 only */
+	struct page_counter tcpmem;		/* v1 only */
 
 	/* Range enforcement for interrupt charges */
 	struct work_struct high_work;
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index ca36bed588d1..188901f3a3db 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5281,13 +5281,11 @@ mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
 		memcg->use_hierarchy = true;
 		page_counter_init(&memcg->memory, &parent->memory);
 		page_counter_init(&memcg->swap, &parent->swap);
-		page_counter_init(&memcg->memsw, &parent->memsw);
 		page_counter_init(&memcg->kmem, &parent->kmem);
 		page_counter_init(&memcg->tcpmem, &parent->tcpmem);
 	} else {
 		page_counter_init(&memcg->memory, NULL);
 		page_counter_init(&memcg->swap, NULL);
-		page_counter_init(&memcg->memsw, NULL);
 		page_counter_init(&memcg->kmem, NULL);
 		page_counter_init(&memcg->tcpmem, NULL);
 		/*
@@ -5416,7 +5414,6 @@ static void mem_cgroup_css_reset(struct cgroup_subsys_state *css)
 	page_counter_set_max(&memcg->memory, PAGE_COUNTER_MAX);
 	page_counter_set_max(&memcg->swap, PAGE_COUNTER_MAX);
-	page_counter_set_max(&memcg->memsw, PAGE_COUNTER_MAX);
 	page_counter_set_max(&memcg->kmem, PAGE_COUNTER_MAX);
 	page_counter_set_max(&memcg->tcpmem, PAGE_COUNTER_MAX);
 	page_counter_set_min(&memcg->memory, 0);
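
[Editor's illustration, not part of the patch.] A minimal user-space C sketch
of where the size saving comes from: aliasing two same-typed members in an
anonymous union costs one member's worth of storage instead of two. The
struct below is a hypothetical stand-in for struct page_counter; its 14
unsigned longs are chosen only to match the 14-long / 112-byte figure quoted
in the changelog for a 64-bit build, not the real kernel layout.

#include <stdio.h>

struct fake_page_counter {		/* stand-in, NOT the kernel struct */
	unsigned long words[14];	/* 14 longs == 112 bytes on 64-bit */
};

struct before {				/* swap and memsw both present */
	struct fake_page_counter memory;
	struct fake_page_counter swap;
	struct fake_page_counter memsw;
};

struct after {				/* memsw aliases swap: only one of
					 * v1/v2 can be active at a time */
	struct fake_page_counter memory;
	union {
		struct fake_page_counter swap;	/* v2 only */
		struct fake_page_counter memsw;	/* v1 only */
	};
};

int main(void)
{
	/* Expect the difference to be one counter, i.e. 112 bytes. */
	printf("before=%zu after=%zu saved=%zu\n",
	       sizeof(struct before), sizeof(struct after),
	       sizeof(struct before) - sizeof(struct after));
	return 0;
}

Built with e.g. "gcc -std=c11", this prints before=336 after=224 saved=112 on
a 64-bit host, matching the saving claimed above.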