From patchwork Tue Mar 9 10:07:14 2021
From: Muchun Song
Subject: [PATCH v3 1/4] mm: memcontrol: introduce obj_cgroup_{un}charge_pages
Date: Tue, 9 Mar 2021 18:07:14 +0800
Message-Id: <20210309100717.253-2-songmuchun@bytedance.com>
In-Reply-To: <20210309100717.253-1-songmuchun@bytedance.com>

The unit of slab object charging is bytes, while the unit of kmem page
charging is PAGE_SIZE. If we want to reuse the obj_cgroup APIs to charge
kmem pages, we have to pass PAGE_SIZE (as the third parameter) to
obj_cgroup_charge(). Since such a size is always a whole number of pages,
we can skip touching the objcg stock. So introduce
obj_cgroup_{un}charge_pages(), which charge and uncharge in units of
pages. A later patch reuses these two helpers to charge or uncharge a
number of kernel pages to an object cgroup. This is just code movement
without any functional changes.
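As a rough usage sketch (not part of the patch; charge_kmem_example() and
uncharge_kmem_example() are hypothetical callers for illustration),
whole-page amounts go straight through the new helpers and never touch the
byte-based per-cpu objcg stock:

	/* Hypothetical caller: charge 1 << order pages to an object cgroup. */
	static int charge_kmem_example(struct obj_cgroup *objcg, gfp_t gfp, int order)
	{
		/* whole pages: no sub-page remainder, so no objcg stock involved */
		return obj_cgroup_charge_pages(objcg, gfp, 1 << order);
	}

	/* Hypothetical caller: give the pages back. */
	static void uncharge_kmem_example(struct obj_cgroup *objcg, int order)
	{
		obj_cgroup_uncharge_pages(objcg, 1 << order);
	}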
Signed-off-by: Muchun Song
Acked-by: Roman Gushchin
Acked-by: Johannes Weiner
Reviewed-by: Shakeel Butt
---
 mm/memcontrol.c | 46 +++++++++++++++++++++++++++++++---------------
 1 file changed, 31 insertions(+), 15 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 845eec01ef9d..fc22da9805fb 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3056,6 +3056,34 @@ static void memcg_free_cache_id(int id)
 	ida_simple_remove(&memcg_cache_ida, id);
 }
 
+static inline void obj_cgroup_uncharge_pages(struct obj_cgroup *objcg,
+					     unsigned int nr_pages)
+{
+	rcu_read_lock();
+	__memcg_kmem_uncharge(obj_cgroup_memcg(objcg), nr_pages);
+	rcu_read_unlock();
+}
+
+static int obj_cgroup_charge_pages(struct obj_cgroup *objcg, gfp_t gfp,
+				   unsigned int nr_pages)
+{
+	struct mem_cgroup *memcg;
+	int ret;
+
+	rcu_read_lock();
+retry:
+	memcg = obj_cgroup_memcg(objcg);
+	if (unlikely(!css_tryget(&memcg->css)))
+		goto retry;
+	rcu_read_unlock();
+
+	ret = __memcg_kmem_charge(memcg, gfp, nr_pages);
+
+	css_put(&memcg->css);
+
+	return ret;
+}
+
 /**
  * __memcg_kmem_charge: charge a number of kernel pages to a memcg
  * @memcg: memory cgroup to charge
@@ -3180,11 +3208,8 @@ static void drain_obj_stock(struct memcg_stock_pcp *stock)
 		unsigned int nr_pages = stock->nr_bytes >> PAGE_SHIFT;
 		unsigned int nr_bytes = stock->nr_bytes & (PAGE_SIZE - 1);
 
-		if (nr_pages) {
-			rcu_read_lock();
-			__memcg_kmem_uncharge(obj_cgroup_memcg(old), nr_pages);
-			rcu_read_unlock();
-		}
+		if (nr_pages)
+			obj_cgroup_uncharge_pages(old, nr_pages);
 
 		/*
 		 * The leftover is flushed to the centralized per-memcg value.
@@ -3242,7 +3267,6 @@ static void refill_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes)
 
 int obj_cgroup_charge(struct obj_cgroup *objcg, gfp_t gfp, size_t size)
 {
-	struct mem_cgroup *memcg;
 	unsigned int nr_pages, nr_bytes;
 	int ret;
 
@@ -3259,24 +3283,16 @@ int obj_cgroup_charge(struct obj_cgroup *objcg, gfp_t gfp, size_t size)
 	 * refill_obj_stock(), called from this function or
 	 * independently later.
 	 */
-	rcu_read_lock();
-retry:
-	memcg = obj_cgroup_memcg(objcg);
-	if (unlikely(!css_tryget(&memcg->css)))
-		goto retry;
-	rcu_read_unlock();
-
 	nr_pages = size >> PAGE_SHIFT;
 	nr_bytes = size & (PAGE_SIZE - 1);
 
 	if (nr_bytes)
 		nr_pages += 1;
 
-	ret = __memcg_kmem_charge(memcg, gfp, nr_pages);
+	ret = obj_cgroup_charge_pages(objcg, gfp, nr_pages);
 	if (!ret && nr_bytes)
 		refill_obj_stock(objcg, PAGE_SIZE - nr_bytes);
 
-	css_put(&memcg->css);
 	return ret;
 }
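To make the byte-to-page rounding in obj_cgroup_charge() concrete, here is
a worked example (illustrative comment only; assumes PAGE_SIZE == 4096 and
PAGE_SHIFT == 12):

	/*
	 * obj_cgroup_charge(objcg, gfp, 700) after this patch:
	 *
	 *   nr_pages = 700 >> PAGE_SHIFT      = 0
	 *   nr_bytes = 700 & (PAGE_SIZE - 1)  = 700  ->  nr_pages = 1
	 *
	 * One whole page is charged via obj_cgroup_charge_pages(), and the
	 * unused remainder, PAGE_SIZE - 700 = 3396 bytes, is refilled into
	 * the per-cpu objcg stock for later sub-page charges.
	 */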
From patchwork Tue Mar 9 10:07:15 2021
From: Muchun Song
Subject: [PATCH v3 2/4] mm: memcontrol: make page_memcg{_rcu} only applicable for non-kmem page
Date: Tue, 9 Mar 2021 18:07:15 +0800
Message-Id: <20210309100717.253-3-songmuchun@bytedance.com>
In-Reply-To: <20210309100717.253-1-songmuchun@bytedance.com>

We want to reuse the obj_cgroup APIs to charge kmem pages. To do that, we
have to store an object cgroup pointer in page->memcg_data for kmem pages.
As a result, page->memcg_data can have three different meanings:

  1) For slab pages, page->memcg_data points to an object cgroups
     vector.
  2) For kmem pages (excluding slab pages), page->memcg_data points to
     an object cgroup.
  3) For user pages (e.g. LRU pages), page->memcg_data points to a
     memory cgroup.

Currently we always get the memory cgroup associated with a page via
page_memcg() or page_memcg_rcu(). page_memcg_check() is special: it has
to be used when it is not known whether a page has an associated memory
cgroup pointer or an object cgroups vector. Because a later patch makes
page->memcg_data of kmem pages point to something other than a memory
cgroup, page_memcg() and page_memcg_rcu() can no longer be applied to
kmem pages.

This patch makes page_memcg() and page_memcg_rcu() no longer apply to
kmem pages. The behavior of page_memcg_check() is unchanged; it remains
applicable to kmem pages. In the end, there are three helpers to get the
memcg associated with a page. Usage is as follows:

  1) To get the memory cgroup associated with a non-kmem page (e.g. LRU
     pages):
     - page_memcg()
     - page_memcg_rcu()
  2) To get the memory cgroup associated with any page:
     It has to be used when it is not known whether the page has an
     associated memory cgroup pointer or an object cgroups vector.
     Returns NULL for slab pages or uncharged pages; otherwise returns
     the memory cgroup for charged pages (e.g. kmem pages, LRU pages).
     - page_memcg_check()

In some places, we use page_memcg() to check whether a page is charged.
Introduce a page_memcg_charged() helper to do that. This is a
preparation for reparenting the kmem pages.

Signed-off-by: Muchun Song
---
 include/linux/memcontrol.h | 33 +++++++++++++++++++++++++++------
 mm/memcontrol.c            | 23 +++++++++++++----------
 mm/page_alloc.c            |  4 ++--
 3 files changed, 42 insertions(+), 18 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index e6dc793d587d..83cbcdcfcc92 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -358,14 +358,26 @@ enum page_memcg_data_flags {
 
 #define MEMCG_DATA_FLAGS_MASK (__NR_MEMCG_DATA_FLAGS - 1)
 
+/* Return true for charged page, otherwise false. */
+static inline bool page_memcg_charged(struct page *page)
+{
+	unsigned long memcg_data = page->memcg_data;
+
+	VM_BUG_ON_PAGE(PageSlab(page), page);
+	VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_OBJCGS, page);
+
+	return !!memcg_data;
+}
+
 /*
- * page_memcg - get the memory cgroup associated with a page
+ * page_memcg - get the memory cgroup associated with a non-kmem page
  * @page: a pointer to the page struct
  *
  * Returns a pointer to the memory cgroup associated with the page,
  * or NULL. This function assumes that the page is known to have a
  * proper memory cgroup pointer. It's not safe to call this function
- * against some type of pages, e.g. slab pages or ex-slab pages.
+ * against some type of pages, e.g. slab pages, kmem pages or ex-slab
+ * pages.
  *
  * Any of the following ensures page and memcg binding stability:
  * - the page lock
@@ -378,27 +390,31 @@ static inline struct mem_cgroup *page_memcg(struct page *page)
 	unsigned long memcg_data = page->memcg_data;
 
 	VM_BUG_ON_PAGE(PageSlab(page), page);
+	VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_KMEM, page);
 	VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_OBJCGS, page);
 
 	return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
 }
 
 /*
- * page_memcg_rcu - locklessly get the memory cgroup associated with a page
+ * page_memcg_rcu - locklessly get the memory cgroup associated with a non-kmem page
  * @page: a pointer to the page struct
  *
  * Returns a pointer to the memory cgroup associated with the page,
  * or NULL. This function assumes that the page is known to have a
  * proper memory cgroup pointer. It's not safe to call this function
- * against some type of pages, e.g. slab pages or ex-slab pages.
+ * against some type of pages, e.g. slab pages, kmem pages or ex-slab
+ * pages.
  */
 static inline struct mem_cgroup *page_memcg_rcu(struct page *page)
 {
+	unsigned long memcg_data = READ_ONCE(page->memcg_data);
+
 	VM_BUG_ON_PAGE(PageSlab(page), page);
+	VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_KMEM, page);
 	WARN_ON_ONCE(!rcu_read_lock_held());
 
-	return (struct mem_cgroup *)(READ_ONCE(page->memcg_data) &
-				     ~MEMCG_DATA_FLAGS_MASK);
+	return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
 }
 
 /*
@@ -1072,6 +1088,11 @@ void mem_cgroup_split_huge_fixup(struct page *head);
 
 struct mem_cgroup;
 
+static inline bool page_memcg_charged(struct page *page)
+{
+	return false;
+}
+
 static inline struct mem_cgroup *page_memcg(struct page *page)
 {
 	return NULL;
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index fc22da9805fb..e1dc73ceb98a 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -855,10 +855,11 @@ void __mod_lruvec_page_state(struct page *page, enum node_stat_item idx,
 			     int val)
 {
 	struct page *head = compound_head(page); /* rmap on tail pages */
-	struct mem_cgroup *memcg = page_memcg(head);
+	struct mem_cgroup *memcg;
 	pg_data_t *pgdat = page_pgdat(page);
 	struct lruvec *lruvec;
 
+	memcg = page_memcg_check(head);
 	/* Untracked pages have no memcg, no lruvec. Update only the node */
 	if (!memcg) {
 		__mod_node_page_state(pgdat, idx, val);
 		return;
@@ -3166,12 +3167,13 @@ int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order)
  */
 void __memcg_kmem_uncharge_page(struct page *page, int order)
 {
-	struct mem_cgroup *memcg = page_memcg(page);
+	struct mem_cgroup *memcg;
 	unsigned int nr_pages = 1 << order;
 
-	if (!memcg)
+	if (!page_memcg_charged(page))
 		return;
 
+	memcg = page_memcg_check(page);
 	VM_BUG_ON_PAGE(mem_cgroup_is_root(memcg), page);
 	__memcg_kmem_uncharge(memcg, nr_pages);
 	page->memcg_data = 0;
@@ -6827,24 +6829,25 @@ static void uncharge_batch(const struct uncharge_gather *ug)
 
 static void uncharge_page(struct page *page, struct uncharge_gather *ug)
 {
 	unsigned long nr_pages;
+	struct mem_cgroup *memcg;
 
 	VM_BUG_ON_PAGE(PageLRU(page), page);
 
-	if (!page_memcg(page))
+	if (!page_memcg_charged(page))
 		return;
 
 	/*
 	 * Nobody should be changing or seriously looking at
-	 * page_memcg(page) at this point, we have fully
-	 * exclusive access to the page.
+	 * page memcg at this point, we have fully exclusive
+	 * access to the page.
 	 */
-
-	if (ug->memcg != page_memcg(page)) {
+	memcg = page_memcg_check(page);
+	if (ug->memcg != memcg) {
 		if (ug->memcg) {
 			uncharge_batch(ug);
 			uncharge_gather_clear(ug);
 		}
-		ug->memcg = page_memcg(page);
+		ug->memcg = memcg;
 
 		/* pairs with css_put in uncharge_batch */
 		css_get(&ug->memcg->css);
@@ -6877,7 +6880,7 @@ void mem_cgroup_uncharge(struct page *page)
 		return;
 
 	/* Don't touch page->lru of any random page, pre-check: */
-	if (!page_memcg(page))
+	if (!page_memcg_charged(page))
 		return;
 
 	uncharge_gather_clear(&ug);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index f10966e3b4a5..bcb58ae15e24 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1124,7 +1124,7 @@ static inline bool page_expected_state(struct page *page,
 	if (unlikely((unsigned long)page->mapping |
 			page_ref_count(page) |
 #ifdef CONFIG_MEMCG
-			(unsigned long)page_memcg(page) |
+			page_memcg_charged(page) |
 #endif
 			(page->flags & check_flags)))
 		return false;
@@ -1149,7 +1149,7 @@ static const char *page_bad_reason(struct page *page, unsigned long flags)
 		bad_reason = "PAGE_FLAGS_CHECK_AT_FREE flag(s) set";
 	}
 #ifdef CONFIG_MEMCG
-	if (unlikely(page_memcg(page)))
+	if (unlikely(page_memcg_charged(page)))
 		bad_reason = "page still charged to cgroup";
 #endif
 	return bad_reason;
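For reference, the low bits of page->memcg_data act as type tags. A minimal
decoding sketch (mirroring page_memcg_check(); the MEMCG_DATA_* flags and
the mask are the existing ones, the function and local variables are
illustrative only):

	/* Illustrative decoder for the three encodings of page->memcg_data. */
	static void decode_memcg_data_example(struct page *page)
	{
		unsigned long memcg_data = READ_ONCE(page->memcg_data);

		if (memcg_data & MEMCG_DATA_OBJCGS) {
			/* slab page: a vector of object cgroup pointers */
			struct obj_cgroup **objcgs =
				(struct obj_cgroup **)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
		} else if (memcg_data & MEMCG_DATA_KMEM) {
			/* kmem page (after the next patch): a single object cgroup */
			struct obj_cgroup *objcg =
				(struct obj_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
		} else {
			/* user (e.g. LRU) page: a memory cgroup, or NULL if uncharged */
			struct mem_cgroup *memcg =
				(struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
		}
	}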
From patchwork Tue Mar 9 10:07:16 2021
From: Muchun Song
Subject: [PATCH v3 3/4] mm: memcontrol: use obj_cgroup APIs to charge kmem pages
Date: Tue, 9 Mar 2021 18:07:16 +0800
Message-Id: <20210309100717.253-4-songmuchun@bytedance.com>
In-Reply-To: <20210309100717.253-1-songmuchun@bytedance.com>

Since Roman's series "The new cgroup slab memory controller" was applied,
all slab objects have been charged via the new obj_cgroup APIs. The new
APIs introduce struct obj_cgroup to charge slab objects; it prevents
long-lived objects from pinning the original memory cgroup in memory.
But there are still some corner-case objects (e.g. allocations larger
than order-1 pages on SLUB) which are not charged via the new APIs.
These objects (including pages allocated directly from the buddy
allocator) are charged as kmem pages, which still hold a reference to
the memory cgroup.
This patch charges the kmem pages via the new obj_cgroup APIs as well.
Afterwards, page->memcg_data of a kmem page points to an object cgroup.
We can use page_objcg() to get the object cgroup associated with a kmem
page, or page_memcg_check() to get the memory cgroup associated with a
kmem page, but the caller must ensure that the returned memcg won't be
released (e.g. by acquiring the rcu_read_lock or css_set_lock).

Signed-off-by: Muchun Song
Reported-by: kernel test robot
---
 include/linux/memcontrol.h |  63 ++++++++++++++++++------
 mm/memcontrol.c            | 119 ++++++++++++++++++++++++++++++---------------
 2 files changed, 128 insertions(+), 54 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 83cbcdcfcc92..07c449af9c0f 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -370,6 +370,18 @@ static inline bool page_memcg_charged(struct page *page)
 }
 
 /*
+ * After the initialization objcg->memcg is always pointing at
+ * a valid memcg, but can be atomically swapped to the parent memcg.
+ *
+ * The caller must ensure that the returned memcg won't be released:
+ * e.g. acquire the rcu_read_lock or css_set_lock.
+ */
+static inline struct mem_cgroup *obj_cgroup_memcg(struct obj_cgroup *objcg)
+{
+	return READ_ONCE(objcg->memcg);
+}
+
+/*
  * page_memcg - get the memory cgroup associated with a non-kmem page
  * @page: a pointer to the page struct
  *
@@ -422,15 +434,19 @@ static inline struct mem_cgroup *page_memcg_rcu(struct page *page)
  * @page: a pointer to the page struct
  *
  * Returns a pointer to the memory cgroup associated with the page,
- * or NULL.  This function unlike page_memcg() can take any page
+ * or NULL. This function unlike page_memcg() can take any page
 * as an argument. It has to be used in cases when it's not known if a page
- * has an associated memory cgroup pointer or an object cgroups vector.
+ * has an associated memory cgroup pointer or an object cgroups vector or
+ * an object cgroup.
 *
 * Any of the following ensures page and memcg binding stability:
 * - the page lock
 * - LRU isolation
 * - lock_page_memcg()
 * - exclusive reference
+ *
+ * Should be called under rcu lock which can protect memcg associated with a
+ * kmem page from being released.
  */
 static inline struct mem_cgroup *page_memcg_check(struct page *page)
 {
@@ -443,6 +459,13 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page)
 	if (memcg_data & MEMCG_DATA_OBJCGS)
 		return NULL;
 
+	if (memcg_data & MEMCG_DATA_KMEM) {
+		struct obj_cgroup *objcg;
+
+		objcg = (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
+		return obj_cgroup_memcg(objcg);
+	}
+
 	return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
 }
 
@@ -501,6 +524,25 @@ static inline struct obj_cgroup **page_objcgs_check(struct page *page)
 	return (struct obj_cgroup **)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
 }
 
+/*
+ * page_objcg - get the object cgroup associated with a kmem page
+ * @page: a pointer to the page struct
+ *
+ * Returns a pointer to the object cgroup associated with the kmem page,
+ * or NULL. This function assumes that the page is known to have an
+ * associated object cgroup. It's only safe to call this function
+ * against kmem pages (PageMemcgKmem() returns true).
+ */
+static inline struct obj_cgroup *page_objcg(struct page *page)
+{
+	unsigned long memcg_data = page->memcg_data;
+
+	VM_BUG_ON_PAGE(PageSlab(page), page);
+	VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_OBJCGS, page);
+	VM_BUG_ON_PAGE(!(memcg_data & MEMCG_DATA_KMEM), page);
+
+	return (struct obj_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
+}
 #else
 static inline struct obj_cgroup **page_objcgs(struct page *page)
 {
@@ -511,6 +553,11 @@ static inline struct obj_cgroup **page_objcgs_check(struct page *page)
 {
 	return NULL;
 }
+
+static inline struct obj_cgroup *page_objcg(struct page *page)
+{
+	return NULL;
+}
 #endif
 
 static __always_inline bool memcg_stat_item_in_bytes(int idx)
@@ -729,18 +776,6 @@ static inline void obj_cgroup_put(struct obj_cgroup *objcg)
 	percpu_ref_put(&objcg->refcnt);
 }
 
-/*
- * After the initialization objcg->memcg is always pointing at
- * a valid memcg, but can be atomically swapped to the parent memcg.
- *
- * The caller must ensure that the returned memcg won't be released:
- * e.g. acquire the rcu_read_lock or css_set_lock.
- */
-static inline struct mem_cgroup *obj_cgroup_memcg(struct obj_cgroup *objcg)
-{
-	return READ_ONCE(objcg->memcg);
-}
-
 static inline void mem_cgroup_put(struct mem_cgroup *memcg)
 {
 	if (memcg)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index e1dc73ceb98a..38376f9d6659 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -859,15 +859,26 @@ void __mod_lruvec_page_state(struct page *page, enum node_stat_item idx,
 	pg_data_t *pgdat = page_pgdat(page);
 	struct lruvec *lruvec;
 
-	memcg = page_memcg_check(head);
-	/* Untracked pages have no memcg, no lruvec. Update only the node */
-	if (!memcg) {
-		__mod_node_page_state(pgdat, idx, val);
-		return;
+	if (PageMemcgKmem(head)) {
+		rcu_read_lock();
+		memcg = obj_cgroup_memcg(page_objcg(page));
+	} else {
+		memcg = page_memcg(head);
+		/*
+		 * Untracked pages have no memcg, no lruvec. Update only the
+		 * node.
+		 */
+		if (!memcg) {
+			__mod_node_page_state(pgdat, idx, val);
+			return;
+		}
 	}
 
 	lruvec = mem_cgroup_lruvec(memcg, pgdat);
 	__mod_lruvec_state(lruvec, idx, val);
+
+	if (PageMemcgKmem(head))
+		rcu_read_unlock();
 }
 EXPORT_SYMBOL(__mod_lruvec_page_state);
 
@@ -2906,6 +2917,20 @@ static void commit_charge(struct page *page, struct mem_cgroup *memcg)
 	page->memcg_data = (unsigned long)memcg;
 }
 
+static inline struct mem_cgroup *obj_cgroup_memcg_get(struct obj_cgroup *objcg)
+{
+	struct mem_cgroup *memcg;
+
+	rcu_read_lock();
+retry:
+	memcg = obj_cgroup_memcg(objcg);
+	if (unlikely(!css_tryget(&memcg->css)))
+		goto retry;
+	rcu_read_unlock();
+
+	return memcg;
+}
+
 #ifdef CONFIG_MEMCG_KMEM
 int memcg_alloc_page_obj_cgroups(struct page *page, struct kmem_cache *s,
 				 gfp_t gfp, bool new_page)
@@ -3071,15 +3096,8 @@ static int obj_cgroup_charge_pages(struct obj_cgroup *objcg, gfp_t gfp,
 	struct mem_cgroup *memcg;
 	int ret;
 
-	rcu_read_lock();
-retry:
-	memcg = obj_cgroup_memcg(objcg);
-	if (unlikely(!css_tryget(&memcg->css)))
-		goto retry;
-	rcu_read_unlock();
-
+	memcg = obj_cgroup_memcg_get(objcg);
 	ret = __memcg_kmem_charge(memcg, gfp, nr_pages);
-
 	css_put(&memcg->css);
 
 	return ret;
@@ -3144,18 +3162,18 @@ static void __memcg_kmem_uncharge(struct mem_cgroup *memcg, unsigned int nr_pages)
  */
 int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order)
 {
-	struct mem_cgroup *memcg;
+	struct obj_cgroup *objcg;
 	int ret = 0;
 
-	memcg = get_mem_cgroup_from_current();
-	if (memcg && !mem_cgroup_is_root(memcg)) {
-		ret = __memcg_kmem_charge(memcg, gfp, 1 << order);
+	objcg = get_obj_cgroup_from_current();
+	if (objcg) {
+		ret = obj_cgroup_charge_pages(objcg, gfp, 1 << order);
 		if (!ret) {
-			page->memcg_data = (unsigned long)memcg |
+			page->memcg_data = (unsigned long)objcg |
 				MEMCG_DATA_KMEM;
 			return 0;
 		}
-		css_put(&memcg->css);
+		obj_cgroup_put(objcg);
 	}
 	return ret;
 }
@@ -3167,17 +3185,16 @@ int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order)
  */
 void __memcg_kmem_uncharge_page(struct page *page, int order)
 {
-	struct mem_cgroup *memcg;
+	struct obj_cgroup *objcg;
 	unsigned int nr_pages = 1 << order;
 
 	if (!page_memcg_charged(page))
 		return;
 
-	memcg = page_memcg_check(page);
-	VM_BUG_ON_PAGE(mem_cgroup_is_root(memcg), page);
-	__memcg_kmem_uncharge(memcg, nr_pages);
+	objcg = page_objcg(page);
+	obj_cgroup_uncharge_pages(objcg, nr_pages);
 	page->memcg_data = 0;
-	css_put(&memcg->css);
+	obj_cgroup_put(objcg);
 }
 
 static bool consume_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes)
@@ -6806,11 +6823,23 @@ static inline void uncharge_gather_clear(struct uncharge_gather *ug)
 
 static void uncharge_batch(const struct uncharge_gather *ug)
 {
 	unsigned long flags;
+	unsigned long nr_pages;
 
-	if (!mem_cgroup_is_root(ug->memcg)) {
-		page_counter_uncharge(&ug->memcg->memory, ug->nr_pages);
+	/*
+	 * The kmem pages can be reparented to the root memcg, in
+	 * order to prevent the memory counter of the root memcg from
+	 * increasing indefinitely. We should decrease the memory
+	 * counter when uncharging.
+	 */
+	if (mem_cgroup_is_root(ug->memcg))
+		nr_pages = ug->nr_kmem;
+	else
+		nr_pages = ug->nr_pages;
+
+	if (nr_pages) {
+		page_counter_uncharge(&ug->memcg->memory, nr_pages);
 		if (do_memsw_account())
-			page_counter_uncharge(&ug->memcg->memsw, ug->nr_pages);
+			page_counter_uncharge(&ug->memcg->memsw, nr_pages);
 		if (!cgroup_subsys_on_dfl(memory_cgrp_subsys) && ug->nr_kmem)
 			page_counter_uncharge(&ug->memcg->kmem, ug->nr_kmem);
 		memcg_oom_recover(ug->memcg);
 	}
@@ -6828,7 +6857,7 @@ static void uncharge_batch(const struct uncharge_gather *ug)
 
 static void uncharge_page(struct page *page, struct uncharge_gather *ug)
 {
-	unsigned long nr_pages;
+	unsigned long nr_pages, nr_kmem;
 	struct mem_cgroup *memcg;
 
 	VM_BUG_ON_PAGE(PageLRU(page), page);
@@ -6836,34 +6865,44 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
 	if (!page_memcg_charged(page))
 		return;
 
+	nr_pages = compound_nr(page);
 	/*
 	 * Nobody should be changing or seriously looking at
-	 * page memcg at this point, we have fully exclusive
-	 * access to the page.
+	 * page memcg or objcg at this point, we have fully
+	 * exclusive access to the page.
 	 */
-	memcg = page_memcg_check(page);
+	if (PageMemcgKmem(page)) {
+		struct obj_cgroup *objcg;
+
+		objcg = page_objcg(page);
+		memcg = obj_cgroup_memcg_get(objcg);
+
+		page->memcg_data = 0;
+		obj_cgroup_put(objcg);
+		nr_kmem = nr_pages;
+	} else {
+		memcg = page_memcg(page);
+		page->memcg_data = 0;
+		nr_kmem = 0;
+	}
+
 	if (ug->memcg != memcg) {
 		if (ug->memcg) {
 			uncharge_batch(ug);
 			uncharge_gather_clear(ug);
 		}
 		ug->memcg = memcg;
+		ug->dummy_page = page;
 
 		/* pairs with css_put in uncharge_batch */
 		css_get(&ug->memcg->css);
 	}
 
-	nr_pages = compound_nr(page);
 	ug->nr_pages += nr_pages;
+	ug->nr_kmem += nr_kmem;
+	ug->pgpgout += !nr_kmem;
 
-	if (PageMemcgKmem(page))
-		ug->nr_kmem += nr_pages;
-	else
-		ug->pgpgout++;
-
-	ug->dummy_page = page;
-	page->memcg_data = 0;
-	css_put(&ug->memcg->css);
+	css_put(&memcg->css);
 }
 
 /**
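Since objcg->memcg can be swapped to the parent memcg during reparenting, a
reader of a kmem page's memcg must hold the RCU read lock for as long as it
uses the pointer, or pin the memcg with css_tryget() as
obj_cgroup_memcg_get() does. A minimal sketch (mirroring the pattern used
in __mod_lruvec_page_state() above; illustrative only):

	rcu_read_lock();
	memcg = obj_cgroup_memcg(page_objcg(page));
	/* memcg is only stable inside this RCU read-side section */
	rcu_read_unlock();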
From patchwork Tue Mar 9 10:07:17 2021
From: Muchun Song
Subject: [PATCH v3 4/4] mm: memcontrol: move PageMemcgKmem to the scope of CONFIG_MEMCG_KMEM
Date: Tue, 9 Mar 2021 18:07:17 +0800
Message-Id: <20210309100717.253-5-songmuchun@bytedance.com>
In-Reply-To: <20210309100717.253-1-songmuchun@bytedance.com>
A page can only be marked as kmem when CONFIG_MEMCG_KMEM is enabled, so
move PageMemcgKmem() into the scope of CONFIG_MEMCG_KMEM. As a bonus,
some code can be compiled out on !CONFIG_MEMCG_KMEM builds.

Signed-off-by: Muchun Song
Acked-by: Roman Gushchin
Reviewed-by: Shakeel Butt
Acked-by: Johannes Weiner
---
 include/linux/memcontrol.h | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 07c449af9c0f..d3ca8c8e7fc3 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -469,6 +469,7 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page)
 	return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
 }
 
+#ifdef CONFIG_MEMCG_KMEM
 /*
  * PageMemcgKmem - check if the page has MemcgKmem flag set
  * @page: a pointer to the page struct
@@ -483,7 +484,6 @@ static inline bool PageMemcgKmem(struct page *page)
 	return page->memcg_data & MEMCG_DATA_KMEM;
 }
 
-#ifdef CONFIG_MEMCG_KMEM
 /*
  * page_objcgs - get the object cgroups vector associated with a page
  * @page: a pointer to the page struct
@@ -544,6 +544,11 @@ static inline struct obj_cgroup *page_objcg(struct page *page)
 	return (struct obj_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
 }
 #else
+static inline bool PageMemcgKmem(struct page *page)
+{
+	return false;
+}
+
 static inline struct obj_cgroup **page_objcgs(struct page *page)
 {
 	return NULL;
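As an illustration of the compile-out effect (hypothetical caller, not from
the patch): with CONFIG_MEMCG_KMEM disabled, PageMemcgKmem() is the
constant-false stub added above, so the compiler can drop kmem-only
branches entirely:

	if (PageMemcgKmem(page))		/* constant false on !CONFIG_MEMCG_KMEM */
		obj_cgroup_put(page_objcg(page));	/* whole branch compiled out */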