From patchwork Wed Jun 30 04:00:17 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12351177
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org, cgroups@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", Johannes Weiner, Michal Hocko, Vladimir Davydov
Subject: [PATCH v3 01/18] mm: Add folio_nid()
Date: Wed, 30 Jun 2021 05:00:17 +0100
Message-Id: <20210630040034.1155892-2-willy@infradead.org>
In-Reply-To: <20210630040034.1155892-1-willy@infradead.org>
References: <20210630040034.1155892-1-willy@infradead.org>

This is the folio equivalent of page_to_nid().

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
---
 include/linux/mm.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 7ad3c8a115ae..48d2ccd9f8c3 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1429,6 +1429,11 @@ static inline int page_to_nid(const struct page *page)
 }
 #endif
 
+static inline int folio_nid(const struct folio *folio)
+{
+	return page_to_nid(&folio->page);
+}
+
 #ifdef CONFIG_NUMA_BALANCING
 static inline int cpu_pid_to_cpupid(int cpu, int pid)
 {
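The new helper makes call sites that only need a folio's NUMA node self-describing. A minimal illustrative sketch of a caller, assuming kernel context; demo_alloc_near_folio() is a hypothetical name, not part of this series:

#include <linux/mm.h>
#include <linux/slab.h>

/* Allocate scratch memory on the same NUMA node that backs @folio.
 * Before this patch a caller would open-code page_to_nid(&folio->page). */
static void *demo_alloc_near_folio(struct folio *folio, size_t size)
{
	int nid = folio_nid(folio);

	return kmalloc_node(size, GFP_KERNEL, nid);
}
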
From patchwork Wed Jun 30 04:00:18 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12351179
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org, cgroups@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", Johannes Weiner, Michal Hocko, Vladimir Davydov, Christoph Hellwig, Michal Hocko
Subject: [PATCH v3 02/18] mm/memcg: Remove 'page' parameter to mem_cgroup_charge_statistics()
Date: Wed, 30 Jun 2021 05:00:18 +0100
Message-Id: <20210630040034.1155892-3-willy@infradead.org>
In-Reply-To: <20210630040034.1155892-1-willy@infradead.org>
References: <20210630040034.1155892-1-willy@infradead.org>

The last use of 'page' was removed by commit 468c398233da ("mm:
memcontrol: switch to native NR_ANON_THPS counter"), so we can now
remove the parameter from the function.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Acked-by: Michal Hocko
Acked-by: Johannes Weiner
---
 mm/memcontrol.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 4ee243ce6135..25cad0fb7d4e 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -826,7 +826,6 @@ static unsigned long memcg_events_local(struct mem_cgroup *memcg, int event)
 }
 
 static void mem_cgroup_charge_statistics(struct mem_cgroup *memcg,
-					 struct page *page,
 					 int nr_pages)
 {
 	/* pagein of a big page is an event. So, ignore page size */
@@ -5687,9 +5686,9 @@ static int mem_cgroup_move_account(struct page *page,
 	ret = 0;
 
 	local_irq_disable();
-	mem_cgroup_charge_statistics(to, page, nr_pages);
+	mem_cgroup_charge_statistics(to, nr_pages);
 	memcg_check_events(to, page);
-	mem_cgroup_charge_statistics(from, page, -nr_pages);
+	mem_cgroup_charge_statistics(from, -nr_pages);
 	memcg_check_events(from, page);
 	local_irq_enable();
 out_unlock:
@@ -6710,7 +6709,7 @@ static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
 	commit_charge(page, memcg);
 
 	local_irq_disable();
-	mem_cgroup_charge_statistics(memcg, page, nr_pages);
+	mem_cgroup_charge_statistics(memcg, nr_pages);
 	memcg_check_events(memcg, page);
 	local_irq_enable();
 out:
@@ -7001,7 +7000,7 @@ void mem_cgroup_migrate(struct page *oldpage, struct page *newpage)
 	commit_charge(newpage, memcg);
 
 	local_irq_save(flags);
-	mem_cgroup_charge_statistics(memcg, newpage, nr_pages);
+	mem_cgroup_charge_statistics(memcg, nr_pages);
 	memcg_check_events(memcg, newpage);
 	local_irq_restore(flags);
 }
@@ -7231,7 +7230,7 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
 	 * only synchronisation we have for updating the per-CPU variables.
 	 */
 	VM_BUG_ON(!irqs_disabled());
-	mem_cgroup_charge_statistics(memcg, page, -nr_entries);
+	mem_cgroup_charge_statistics(memcg, -nr_entries);
 	memcg_check_events(memcg, page);
 
 	css_put(&memcg->css);
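With the page argument gone, the remaining parameter carries the direction through its sign: callers pass a positive count to charge and a negative one to uncharge, as the move_account and swapout hunks above show. Condensed illustration of that convention (to, from and nr_pages stand for whatever the real caller already holds):

	local_irq_disable();
	mem_cgroup_charge_statistics(to, nr_pages);	/* charge destination */
	mem_cgroup_charge_statistics(from, -nr_pages);	/* uncharge source */
	local_irq_enable();
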
From patchwork Wed Jun 30 04:00:19 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12351181
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org, cgroups@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", Johannes Weiner, Michal Hocko, Vladimir Davydov
Subject: [PATCH v3 03/18] mm/memcg: Use the node id in mem_cgroup_update_tree()
Date: Wed, 30 Jun 2021 05:00:19 +0100
Message-Id: <20210630040034.1155892-4-willy@infradead.org>
In-Reply-To: <20210630040034.1155892-1-willy@infradead.org>
References: <20210630040034.1155892-1-willy@infradead.org>

By using the node id in mem_cgroup_update_tree(), we can delete
soft_limit_tree_from_page() and mem_cgroup_page_nodeinfo(). Saves 42
bytes of kernel text on my config.

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Michal Hocko
Acked-by: Johannes Weiner
Reviewed-by: Christoph Hellwig
---
 mm/memcontrol.c | 24 ++++--------------------
 1 file changed, 4 insertions(+), 20 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 25cad0fb7d4e..29b28a050707 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -446,28 +446,12 @@ ino_t page_cgroup_ino(struct page *page)
 	return ino;
 }
 
-static struct mem_cgroup_per_node *
-mem_cgroup_page_nodeinfo(struct mem_cgroup *memcg, struct page *page)
-{
-	int nid = page_to_nid(page);
-
-	return memcg->nodeinfo[nid];
-}
-
 static struct mem_cgroup_tree_per_node *
 soft_limit_tree_node(int nid)
 {
 	return soft_limit_tree.rb_tree_per_node[nid];
 }
 
-static struct mem_cgroup_tree_per_node *
-soft_limit_tree_from_page(struct page *page)
-{
-	int nid = page_to_nid(page);
-
-	return soft_limit_tree.rb_tree_per_node[nid];
-}
-
 static void __mem_cgroup_insert_exceeded(struct mem_cgroup_per_node *mz,
 					 struct mem_cgroup_tree_per_node *mctz,
 					 unsigned long new_usage_in_excess)
@@ -538,13 +522,13 @@ static unsigned long soft_limit_excess(struct mem_cgroup *memcg)
 	return excess;
 }
 
-static void mem_cgroup_update_tree(struct mem_cgroup *memcg, struct page *page)
+static void mem_cgroup_update_tree(struct mem_cgroup *memcg, int nid)
 {
 	unsigned long excess;
 	struct mem_cgroup_per_node *mz;
 	struct mem_cgroup_tree_per_node *mctz;
 
-	mctz = soft_limit_tree_from_page(page);
+	mctz = soft_limit_tree_node(nid);
 	if (!mctz)
 		return;
 	/*
@@ -552,7 +536,7 @@ static void mem_cgroup_update_tree(struct mem_cgroup *memcg, struct page *page)
 	 * because their event counter is not touched.
 	 */
 	for (; memcg; memcg = parent_mem_cgroup(memcg)) {
-		mz = mem_cgroup_page_nodeinfo(memcg, page);
+		mz = memcg->nodeinfo[nid];
 		excess = soft_limit_excess(memcg);
 		/*
 		 * We have to update the tree if mz is on RB-tree or
@@ -879,7 +863,7 @@ static void memcg_check_events(struct mem_cgroup *memcg, struct page *page)
 						MEM_CGROUP_TARGET_SOFTLIMIT);
 		mem_cgroup_threshold(memcg);
 		if (unlikely(do_softlimit))
-			mem_cgroup_update_tree(memcg, page);
+			mem_cgroup_update_tree(memcg, page_to_nid(page));
 	}
 }
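The transformation generalises: when an internal helper only ever needs the node, it can take an int nid and let the page-facing boundary translate exactly once. A hedged sketch of the pattern; the demo_* names are hypothetical, not part of this series:

/* The helper depends only on the node id, not on any struct page ... */
static struct mem_cgroup_per_node *demo_nodeinfo(struct mem_cgroup *memcg,
						 int nid)
{
	return memcg->nodeinfo[nid];
}

/* ... and the boundary function resolves the node a single time. */
static struct mem_cgroup_per_node *demo_page_nodeinfo(struct mem_cgroup *memcg,
						      struct page *page)
{
	return demo_nodeinfo(memcg, page_to_nid(page));
}
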
Wilcox (Oracle)" , Johannes Weiner , Michal Hocko , Vladimir Davydov Subject: [PATCH v3 04/18] mm/memcg: Remove soft_limit_tree_node() Date: Wed, 30 Jun 2021 05:00:20 +0100 Message-Id: <20210630040034.1155892-5-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210630040034.1155892-1-willy@infradead.org> References: <20210630040034.1155892-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf13.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b="Qi1z/+jk"; spf=none (imf13.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: 858281000203 X-Stat-Signature: h8pk9neye3a3456z91spmnoxjmtck8mw X-HE-Tag: 1625025856-457481 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Opencode this one-line function in its three callers. Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Michal Hocko Acked-by: Johannes Weiner Reviewed-by: Christoph Hellwig --- mm/memcontrol.c | 12 +++--------- 1 file changed, 3 insertions(+), 9 deletions(-) diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 29b28a050707..29fdb70dca42 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -446,12 +446,6 @@ ino_t page_cgroup_ino(struct page *page) return ino; } -static struct mem_cgroup_tree_per_node * -soft_limit_tree_node(int nid) -{ - return soft_limit_tree.rb_tree_per_node[nid]; -} - static void __mem_cgroup_insert_exceeded(struct mem_cgroup_per_node *mz, struct mem_cgroup_tree_per_node *mctz, unsigned long new_usage_in_excess) @@ -528,7 +522,7 @@ static void mem_cgroup_update_tree(struct mem_cgroup *memcg, int nid) struct mem_cgroup_per_node *mz; struct mem_cgroup_tree_per_node *mctz; - mctz = soft_limit_tree_node(nid); + mctz = soft_limit_tree.rb_tree_per_node[nid]; if (!mctz) return; /* @@ -567,7 +561,7 @@ static void mem_cgroup_remove_from_trees(struct mem_cgroup *memcg) for_each_node(nid) { mz = memcg->nodeinfo[nid]; - mctz = soft_limit_tree_node(nid); + mctz = soft_limit_tree.rb_tree_per_node[nid]; if (mctz) mem_cgroup_remove_exceeded(mz, mctz); } @@ -3415,7 +3409,7 @@ unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order, if (order > 0) return 0; - mctz = soft_limit_tree_node(pgdat->node_id); + mctz = soft_limit_tree.rb_tree_per_node[pgdat->node_id]; /* * Do not even bother to check the largest node if the root From patchwork Wed Jun 30 04:00:21 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12351185 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 41BC7C11F65 for ; Wed, 30 Jun 2021 04:04:53 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id DD723613CA for ; Wed, 30 Jun 2021 04:04:52 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org DD723613CA 
From patchwork Wed Jun 30 04:00:21 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12351185
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org, cgroups@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", Johannes Weiner, Michal Hocko, Vladimir Davydov
Subject: [PATCH v3 05/18] mm/memcg: Convert memcg_check_events to take a node ID
Date: Wed, 30 Jun 2021 05:00:21 +0100
Message-Id: <20210630040034.1155892-6-willy@infradead.org>
In-Reply-To: <20210630040034.1155892-1-willy@infradead.org>
References: <20210630040034.1155892-1-willy@infradead.org>

memcg_check_events only uses the page's nid, so call page_to_nid in the
callers to make the folio conversion easier.

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Michal Hocko
Reviewed-by: Christoph Hellwig
---
 mm/memcontrol.c | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 29fdb70dca42..5d143d46a8a4 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -846,7 +846,7 @@ static bool mem_cgroup_event_ratelimit(struct mem_cgroup *memcg,
  * Check events in order.
  *
  */
-static void memcg_check_events(struct mem_cgroup *memcg, struct page *page)
+static void memcg_check_events(struct mem_cgroup *memcg, int nid)
 {
 	/* threshold event is triggered in finer grain than soft limit */
 	if (unlikely(mem_cgroup_event_ratelimit(memcg,
@@ -857,7 +857,7 @@ static void memcg_check_events(struct mem_cgroup *memcg, struct page *page)
 						MEM_CGROUP_TARGET_SOFTLIMIT);
 		mem_cgroup_threshold(memcg);
 		if (unlikely(do_softlimit))
-			mem_cgroup_update_tree(memcg, page_to_nid(page));
+			mem_cgroup_update_tree(memcg, nid);
 	}
 }
 
@@ -5573,7 +5573,7 @@ static int mem_cgroup_move_account(struct page *page,
 	struct lruvec *from_vec, *to_vec;
 	struct pglist_data *pgdat;
 	unsigned int nr_pages = compound ? thp_nr_pages(page) : 1;
-	int ret;
+	int nid, ret;
 
 	VM_BUG_ON(from == to);
 	VM_BUG_ON_PAGE(PageLRU(page), page);
@@ -5662,12 +5662,13 @@ static int mem_cgroup_move_account(struct page *page,
 	__unlock_page_memcg(from);
 
 	ret = 0;
+	nid = page_to_nid(page);
 
 	local_irq_disable();
 	mem_cgroup_charge_statistics(to, nr_pages);
-	memcg_check_events(to, page);
+	memcg_check_events(to, nid);
 	mem_cgroup_charge_statistics(from, -nr_pages);
-	memcg_check_events(from, page);
+	memcg_check_events(from, nid);
 	local_irq_enable();
 out_unlock:
 	unlock_page(page);
@@ -6688,7 +6689,7 @@ static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
 
 	local_irq_disable();
 	mem_cgroup_charge_statistics(memcg, nr_pages);
-	memcg_check_events(memcg, page);
+	memcg_check_events(memcg, page_to_nid(page));
 	local_irq_enable();
 out:
 	return ret;
@@ -6796,7 +6797,7 @@ struct uncharge_gather {
 	unsigned long nr_memory;
 	unsigned long pgpgout;
 	unsigned long nr_kmem;
-	struct page *dummy_page;
+	int nid;
 };
 
 static inline void uncharge_gather_clear(struct uncharge_gather *ug)
@@ -6820,7 +6821,7 @@ static void uncharge_batch(const struct uncharge_gather *ug)
 	local_irq_save(flags);
 	__count_memcg_events(ug->memcg, PGPGOUT, ug->pgpgout);
 	__this_cpu_add(ug->memcg->vmstats_percpu->nr_page_events, ug->nr_memory);
-	memcg_check_events(ug->memcg, ug->dummy_page);
+	memcg_check_events(ug->memcg, ug->nid);
 	local_irq_restore(flags);
 
 	/* drop reference from uncharge_page */
@@ -6861,7 +6862,7 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
 		uncharge_gather_clear(ug);
 	}
 	ug->memcg = memcg;
-	ug->dummy_page = page;
+	ug->nid = page_to_nid(page);
 
 	/* pairs with css_put in uncharge_batch */
 	css_get(&memcg->css);
@@ -6979,7 +6980,7 @@ void mem_cgroup_migrate(struct page *oldpage, struct page *newpage)
 
 	local_irq_save(flags);
 	mem_cgroup_charge_statistics(memcg, nr_pages);
-	memcg_check_events(memcg, newpage);
+	memcg_check_events(memcg, page_to_nid(newpage));
 	local_irq_restore(flags);
 }
 
@@ -7209,7 +7210,7 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
 	 */
 	VM_BUG_ON(!irqs_disabled());
 	mem_cgroup_charge_statistics(memcg, -nr_entries);
-	memcg_check_events(memcg, page);
+	memcg_check_events(memcg, page_to_nid(page));
 
 	css_put(&memcg->css);
 }
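One detail in the move_account hunk is worth calling out: the node id is resolved once, before interrupts are disabled, and then reused for both the source and destination memcg. Condensed from the diff above (illustrative only, no code beyond what the patch adds):

	nid = page_to_nid(page);	/* resolve the node once, up front */

	local_irq_disable();
	mem_cgroup_charge_statistics(to, nr_pages);
	memcg_check_events(to, nid);	/* no struct page needed any more */
	mem_cgroup_charge_statistics(from, -nr_pages);
	memcg_check_events(from, nid);
	local_irq_enable();
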
From patchwork Wed Jun 30 04:00:22 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12351187
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org, cgroups@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", Johannes Weiner, Michal Hocko, Vladimir Davydov
Subject: [PATCH v3 06/18] mm/memcg: Add folio_memcg() and related functions
Date: Wed, 30 Jun 2021 05:00:22 +0100
Message-Id: <20210630040034.1155892-7-willy@infradead.org>
In-Reply-To: <20210630040034.1155892-1-willy@infradead.org>
References: <20210630040034.1155892-1-willy@infradead.org>

memcg information is only stored in the head page, so the memcg
subsystem needs to assure that all accesses are to the head page.
The first step is converting page_memcg() to folio_memcg().

Retain page_memcg() as a wrapper around folio_memcg() and PageMemcgKmem()
as a wrapper around folio_memcg_kmem() but convert __page_memcg() to
__folio_memcg() and __page_objcg() to __folio_objcg().

Signed-off-by: Matthew Wilcox (Oracle)
Reported-by: kernel test robot
Reviewed-by: Christoph Hellwig
---
 include/linux/memcontrol.h | 105 +++++++++++++++++++++----------------
 mm/memcontrol.c            |  21 ++++----
 2 files changed, 73 insertions(+), 53 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 6d66037be646..92689fb2dab4 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -372,6 +372,7 @@ enum page_memcg_data_flags {
 #define MEMCG_DATA_FLAGS_MASK (__NR_MEMCG_DATA_FLAGS - 1)
 
 static inline bool PageMemcgKmem(struct page *page);
+static inline bool folio_memcg_kmem(struct folio *folio);
 
 /*
  * After the initialization objcg->memcg is always pointing at
@@ -386,73 +387,78 @@ static inline struct mem_cgroup *obj_cgroup_memcg(struct obj_cgroup *objcg)
 }
 
 /*
- * __page_memcg - get the memory cgroup associated with a non-kmem page
- * @page: a pointer to the page struct
+ * __folio_memcg - Get the memory cgroup associated with a non-kmem folio
+ * @folio: Pointer to the folio.
  *
- * Returns a pointer to the memory cgroup associated with the page,
- * or NULL. This function assumes that the page is known to have a
+ * Returns a pointer to the memory cgroup associated with the folio,
+ * or NULL. This function assumes that the folio is known to have a
  * proper memory cgroup pointer. It's not safe to call this function
- * against some type of pages, e.g. slab pages or ex-slab pages or
- * kmem pages.
+ * against some type of folios, e.g. slab folios or ex-slab folios or
+ * kmem folios.
  */
-static inline struct mem_cgroup *__page_memcg(struct page *page)
+static inline struct mem_cgroup *__folio_memcg(struct folio *folio)
 {
-	unsigned long memcg_data = page->memcg_data;
+	unsigned long memcg_data = folio->memcg_data;
 
-	VM_BUG_ON_PAGE(PageSlab(page), page);
-	VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_OBJCGS, page);
-	VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_KMEM, page);
+	VM_BUG_ON_FOLIO(folio_slab(folio), folio);
+	VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJCGS, folio);
+	VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_KMEM, folio);
 
 	return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
 }
 
 /*
- * __page_objcg - get the object cgroup associated with a kmem page
- * @page: a pointer to the page struct
+ * __folio_objcg - get the object cgroup associated with a kmem folio.
+ * @folio: Pointer to the folio.
  *
- * Returns a pointer to the object cgroup associated with the page,
- * or NULL. This function assumes that the page is known to have a
+ * Returns a pointer to the object cgroup associated with the folio,
+ * or NULL. This function assumes that the folio is known to have a
  * proper object cgroup pointer. It's not safe to call this function
- * against some type of pages, e.g. slab pages or ex-slab pages or
- * LRU pages.
+ * against some type of folios, e.g. slab folios or ex-slab folios or
+ * LRU folios.
  */
-static inline struct obj_cgroup *__page_objcg(struct page *page)
+static inline struct obj_cgroup *__folio_objcg(struct folio *folio)
 {
-	unsigned long memcg_data = page->memcg_data;
+	unsigned long memcg_data = folio->memcg_data;
 
-	VM_BUG_ON_PAGE(PageSlab(page), page);
-	VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_OBJCGS, page);
-	VM_BUG_ON_PAGE(!(memcg_data & MEMCG_DATA_KMEM), page);
+	VM_BUG_ON_FOLIO(folio_slab(folio), folio);
+	VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJCGS, folio);
+	VM_BUG_ON_FOLIO(!(memcg_data & MEMCG_DATA_KMEM), folio);
 
 	return (struct obj_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
 }
 
 /*
- * page_memcg - get the memory cgroup associated with a page
- * @page: a pointer to the page struct
+ * folio_memcg - Get the memory cgroup associated with a folio.
+ * @folio: Pointer to the folio.
  *
- * Returns a pointer to the memory cgroup associated with the page,
- * or NULL. This function assumes that the page is known to have a
+ * Returns a pointer to the memory cgroup associated with the folio,
+ * or NULL. This function assumes that the folio is known to have a
  * proper memory cgroup pointer. It's not safe to call this function
- * against some type of pages, e.g. slab pages or ex-slab pages.
+ * against some type of folios, e.g. slab folios or ex-slab folios.
  *
- * For a non-kmem page any of the following ensures page and memcg binding
+ * For a non-kmem folio any of the following ensures folio and memcg binding
  * stability:
  *
- * - the page lock
+ * - the folio lock
  * - LRU isolation
 * - lock_page_memcg()
 * - exclusive reference
  *
- * For a kmem page a caller should hold an rcu read lock to protect memcg
- * associated with a kmem page from being released.
+ * For a kmem folio a caller should hold an rcu read lock to protect memcg
+ * associated with a kmem folio from being released.
  */
-static inline struct mem_cgroup *page_memcg(struct page *page)
+static inline struct mem_cgroup *folio_memcg(struct folio *folio)
 {
-	if (PageMemcgKmem(page))
-		return obj_cgroup_memcg(__page_objcg(page));
+	if (folio_memcg_kmem(folio))
+		return obj_cgroup_memcg(__folio_objcg(folio));
 	else
-		return __page_memcg(page);
+		return __folio_memcg(folio);
+}
+
+static inline struct mem_cgroup *page_memcg(struct page *page)
+{
+	return folio_memcg(page_folio(page));
 }
 
 /*
@@ -525,17 +531,18 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page)
 
 #ifdef CONFIG_MEMCG_KMEM
 /*
- * PageMemcgKmem - check if the page has MemcgKmem flag set
- * @page: a pointer to the page struct
+ * folio_memcg_kmem - Check if the folio has the memcg_kmem flag set.
+ * @folio: Pointer to the folio.
  *
- * Checks if the page has MemcgKmem flag set. The caller must ensure that
- * the page has an associated memory cgroup. It's not safe to call this function
- * against some types of pages, e.g. slab pages.
+ * Checks if the folio has MemcgKmem flag set. The caller must ensure
+ * that the folio has an associated memory cgroup. It's not safe to call
+ * this function against some types of folios, e.g. slab folios.
  */
-static inline bool PageMemcgKmem(struct page *page)
+static inline bool folio_memcg_kmem(struct folio *folio)
 {
-	VM_BUG_ON_PAGE(page->memcg_data & MEMCG_DATA_OBJCGS, page);
-	return page->memcg_data & MEMCG_DATA_KMEM;
+	VM_BUG_ON_PGFLAGS(PageTail(&folio->page), &folio->page);
+	VM_BUG_ON_FOLIO(folio->memcg_data & MEMCG_DATA_OBJCGS, folio);
+	return folio->memcg_data & MEMCG_DATA_KMEM;
 }
 
 /*
@@ -579,7 +586,7 @@ static inline struct obj_cgroup **page_objcgs_check(struct page *page)
 }
 
 #else
-static inline bool PageMemcgKmem(struct page *page)
+static inline bool folio_memcg_kmem(struct folio *folio)
 {
 	return false;
 }
@@ -595,6 +602,11 @@ static inline struct obj_cgroup **page_objcgs_check(struct page *page)
 }
 #endif
 
+static inline bool PageMemcgKmem(struct page *page)
+{
+	return folio_memcg_kmem(page_folio(page));
+}
+
 static __always_inline bool memcg_stat_item_in_bytes(int idx)
 {
 	if (idx == MEMCG_PERCPU_B)
@@ -1122,6 +1134,11 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page)
 	return NULL;
 }
 
+static inline bool folio_memcg_kmem(struct folio *folio)
+{
+	return false;
+}
+
 static inline bool PageMemcgKmem(struct page *page)
 {
 	return false;
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 5d143d46a8a4..f369bbaf584b 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3045,15 +3045,16 @@ int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order)
  */
 void __memcg_kmem_uncharge_page(struct page *page, int order)
 {
+	struct folio *folio = page_folio(page);
 	struct obj_cgroup *objcg;
 	unsigned int nr_pages = 1 << order;
 
-	if (!PageMemcgKmem(page))
+	if (!folio_memcg_kmem(folio))
 		return;
 
-	objcg = __page_objcg(page);
+	objcg = __folio_objcg(folio);
 	obj_cgroup_uncharge_pages(objcg, nr_pages);
-	page->memcg_data = 0;
+	folio->memcg_data = 0;
 	obj_cgroup_put(objcg);
 }
 
@@ -3285,17 +3286,18 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size)
  */
 void split_page_memcg(struct page *head, unsigned int nr)
 {
-	struct mem_cgroup *memcg = page_memcg(head);
+	struct folio *folio = page_folio(head);
+	struct mem_cgroup *memcg = folio_memcg(folio);
 	int i;
 
 	if (mem_cgroup_disabled() || !memcg)
 		return;
 
 	for (i = 1; i < nr; i++)
-		head[i].memcg_data = head->memcg_data;
+		folio_page(folio, i)->memcg_data = folio->memcg_data;
 
-	if (PageMemcgKmem(head))
-		obj_cgroup_get_many(__page_objcg(head), nr - 1);
+	if (folio_memcg_kmem(folio))
+		obj_cgroup_get_many(__folio_objcg(folio), nr - 1);
 	else
 		css_get_many(&memcg->css, nr - 1);
 }
@@ -6830,6 +6832,7 @@ static void uncharge_batch(const struct uncharge_gather *ug)
 
 static void uncharge_page(struct page *page, struct uncharge_gather *ug)
 {
+	struct folio *folio = page_folio(page);
 	unsigned long nr_pages;
 	struct mem_cgroup *memcg;
 	struct obj_cgroup *objcg;
@@ -6843,14 +6846,14 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
 	 * exclusive access to the page.
 	 */
 	if (use_objcg) {
-		objcg = __page_objcg(page);
+		objcg = __folio_objcg(folio);
 		/*
 		 * This get matches the put at the end of the function and
 		 * kmem pages do not hold memcg references anymore.
 		 */
 		memcg = get_mem_cgroup_from_objcg(objcg);
 	} else {
-		memcg = __page_memcg(page);
+		memcg = __folio_memcg(folio);
 	}
 
 	if (!memcg)
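The wrapper strategy keeps every existing caller working while letting new code skip the head-page lookup. An illustrative sketch, assuming kernel context; the demo_* names are hypothetical, not part of this series:

#include <linux/memcontrol.h>

/* Legacy path: a raw struct page may be a tail page, so page_folio()
 * resolves it to the head before memcg_data is read. */
static struct mem_cgroup *demo_memcg_of_page(struct page *page)
{
	return page_memcg(page);	/* == folio_memcg(page_folio(page)) */
}

/* Folio path: the type already guarantees a head page, so no
 * compound_head() resolution is needed at all. */
static struct mem_cgroup *demo_memcg_of_folio(struct folio *folio)
{
	return folio_memcg(folio);
}
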
From patchwork Wed Jun 30 04:00:23 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12351189
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org, cgroups@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", Johannes Weiner, Michal Hocko, Vladimir Davydov, Christoph Hellwig, Michal Hocko
Subject: [PATCH v3 07/18] mm/memcg: Convert commit_charge() to take a folio
Date: Wed, 30 Jun 2021 05:00:23 +0100
Message-Id: <20210630040034.1155892-8-willy@infradead.org>
In-Reply-To: <20210630040034.1155892-1-willy@infradead.org>
References: <20210630040034.1155892-1-willy@infradead.org>

The memcg_data is only set on the head page, so enforce that by typing
it as a folio.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Acked-by: Michal Hocko
---
 mm/memcontrol.c | 27 +++++++++++++--------------
 1 file changed, 13 insertions(+), 14 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index f369bbaf584b..727bd578ca7d 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2764,9 +2764,9 @@ static void cancel_charge(struct mem_cgroup *memcg, unsigned int nr_pages)
 }
 #endif
 
-static void commit_charge(struct page *page, struct mem_cgroup *memcg)
+static void commit_charge(struct folio *folio, struct mem_cgroup *memcg)
 {
-	VM_BUG_ON_PAGE(page_memcg(page), page);
+	VM_BUG_ON_FOLIO(folio_memcg(folio), folio);
 	/*
 	 * Any of the following ensures page's memcg stability:
 	 *
@@ -2775,7 +2775,7 @@ static void commit_charge(struct page *page, struct mem_cgroup *memcg)
 	 * - lock_page_memcg()
 	 * - exclusive reference
 	 */
-	page->memcg_data = (unsigned long)memcg;
+	folio->memcg_data = (unsigned long)memcg;
 }
 
 static struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg)
@@ -6679,7 +6679,8 @@ void mem_cgroup_calculate_protection(struct mem_cgroup *root,
 static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
 			       gfp_t gfp)
 {
-	unsigned int nr_pages = thp_nr_pages(page);
+	struct folio *folio = page_folio(page);
+	unsigned int nr_pages = folio_nr_pages(folio);
 	int ret;
 
 	ret = try_charge(memcg, gfp, nr_pages);
@@ -6687,7 +6688,7 @@ static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
 		goto out;
 
 	css_get(&memcg->css);
-	commit_charge(page, memcg);
+	commit_charge(folio, memcg);
 
 	local_irq_disable();
 	mem_cgroup_charge_statistics(memcg, nr_pages);
@@ -6947,21 +6948,21 @@ void mem_cgroup_uncharge_list(struct list_head *page_list)
  */
 void mem_cgroup_migrate(struct page *oldpage, struct page *newpage)
 {
+	struct folio *newfolio = page_folio(newpage);
 	struct mem_cgroup *memcg;
-	unsigned int nr_pages;
+	unsigned int nr_pages = folio_nr_pages(newfolio);
 	unsigned long flags;
 
 	VM_BUG_ON_PAGE(!PageLocked(oldpage), oldpage);
-	VM_BUG_ON_PAGE(!PageLocked(newpage), newpage);
-	VM_BUG_ON_PAGE(PageAnon(oldpage) != PageAnon(newpage), newpage);
-	VM_BUG_ON_PAGE(PageTransHuge(oldpage) != PageTransHuge(newpage),
-		       newpage);
+	VM_BUG_ON_FOLIO(!folio_locked(newfolio), newfolio);
+	VM_BUG_ON_FOLIO(PageAnon(oldpage) != folio_anon(newfolio), newfolio);
+	VM_BUG_ON_FOLIO(compound_nr(oldpage) != nr_pages, newfolio);
 
 	if (mem_cgroup_disabled())
 		return;
 
 	/* Page cache replacement: new page already charged? */
-	if (page_memcg(newpage))
+	if (folio_memcg(newfolio))
 		return;
 
 	memcg = page_memcg(oldpage);
@@ -6970,8 +6971,6 @@ void mem_cgroup_migrate(struct page *oldpage, struct page *newpage)
 		return;
 
 	/* Force-charge the new page. The old one will be freed soon */
-	nr_pages = thp_nr_pages(newpage);
-
 	if (!mem_cgroup_is_root(memcg)) {
 		page_counter_charge(&memcg->memory, nr_pages);
 		if (do_memsw_account())
@@ -6979,7 +6978,7 @@ void mem_cgroup_migrate(struct page *oldpage, struct page *newpage)
 	}
 
 	css_get(&memcg->css);
-	commit_charge(newpage, memcg);
+	commit_charge(newfolio, memcg);
 
 	local_irq_save(flags);
 	mem_cgroup_charge_statistics(memcg, nr_pages);
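Typing the parameter as a folio turns the head-page rule into a compile-time property: a caller still holding a struct page must convert explicitly, exactly as __mem_cgroup_charge() now does. Condensed from the diff above (illustrative only):

	struct folio *folio = page_folio(page);	/* explicit head-page lookup */

	css_get(&memcg->css);
	commit_charge(folio, memcg);	/* a raw struct page no longer type-checks */
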
Subject: [PATCH v3 08/18] mm/memcg: Convert mem_cgroup_charge() to take a folio
Date: Wed, 30 Jun 2021 05:00:24 +0100
Message-Id: <20210630040034.1155892-9-willy@infradead.org>
In-Reply-To: <20210630040034.1155892-1-willy@infradead.org>
References: <20210630040034.1155892-1-willy@infradead.org>

Convert all callers of mem_cgroup_charge() to call page_folio() on the
page they're currently passing in. Many of them will be converted to
use folios themselves soon.

Signed-off-by: Matthew Wilcox (Oracle)
Reported-by: kernel test robot
Reviewed-by: Christoph Hellwig
---
 include/linux/memcontrol.h |  6 +++---
 kernel/events/uprobes.c    |  3 ++-
 mm/filemap.c               |  2 +-
 mm/huge_memory.c           |  2 +-
 mm/khugepaged.c            |  4 ++--
 mm/ksm.c                   |  3 ++-
 mm/memcontrol.c            | 26 +++++++++++++-------------
 mm/memory.c                |  9 +++++----
 mm/migrate.c               |  2 +-
 mm/shmem.c                 |  2 +-
 mm/userfaultfd.c           |  2 +-
 11 files changed, 32 insertions(+), 29 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 92689fb2dab4..90d48b0e3191 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -705,7 +705,7 @@ static inline bool mem_cgroup_below_min(struct mem_cgroup *memcg)
 		page_counter_read(&memcg->memory);
 }

-int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask);
+int mem_cgroup_charge(struct folio *, struct mm_struct *, gfp_t);
 int mem_cgroup_swapin_charge_page(struct page *page, struct mm_struct *mm,
 				  gfp_t gfp, swp_entry_t entry);
 void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry);
@@ -1186,8 +1186,8 @@ static inline bool mem_cgroup_below_min(struct mem_cgroup *memcg)
 	return false;
 }

-static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
-				    gfp_t gfp_mask)
+static inline int mem_cgroup_charge(struct folio *folio,
+		struct mm_struct *mm, gfp_t gfp)
 {
 	return 0;
 }
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index af24dc3febbe..6357c3580d07 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -167,7 +167,8 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
 			addr + PAGE_SIZE);

 	if (new_page) {
-		err = mem_cgroup_charge(new_page, vma->vm_mm, GFP_KERNEL);
+		err = mem_cgroup_charge(page_folio(new_page), vma->vm_mm,
+					GFP_KERNEL);
 		if (err)
 			return err;
 	}
diff --git a/mm/filemap.c b/mm/filemap.c
index 3fb48130f980..9600bca84162 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -872,7 +872,7 @@ noinline int __add_to_page_cache_locked(struct page *page,
 	page->index = offset;

 	if (!huge) {
-		error = mem_cgroup_charge(page, NULL, gfp);
+		error = mem_cgroup_charge(page_folio(page), NULL, gfp);
 		if (error)
 			goto error;
 		charged = true;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 6d2a0119fc58..662f92bb1953 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -594,7 +594,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,

 	VM_BUG_ON_PAGE(!PageCompound(page), page);

-	if (mem_cgroup_charge(page, vma->vm_mm, gfp)) {
+	if (mem_cgroup_charge(page_folio(page), vma->vm_mm, gfp)) {
 		put_page(page);
 		count_vm_event(THP_FAULT_FALLBACK);
 		count_vm_event(THP_FAULT_FALLBACK_CHARGE);
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 6c0185fdd815..0daa21fbdd71 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1088,7 +1088,7 @@ static void collapse_huge_page(struct mm_struct *mm,
 		goto out_nolock;
 	}

-	if (unlikely(mem_cgroup_charge(new_page, mm, gfp))) {
+	if (unlikely(mem_cgroup_charge(page_folio(new_page), mm, gfp))) {
 		result = SCAN_CGROUP_CHARGE_FAIL;
 		goto out_nolock;
 	}
@@ -1659,7 +1659,7 @@ static void collapse_file(struct mm_struct *mm,
 		goto out;
 	}

-	if (unlikely(mem_cgroup_charge(new_page, mm, gfp))) {
+	if (unlikely(mem_cgroup_charge(page_folio(new_page), mm, gfp))) {
 		result = SCAN_CGROUP_CHARGE_FAIL;
 		goto out;
 	}
diff --git a/mm/ksm.c b/mm/ksm.c
index 3fa9bc8a67cf..23d36b59f997 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -2580,7 +2580,8 @@ struct page *ksm_might_need_to_copy(struct page *page,
 		return page;	/* let do_swap_page report the error */

 	new_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, address);
-	if (new_page && mem_cgroup_charge(new_page, vma->vm_mm, GFP_KERNEL)) {
+	if (new_page &&
+	    mem_cgroup_charge(page_folio(new_page), vma->vm_mm, GFP_KERNEL)) {
 		put_page(new_page);
 		new_page = NULL;
 	}
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 727bd578ca7d..1dc02ecd3257 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6676,10 +6676,9 @@ void mem_cgroup_calculate_protection(struct mem_cgroup *root,
 			atomic_long_read(&parent->memory.children_low_usage)));
 }

-static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
+static int __mem_cgroup_charge(struct folio *folio, struct mem_cgroup *memcg,
 			       gfp_t gfp)
 {
-	struct folio *folio = page_folio(page);
 	unsigned int nr_pages = folio_nr_pages(folio);
 	int ret;

@@ -6692,27 +6691,27 @@ static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
 	local_irq_disable();
 	mem_cgroup_charge_statistics(memcg, nr_pages);
-	memcg_check_events(memcg, page_to_nid(page));
+	memcg_check_events(memcg, folio_nid(folio));
 	local_irq_enable();
 out:
 	return ret;
 }

 /**
- * mem_cgroup_charge - charge a newly allocated page to a cgroup
- * @page: page to charge
- * @mm: mm context of the victim
- * @gfp_mask: reclaim mode
+ * mem_cgroup_charge - Charge a newly allocated folio to a cgroup.
+ * @folio: Folio to charge.
+ * @mm: mm context of the allocating task.
+ * @gfp: reclaim mode
 *
- * Try to charge @page to the memcg that @mm belongs to, reclaiming
- * pages according to @gfp_mask if necessary. if @mm is NULL, try to
+ * Try to charge @folio to the memcg that @mm belongs to, reclaiming
+ * pages according to @gfp if necessary. If @mm is NULL, try to
 * charge to the active memcg.
 *
- * Do not use this for pages allocated for swapin.
+ * Do not use this for folios allocated for swapin.
 *
 * Returns 0 on success. Otherwise, an error code is returned.
 */
-int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
+int mem_cgroup_charge(struct folio *folio, struct mm_struct *mm, gfp_t gfp)
 {
 	struct mem_cgroup *memcg;
 	int ret;

@@ -6721,7 +6720,7 @@ int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
 		return 0;

 	memcg = get_mem_cgroup_from_mm(mm);
-	ret = __mem_cgroup_charge(page, memcg, gfp_mask);
+	ret = __mem_cgroup_charge(folio, memcg, gfp);
 	css_put(&memcg->css);

 	return ret;
@@ -6742,6 +6741,7 @@ int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
 int mem_cgroup_swapin_charge_page(struct page *page, struct mm_struct *mm,
 				  gfp_t gfp, swp_entry_t entry)
 {
+	struct folio *folio = page_folio(page);
 	struct mem_cgroup *memcg;
 	unsigned short id;
 	int ret;
@@ -6756,7 +6756,7 @@ int mem_cgroup_swapin_charge_page(struct page *page, struct mm_struct *mm,
 		memcg = get_mem_cgroup_from_mm(mm);
 	rcu_read_unlock();

-	ret = __mem_cgroup_charge(page, memcg, gfp);
+	ret = __mem_cgroup_charge(folio, memcg, gfp);

 	css_put(&memcg->css);
 	return ret;
diff --git a/mm/memory.c b/mm/memory.c
index afff080caf8b..2e7a568e2c15 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -915,7 +915,7 @@ page_copy_prealloc(struct mm_struct *src_mm, struct vm_area_struct *vma,
 	if (!new_page)
 		return NULL;

-	if (mem_cgroup_charge(new_page, src_mm, GFP_KERNEL)) {
+	if (mem_cgroup_charge(page_folio(new_page), src_mm, GFP_KERNEL)) {
 		put_page(new_page);
 		return NULL;
 	}
@@ -2922,7 +2922,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 		}
 	}

-	if (mem_cgroup_charge(new_page, mm, GFP_KERNEL))
+	if (mem_cgroup_charge(page_folio(new_page), mm, GFP_KERNEL))
 		goto oom_free_new;
 	cgroup_throttle_swaprate(new_page, GFP_KERNEL);

@@ -3640,7 +3640,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 	if (!page)
 		goto oom;

-	if (mem_cgroup_charge(page, vma->vm_mm, GFP_KERNEL))
+	if (mem_cgroup_charge(page_folio(page), vma->vm_mm, GFP_KERNEL))
 		goto oom_free_page;
 	cgroup_throttle_swaprate(page, GFP_KERNEL);

@@ -4053,7 +4053,8 @@ static vm_fault_t do_cow_fault(struct vm_fault *vmf)
 	if (!vmf->cow_page)
 		return VM_FAULT_OOM;

-	if (mem_cgroup_charge(vmf->cow_page, vma->vm_mm, GFP_KERNEL)) {
+	if (mem_cgroup_charge(page_folio(vmf->cow_page), vma->vm_mm,
+			      GFP_KERNEL)) {
 		put_page(vmf->cow_page);
 		return VM_FAULT_OOM;
 	}
diff --git a/mm/migrate.c b/mm/migrate.c
index 380ca57b9031..94efe09bb2a0 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2914,7 +2914,7 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate,
 	if (unlikely(anon_vma_prepare(vma)))
 		goto abort;

-	if (mem_cgroup_charge(page, vma->vm_mm, GFP_KERNEL))
+	if (mem_cgroup_charge(page_folio(page), vma->vm_mm, GFP_KERNEL))
 		goto abort;

 	/*
diff --git a/mm/shmem.c b/mm/shmem.c
index 6268b9b4e41a..3cc5ddd5cc6b 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -685,7 +685,7 @@ static int shmem_add_to_page_cache(struct page *page,
 	page->index = index;

 	if (!PageSwapCache(page)) {
-		error = mem_cgroup_charge(page, charge_mm, gfp);
+		error = mem_cgroup_charge(page_folio(page), charge_mm, gfp);
 		if (error) {
 			if (PageTransHuge(page)) {
 				count_vm_event(THP_FILE_FALLBACK);
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 63a73e164d55..bc8a2c3e5aea 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -96,7 +96,7 @@ static int mcopy_atomic_pte(struct mm_struct *dst_mm,
 	__SetPageUptodate(page);

 	ret = -ENOMEM;
-	if (mem_cgroup_charge(page, dst_mm, GFP_KERNEL))
+	if (mem_cgroup_charge(page_folio(page), dst_mm, GFP_KERNEL))
 		goto out_release;

 	_dst_pte = pte_mkdirty(mk_pte(page, dst_vma->vm_page_prot));
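For reference, the caller-side pattern this patch applies everywhere is
sketched below. This is an illustrative sketch only, not part of the
patch: charge_new_page() is a hypothetical helper, while
mem_cgroup_charge(), page_folio(), alloc_page() and put_page() are the
real interfaces involved.

/*
 * Hypothetical helper showing the conversion pattern: wrap the
 * page in page_folio() at each mem_cgroup_charge() call site.
 */
static struct page *charge_new_page(struct mm_struct *mm, gfp_t gfp)
{
	struct page *page = alloc_page(gfp);

	if (!page)
		return NULL;

	/* Before this patch: mem_cgroup_charge(page, mm, gfp) */
	if (mem_cgroup_charge(page_folio(page), mm, gfp)) {
		put_page(page);
		return NULL;
	}
	return page;
}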
From patchwork Wed Jun 30 04:00:25 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12351193
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, cgroups@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", Johannes Weiner, Michal Hocko, Vladimir Davydov
Subject: [PATCH v3 09/18] mm/memcg: Convert uncharge_page() to uncharge_folio()
Date: Wed, 30 Jun 2021 05:00:25 +0100
Message-Id: <20210630040034.1155892-10-willy@infradead.org>
In-Reply-To: <20210630040034.1155892-1-willy@infradead.org>
References: <20210630040034.1155892-1-willy@infradead.org>

Use a folio rather than a page to ensure that we're only operating on
base or head pages, and not tail pages.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
---
 mm/memcontrol.c | 29 ++++++++++++++---------------
 1 file changed, 14 insertions(+), 15 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 1dc02ecd3257..21b791935957 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6827,24 +6827,23 @@
 	memcg_check_events(ug->memcg, ug->nid);
 	local_irq_restore(flags);

-	/* drop reference from uncharge_page */
+	/* drop reference from uncharge_folio */
 	css_put(&ug->memcg->css);
 }

-static void uncharge_page(struct page *page, struct uncharge_gather *ug)
+static void uncharge_folio(struct folio *folio, struct uncharge_gather *ug)
 {
-	struct folio *folio = page_folio(page);
 	unsigned long nr_pages;
 	struct mem_cgroup *memcg;
 	struct obj_cgroup *objcg;
-	bool use_objcg = PageMemcgKmem(page);
+	bool use_objcg = folio_memcg_kmem(folio);

-	VM_BUG_ON_PAGE(PageLRU(page), page);
+	VM_BUG_ON_FOLIO(folio_lru(folio), folio);

 	/*
 	 * Nobody should be changing or seriously looking at
-	 * page memcg or objcg at this point, we have fully
-	 * exclusive access to the page.
+	 * folio memcg or objcg at this point, we have fully
+	 * exclusive access to the folio.
 	 */
 	if (use_objcg) {
 		objcg = __folio_objcg(folio);
@@ -6866,19 +6865,19 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
 			uncharge_gather_clear(ug);
 		}
 		ug->memcg = memcg;
-		ug->nid = page_to_nid(page);
+		ug->nid = folio_nid(folio);

 		/* pairs with css_put in uncharge_batch */
 		css_get(&memcg->css);
 	}

-	nr_pages = compound_nr(page);
+	nr_pages = folio_nr_pages(folio);

 	if (use_objcg) {
 		ug->nr_memory += nr_pages;
 		ug->nr_kmem += nr_pages;

-		page->memcg_data = 0;
+		folio->memcg_data = 0;
 		obj_cgroup_put(objcg);
 	} else {
 		/* LRU pages aren't accounted at the root level */
@@ -6886,7 +6885,7 @@
 			ug->nr_memory += nr_pages;
 		ug->pgpgout++;

-		page->memcg_data = 0;
+		folio->memcg_data = 0;
 	}

 	css_put(&memcg->css);
@@ -6910,7 +6909,7 @@ void mem_cgroup_uncharge(struct page *page)
 		return;

 	uncharge_gather_clear(&ug);
-	uncharge_page(page, &ug);
+	uncharge_folio(page_folio(page), &ug);
 	uncharge_batch(&ug);
 }

@@ -6924,14 +6923,14 @@ void mem_cgroup_uncharge(struct page *page)
 void mem_cgroup_uncharge_list(struct list_head *page_list)
 {
 	struct uncharge_gather ug;
-	struct page *page;
+	struct folio *folio;

 	if (mem_cgroup_disabled())
 		return;

 	uncharge_gather_clear(&ug);
-	list_for_each_entry(page, page_list, lru)
-		uncharge_page(page, &ug);
+	list_for_each_entry(folio, page_list, lru)
+		uncharge_folio(folio, &ug);
 	if (ug.memcg)
 		uncharge_batch(&ug);
 }

From patchwork Wed Jun 30 04:00:26 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12351195
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, cgroups@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", Johannes Weiner, Michal Hocko, Vladimir Davydov
Subject: [PATCH v3 10/18] mm/memcg: Convert mem_cgroup_uncharge() to take a folio
Date: Wed, 30 Jun 2021 05:00:26 +0100
Message-Id: <20210630040034.1155892-11-willy@infradead.org>
In-Reply-To: <20210630040034.1155892-1-willy@infradead.org>
References: <20210630040034.1155892-1-willy@infradead.org>

Convert all the callers to call page_folio().
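To illustrate the same call-site pattern on the uncharge side, here is
a hedged sketch; free_charged_page() is a hypothetical helper, not code
from this series, and it simply mirrors what callers such as
__put_single_page() do after this patch.

/*
 * Hypothetical helper for illustration only: pair a folio-based
 * uncharge with the final put of the page.
 */
static void free_charged_page(struct page *page)
{
	/* Before this patch: mem_cgroup_uncharge(page) */
	mem_cgroup_uncharge(page_folio(page));
	put_page(page);
}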
Signed-off-by: Matthew Wilcox (Oracle)
Reported-by: kernel test robot
Reviewed-by: Christoph Hellwig
---
 include/linux/memcontrol.h |  4 ++--
 mm/filemap.c               |  2 +-
 mm/khugepaged.c            |  4 ++--
 mm/memcontrol.c            | 14 +++++++-------
 mm/memory-failure.c        |  2 +-
 mm/memremap.c              |  2 +-
 mm/page_alloc.c            |  2 +-
 mm/swap.c                  |  2 +-
 8 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 90d48b0e3191..d6386a2b9d7a 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -710,7 +710,7 @@ int mem_cgroup_swapin_charge_page(struct page *page, struct mm_struct *mm,
 				  gfp_t gfp, swp_entry_t entry);
 void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry);

-void mem_cgroup_uncharge(struct page *page);
+void mem_cgroup_uncharge(struct folio *);
 void mem_cgroup_uncharge_list(struct list_head *page_list);

 void mem_cgroup_migrate(struct page *oldpage, struct page *newpage);
@@ -1202,7 +1202,7 @@ static inline void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry)
 {
 }

-static inline void mem_cgroup_uncharge(struct page *page)
+static inline void mem_cgroup_uncharge(struct folio *folio)
 {
 }
diff --git a/mm/filemap.c b/mm/filemap.c
index 9600bca84162..0008ada132c4 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -923,7 +923,7 @@ noinline int __add_to_page_cache_locked(struct page *page,
 	if (xas_error(&xas)) {
 		error = xas_error(&xas);
 		if (charged)
-			mem_cgroup_uncharge(page);
+			mem_cgroup_uncharge(page_folio(page));
 		goto error;
 	}
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 0daa21fbdd71..988a230c7a41 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1212,7 +1212,7 @@ static void collapse_huge_page(struct mm_struct *mm,
 	mmap_write_unlock(mm);
 out_nolock:
 	if (!IS_ERR_OR_NULL(*hpage))
-		mem_cgroup_uncharge(*hpage);
+		mem_cgroup_uncharge(page_folio(*hpage));
 	trace_mm_collapse_huge_page(mm, isolated, result);
 	return;
 }
@@ -1963,7 +1963,7 @@ static void collapse_file(struct mm_struct *mm,
 out:
 	VM_BUG_ON(!list_empty(&pagelist));
 	if (!IS_ERR_OR_NULL(*hpage))
-		mem_cgroup_uncharge(*hpage);
+		mem_cgroup_uncharge(page_folio(*hpage));
 	/* TODO: tracepoints */
 }
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 21b791935957..90a53f554371 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6892,24 +6892,24 @@ static void uncharge_folio(struct folio *folio, struct uncharge_gather *ug)
 }

 /**
- * mem_cgroup_uncharge - uncharge a page
- * @page: page to uncharge
+ * mem_cgroup_uncharge - Uncharge a folio.
+ * @folio: Folio to uncharge.
 *
- * Uncharge a page previously charged with mem_cgroup_charge().
+ * Uncharge a folio previously charged with folio_charge_cgroup().
 */
-void mem_cgroup_uncharge(struct page *page)
+void mem_cgroup_uncharge(struct folio *folio)
 {
 	struct uncharge_gather ug;

 	if (mem_cgroup_disabled())
 		return;

-	/* Don't touch page->lru of any random page, pre-check: */
-	if (!page_memcg(page))
+	/* Don't touch folio->lru of any random page, pre-check: */
+	if (!folio_memcg(folio))
 		return;

 	uncharge_gather_clear(&ug);
-	uncharge_folio(page_folio(page), &ug);
+	uncharge_folio(folio, &ug);
 	uncharge_batch(&ug);
 }
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index e5a1531f7f4e..7ada5959b5ad 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -750,7 +750,7 @@ static int delete_from_lru_cache(struct page *p)
 	 * Poisoned page might never drop its ref count to 0 so we have
 	 * to uncharge it manually from its memcg.
 	 */
-	mem_cgroup_uncharge(p);
+	mem_cgroup_uncharge(page_folio(p));

 	/*
 	 * drop the page count elevated by isolate_lru_page()
diff --git a/mm/memremap.c b/mm/memremap.c
index 15a074ffb8d7..6eac40f9f62a 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -508,7 +508,7 @@ void free_devmap_managed_page(struct page *page)
 	__ClearPageWaiters(page);

-	mem_cgroup_uncharge(page);
+	mem_cgroup_uncharge(page_folio(page));

 	/*
 	 * When a device_private page is freed, the page->mapping field
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0817d88383d5..5a5fcd4f21a8 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -737,7 +737,7 @@ static inline void free_the_page(struct page *page, unsigned int order)
 void free_compound_page(struct page *page)
 {
-	mem_cgroup_uncharge(page);
+	mem_cgroup_uncharge(page_folio(page));
 	free_the_page(page, compound_order(page));
 }
diff --git a/mm/swap.c b/mm/swap.c
index 6954cfebab4f..8ba62a930370 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -94,7 +94,7 @@ static void __page_cache_release(struct page *page)
 static void __put_single_page(struct page *page)
 {
 	__page_cache_release(page);
-	mem_cgroup_uncharge(page);
+	mem_cgroup_uncharge(page_folio(page));
 	free_unref_page(page, 0);
 }

From patchwork Wed Jun 30 04:00:27 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12351197
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, cgroups@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", Johannes Weiner, Michal Hocko, Vladimir Davydov
Subject: [PATCH v3 11/18] mm/memcg: Convert mem_cgroup_migrate() to take folios
Date: Wed, 30 Jun 2021 05:00:27 +0100
Message-Id: <20210630040034.1155892-12-willy@infradead.org>
In-Reply-To: <20210630040034.1155892-1-willy@infradead.org>
References: <20210630040034.1155892-1-willy@infradead.org>

Convert all callers of mem_cgroup_migrate() to call page_folio() first.
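For a caller that still works in terms of struct page, the conversion
looks like the hedged sketch below; replace_cached_page_sketch() is a
hypothetical function, and the locking comment restates the
preconditions documented in the kernel-doc updated by this patch.

/*
 * Illustrative sketch only, not code from this series. Both
 * pages must be locked and new->mapping set up, per the
 * mem_cgroup_migrate() kernel-doc.
 */
static void replace_cached_page_sketch(struct page *old, struct page *new)
{
	struct folio *fold = page_folio(old);
	struct folio *fnew = page_folio(new);

	/* Charge the replacement; @old is uncharged upon free. */
	mem_cgroup_migrate(fold, fnew);
}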
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
---
 include/linux/memcontrol.h |  4 ++--
 mm/filemap.c               |  4 +++-
 mm/memcontrol.c            | 35 +++++++++++++++++------------------
 mm/migrate.c               |  4 +++-
 mm/shmem.c                 |  5 ++++-
 5 files changed, 29 insertions(+), 23 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index d6386a2b9d7a..2c57a405acd2 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -713,7 +713,7 @@ void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry);
 void mem_cgroup_uncharge(struct folio *);
 void mem_cgroup_uncharge_list(struct list_head *page_list);

-void mem_cgroup_migrate(struct page *oldpage, struct page *newpage);
+void mem_cgroup_migrate(struct folio *old, struct folio *new);

 /**
 * mem_cgroup_lruvec - get the lru list vector for a memcg & node
@@ -1210,7 +1210,7 @@ static inline void mem_cgroup_uncharge_list(struct list_head *page_list)
 {
 }

-static inline void mem_cgroup_migrate(struct page *old, struct page *new)
+static inline void mem_cgroup_migrate(struct folio *old, struct folio *new)
 {
 }
diff --git a/mm/filemap.c b/mm/filemap.c
index 0008ada132c4..964f1643dd97 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -817,6 +817,8 @@
 */
 void replace_page_cache_page(struct page *old, struct page *new)
 {
+	struct folio *fold = page_folio(old);
+	struct folio *fnew = page_folio(new);
 	struct address_space *mapping = old->mapping;
 	void (*freepage)(struct page *) = mapping->a_ops->freepage;
 	pgoff_t offset = old->index;
@@ -831,7 +833,7 @@ void replace_page_cache_page(struct page *old, struct page *new)
 	new->mapping = mapping;
 	new->index = offset;

-	mem_cgroup_migrate(old, new);
+	mem_cgroup_migrate(fold, fnew);

 	xas_lock_irqsave(&xas, flags);
 	xas_store(&xas, new);
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 90a53f554371..4ce2f2eb81d8 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6936,36 +6936,35 @@ void mem_cgroup_uncharge_list(struct list_head *page_list)
 }

 /**
- * mem_cgroup_migrate - charge a page's replacement
- * @oldpage: currently circulating page
- * @newpage: replacement page
+ * mem_cgroup_migrate - Charge a folio's replacement.
+ * @old: Currently circulating folio.
+ * @new: Replacement folio.
 *
- * Charge @newpage as a replacement page for @oldpage. @oldpage will
+ * Charge @new as a replacement folio for @old. @old will
 * be uncharged upon free.
 *
- * Both pages must be locked, @newpage->mapping must be set up.
+ * Both folios must be locked, @new->mapping must be set up.
 */
-void mem_cgroup_migrate(struct page *oldpage, struct page *newpage)
+void mem_cgroup_migrate(struct folio *old, struct folio *new)
 {
-	struct folio *newfolio = page_folio(newpage);
 	struct mem_cgroup *memcg;
-	unsigned int nr_pages = folio_nr_pages(newfolio);
+	unsigned int nr_pages = folio_nr_pages(new);
 	unsigned long flags;

-	VM_BUG_ON_PAGE(!PageLocked(oldpage), oldpage);
-	VM_BUG_ON_FOLIO(!folio_locked(newfolio), newfolio);
-	VM_BUG_ON_FOLIO(PageAnon(oldpage) != folio_anon(newfolio), newfolio);
-	VM_BUG_ON_FOLIO(compound_nr(oldpage) != nr_pages, newfolio);
+	VM_BUG_ON_FOLIO(!folio_locked(old), old);
+	VM_BUG_ON_FOLIO(!folio_locked(new), new);
+	VM_BUG_ON_FOLIO(folio_anon(old) != folio_anon(new), new);
+	VM_BUG_ON_FOLIO(folio_nr_pages(old) != nr_pages, new);

 	if (mem_cgroup_disabled())
 		return;

-	/* Page cache replacement: new page already charged? */
-	if (folio_memcg(newfolio))
+	/* Page cache replacement: new folio already charged? */
+	if (folio_memcg(new))
 		return;

-	memcg = page_memcg(oldpage);
-	VM_WARN_ON_ONCE_PAGE(!memcg, oldpage);
+	memcg = folio_memcg(old);
+	VM_WARN_ON_ONCE_FOLIO(!memcg, old);
 	if (!memcg)
 		return;
@@ -6977,11 +6976,11 @@ void mem_cgroup_migrate(struct page *oldpage, struct page *newpage)
 	}

 	css_get(&memcg->css);
-	commit_charge(newfolio, memcg);
+	commit_charge(new, memcg);

 	local_irq_save(flags);
 	mem_cgroup_charge_statistics(memcg, nr_pages);
-	memcg_check_events(memcg, page_to_nid(newpage));
+	memcg_check_events(memcg, folio_nid(new));
 	local_irq_restore(flags);
 }
diff --git a/mm/migrate.c b/mm/migrate.c
index 94efe09bb2a0..f71e72f9c812 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -582,6 +582,8 @@ static void copy_huge_page(struct page *dst, struct page *src)
 */
 void migrate_page_states(struct page *newpage, struct page *page)
 {
+	struct folio *folio = page_folio(page);
+	struct folio *newfolio = page_folio(newpage);
 	int cpupid;

 	if (PageError(page))
@@ -646,7 +648,7 @@ void migrate_page_states(struct page *newpage, struct page *page)
 	copy_page_owner(page, newpage);

 	if (!PageHuge(page))
-		mem_cgroup_migrate(page, newpage);
+		mem_cgroup_migrate(folio, newfolio);
 }
 EXPORT_SYMBOL(migrate_page_states);
diff --git a/mm/shmem.c b/mm/shmem.c
index 3cc5ddd5cc6b..1e172fb40fb3 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1619,6 +1619,7 @@ static int shmem_replace_page(struct page **pagep, gfp_t gfp,
 			struct shmem_inode_info *info, pgoff_t index)
 {
 	struct page *oldpage, *newpage;
+	struct folio *old, *new;
 	struct address_space *swap_mapping;
 	swp_entry_t entry;
 	pgoff_t swap_index;
@@ -1655,7 +1656,9 @@ static int shmem_replace_page(struct page **pagep, gfp_t gfp,
 	xa_lock_irq(&swap_mapping->i_pages);
 	error = shmem_replace_entry(swap_mapping, swap_index, oldpage, newpage);
 	if (!error) {
-		mem_cgroup_migrate(oldpage, newpage);
+		old = page_folio(oldpage);
+		new = page_folio(newpage);
+		mem_cgroup_migrate(old, new);
 		__inc_lruvec_page_state(newpage, NR_FILE_PAGES);
 		__dec_lruvec_page_state(oldpage, NR_FILE_PAGES);
 	}

From patchwork Wed Jun 30 04:00:28 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12351199
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, cgroups@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", Johannes Weiner, Michal Hocko, Vladimir Davydov, Christoph Hellwig
Subject: [PATCH v3 12/18] mm/memcg: Convert mem_cgroup_track_foreign_dirty_slowpath() to folio
Date: Wed, 30 Jun 2021 05:00:28 +0100
Message-Id: <20210630040034.1155892-13-willy@infradead.org>
In-Reply-To: <20210630040034.1155892-1-willy@infradead.org>
References: <20210630040034.1155892-1-willy@infradead.org>

The page was only being used for the memcg and to gather trace
information, so this is a simple conversion. The only caller of
mem_cgroup_track_foreign_dirty() will be converted to folios in a later
patch, so doing this now makes that patch simpler.
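The resulting shape of the fast path is summarised below. This is only
a paraphrase of the inline wrapper in the diff that follows, shown
flat for readability; nothing here is new code.

/*
 * Paraphrase of mem_cgroup_track_foreign_dirty() after this
 * patch: convert to a folio once, so the slowpath (and the
 * tracepoint) only ever see folios.
 */
static inline void track_foreign_dirty_sketch(struct page *page,
					      struct bdi_writeback *wb)
{
	struct folio *folio = page_folio(page);

	if (mem_cgroup_disabled())
		return;
	if (unlikely(&folio_memcg(folio)->css != wb->memcg_css))
		mem_cgroup_track_foreign_dirty_slowpath(folio, wb);
}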
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
---
 include/linux/memcontrol.h       | 7 ++++---
 include/trace/events/writeback.h | 8 ++++----
 mm/memcontrol.c                  | 6 +++---
 3 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 2c57a405acd2..ef79f9c0b296 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1556,17 +1556,18 @@ void mem_cgroup_wb_stats(struct bdi_writeback *wb, unsigned long *pfilepages,
 			unsigned long *pheadroom, unsigned long *pdirty,
 			unsigned long *pwriteback);

-void mem_cgroup_track_foreign_dirty_slowpath(struct page *page,
+void mem_cgroup_track_foreign_dirty_slowpath(struct folio *folio,
 					     struct bdi_writeback *wb);

 static inline void mem_cgroup_track_foreign_dirty(struct page *page,
 						  struct bdi_writeback *wb)
 {
+	struct folio *folio = page_folio(page);
 	if (mem_cgroup_disabled())
 		return;

-	if (unlikely(&page_memcg(page)->css != wb->memcg_css))
-		mem_cgroup_track_foreign_dirty_slowpath(page, wb);
+	if (unlikely(&folio_memcg(folio)->css != wb->memcg_css))
+		mem_cgroup_track_foreign_dirty_slowpath(folio, wb);
 }

 void mem_cgroup_flush_foreign(struct bdi_writeback *wb);
diff --git a/include/trace/events/writeback.h b/include/trace/events/writeback.h
index 1efa463c4979..80b24801bbf7 100644
--- a/include/trace/events/writeback.h
+++ b/include/trace/events/writeback.h
@@ -235,9 +235,9 @@ TRACE_EVENT(inode_switch_wbs,

 TRACE_EVENT(track_foreign_dirty,

-	TP_PROTO(struct page *page, struct bdi_writeback *wb),
+	TP_PROTO(struct folio *folio, struct bdi_writeback *wb),

-	TP_ARGS(page, wb),
+	TP_ARGS(folio, wb),

 	TP_STRUCT__entry(
 		__array(char,		name, 32)
@@ -249,7 +249,7 @@ TRACE_EVENT(track_foreign_dirty,
 	),

 	TP_fast_assign(
-		struct address_space *mapping = page_mapping(page);
+		struct address_space *mapping = folio_mapping(folio);
 		struct inode *inode = mapping ? mapping->host : NULL;

 		strscpy_pad(__entry->name, bdi_dev_name(wb->bdi), 32);
@@ -257,7 +257,7 @@ TRACE_EVENT(track_foreign_dirty,
 		__entry->ino = inode ? inode->i_ino : 0;
 		__entry->memcg_id = wb->memcg_css->id;
 		__entry->cgroup_ino = __trace_wb_assign_cgroup(wb);
-		__entry->page_cgroup_ino = cgroup_ino(page_memcg(page)->css.cgroup);
+		__entry->page_cgroup_ino = cgroup_ino(folio_memcg(folio)->css.cgroup);
 	),

 	TP_printk("bdi %s[%llu]: ino=%lu memcg_id=%u cgroup_ino=%lu page_cgroup_ino=%lu",
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 4ce2f2eb81d8..b925bdce0c6e 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -4566,17 +4566,17 @@ void mem_cgroup_wb_stats(struct bdi_writeback *wb, unsigned long *pfilepages,
 * As being wrong occasionally doesn't matter, updates and accesses to the
 * records are lockless and racy.
 */
-void mem_cgroup_track_foreign_dirty_slowpath(struct page *page,
+void mem_cgroup_track_foreign_dirty_slowpath(struct folio *folio,
 					     struct bdi_writeback *wb)
 {
-	struct mem_cgroup *memcg = page_memcg(page);
+	struct mem_cgroup *memcg = folio_memcg(folio);
 	struct memcg_cgwb_frn *frn;
 	u64 now = get_jiffies_64();
 	u64 oldest_at = now;
 	int oldest = -1;
 	int i;

-	trace_track_foreign_dirty(page, wb);
+	trace_track_foreign_dirty(folio, wb);

 	/*
 	 * Pick the slot to use. If there is already a slot for @wb, keep

From patchwork Wed Jun 30 04:00:29 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12351201
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, cgroups@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", Johannes Weiner, Michal Hocko, Vladimir Davydov
Subject: [PATCH v3 13/18] mm/memcg: Add folio_memcg_lock() and folio_memcg_unlock()
Date: Wed, 30 Jun 2021 05:00:29 +0100
Message-Id: <20210630040034.1155892-14-willy@infradead.org>
In-Reply-To: <20210630040034.1155892-1-willy@infradead.org>
References: <20210630040034.1155892-1-willy@infradead.org>

These are the folio equivalents of lock_page_memcg() and
unlock_page_memcg(). Reimplement them as wrappers.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/memcontrol.h | 10 +++++++++
 mm/memcontrol.c            | 45 ++++++++++++++++++++++++--------------
 2 files changed, 39 insertions(+), 16 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index ef79f9c0b296..279ea2640365 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -951,6 +951,8 @@ void mem_cgroup_print_oom_group(struct mem_cgroup *memcg);
 extern bool cgroup_memory_noswap;
 #endif

+void folio_memcg_lock(struct folio *folio);
+void folio_memcg_unlock(struct folio *folio);
 void lock_page_memcg(struct page *page);
 void unlock_page_memcg(struct page *page);

@@ -1363,6 +1365,14 @@ static inline void unlock_page_memcg(struct page *page)
 {
 }

+static inline void folio_memcg_lock(struct folio *folio)
+{
+}
+
+static inline void folio_memcg_unlock(struct folio *folio)
+{
+}
+
 static inline void mem_cgroup_handle_over_high(void)
 {
 }
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index b925bdce0c6e..b94a6122f27d 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1960,18 +1960,17 @@ void mem_cgroup_print_oom_group(struct mem_cgroup *memcg)
 }

 /**
- * lock_page_memcg - lock a page and memcg binding
- * @page: the page
+ * folio_memcg_lock - Bind a folio to its memcg.
+ * @folio: The folio.
 *
- * This function protects unlocked LRU pages from being moved to
+ * This function prevents unlocked LRU folios from being moved to
 * another cgroup.
 *
- * It ensures lifetime of the locked memcg. Caller is responsible
- * for the lifetime of the page.
+ * It ensures lifetime of the bound memcg. The caller is responsible
+ * for the lifetime of the folio.
 */
-void lock_page_memcg(struct page *page)
+void folio_memcg_lock(struct folio *folio)
 {
-	struct page *head = compound_head(page);	/* rmap on tail pages */
 	struct mem_cgroup *memcg;
 	unsigned long flags;

@@ -1985,7 +1984,7 @@ void lock_page_memcg(struct page *page)
 	if (mem_cgroup_disabled())
 		return;
 again:
-	memcg = page_memcg(head);
+	memcg = folio_memcg(folio);
 	if (unlikely(!memcg))
 		return;

@@ -1999,7 +1998,7 @@ void lock_page_memcg(struct page *page)
 		return;

 	spin_lock_irqsave(&memcg->move_lock, flags);
-	if (memcg != page_memcg(head)) {
+	if (memcg != folio_memcg(folio)) {
 		spin_unlock_irqrestore(&memcg->move_lock, flags);
 		goto again;
 	}
@@ -2013,9 +2012,15 @@ void lock_page_memcg(struct page *page)
 	memcg->move_lock_task = current;
 	memcg->move_lock_flags = flags;
 }
+EXPORT_SYMBOL(folio_memcg_lock);
+
+void lock_page_memcg(struct page *page)
+{
+	folio_memcg_lock(page_folio(page));
+}
 EXPORT_SYMBOL(lock_page_memcg);

-static void __unlock_page_memcg(struct mem_cgroup *memcg)
+static void __memcg_unlock(struct mem_cgroup *memcg)
 {
 	if (memcg && memcg->move_lock_task == current) {
 		unsigned long flags = memcg->move_lock_flags;
@@ -2030,14 +2035,22 @@ static void __unlock_page_memcg(struct mem_cgroup *memcg)
 }

 /**
- * unlock_page_memcg - unlock a page and memcg binding
- * @page: the page
+ * folio_memcg_unlock - Release the binding between a folio and its memcg.
+ * @folio: The folio.
+ *
+ * This releases the binding created by folio_memcg_lock(). This does
+ * not change the accounting of this folio to its memcg, but it does
+ * permit others to change it.
 */
-void unlock_page_memcg(struct page *page)
+void folio_memcg_unlock(struct folio *folio)
 {
-	struct page *head = compound_head(page);
+	__memcg_unlock(folio_memcg(folio));
+}
+EXPORT_SYMBOL(folio_memcg_unlock);

-	__unlock_page_memcg(page_memcg(head));
+void unlock_page_memcg(struct page *page)
+{
+	folio_memcg_unlock(page_folio(page));
 }
 EXPORT_SYMBOL(unlock_page_memcg);

@@ -5661,7 +5674,7 @@ static int mem_cgroup_move_account(struct page *page,
 	page->memcg_data = (unsigned long)to;

-	__unlock_page_memcg(from);
+	__memcg_unlock(from);

 	ret = 0;
 	nid = page_to_nid(page);

From patchwork Wed Jun 30 04:00:30 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12351203
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, cgroups@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", Johannes Weiner, Michal Hocko, Vladimir Davydov
Subject: [PATCH v3 14/18] mm/memcg: Convert mem_cgroup_move_account() to use a folio
Date: Wed, 30 Jun 2021 05:00:30 +0100
Message-Id: <20210630040034.1155892-15-willy@infradead.org>
In-Reply-To: <20210630040034.1155892-1-willy@infradead.org>
References: <20210630040034.1155892-1-willy@infradead.org>

This saves dozens of bytes of text by eliminating a lot of calls to
compound_head().

Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/memcontrol.c | 37 +++++++++++++++++------------------
 1 file changed, 19 insertions(+), 18 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index b94a6122f27d..95795b65ae3e 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5585,38 +5585,39 @@ static int mem_cgroup_move_account(struct page *page,
 				   struct mem_cgroup *from,
 				   struct mem_cgroup *to)
 {
+	struct folio *folio = page_folio(page);
 	struct lruvec *from_vec, *to_vec;
 	struct pglist_data *pgdat;
-	unsigned int nr_pages = compound ? thp_nr_pages(page) : 1;
+	unsigned int nr_pages = compound ? folio_nr_pages(folio) : 1;
 	int nid, ret;

 	VM_BUG_ON(from == to);
-	VM_BUG_ON_PAGE(PageLRU(page), page);
-	VM_BUG_ON(compound && !PageTransHuge(page));
+	VM_BUG_ON_FOLIO(folio_lru(folio), folio);
+	VM_BUG_ON(compound && !folio_multi(folio));

 	/*
 	 * Prevent mem_cgroup_migrate() from looking at
 	 * page's memory cgroup of its source page while we change it.
 	 */
 	ret = -EBUSY;
-	if (!trylock_page(page))
+	if (!folio_trylock(folio))
 		goto out;

 	ret = -EINVAL;
-	if (page_memcg(page) != from)
+	if (folio_memcg(folio) != from)
 		goto out_unlock;

-	pgdat = page_pgdat(page);
+	pgdat = folio_pgdat(folio);
 	from_vec = mem_cgroup_lruvec(from, pgdat);
 	to_vec = mem_cgroup_lruvec(to, pgdat);

-	lock_page_memcg(page);
+	folio_memcg_lock(folio);

-	if (PageAnon(page)) {
-		if (page_mapped(page)) {
+	if (folio_anon(folio)) {
+		if (folio_mapped(folio)) {
 			__mod_lruvec_state(from_vec, NR_ANON_MAPPED, -nr_pages);
 			__mod_lruvec_state(to_vec, NR_ANON_MAPPED, nr_pages);
-			if (PageTransHuge(page)) {
+			if (folio_multi(folio)) {
 				__mod_lruvec_state(from_vec, NR_ANON_THPS,
 						   -nr_pages);
 				__mod_lruvec_state(to_vec, NR_ANON_THPS,
@@ -5627,18 +5628,18 @@ static int mem_cgroup_move_account(struct page *page,
 		__mod_lruvec_state(from_vec, NR_FILE_PAGES, -nr_pages);
 		__mod_lruvec_state(to_vec, NR_FILE_PAGES, nr_pages);

-		if (PageSwapBacked(page)) {
+		if (folio_swapbacked(folio)) {
 			__mod_lruvec_state(from_vec, NR_SHMEM, -nr_pages);
 			__mod_lruvec_state(to_vec, NR_SHMEM, nr_pages);
 		}

-		if (page_mapped(page)) {
+		if (folio_mapped(folio)) {
 			__mod_lruvec_state(from_vec, NR_FILE_MAPPED, -nr_pages);
 			__mod_lruvec_state(to_vec, NR_FILE_MAPPED, nr_pages);
 		}

-		if (PageDirty(page)) {
-			struct address_space *mapping = page_mapping(page);
+		if (folio_dirty(folio)) {
+			struct address_space *mapping = folio_mapping(folio);

 			if (mapping_can_writeback(mapping)) {
 				__mod_lruvec_state(from_vec, NR_FILE_DIRTY,
@@ -5649,7 +5650,7 @@ static int mem_cgroup_move_account(struct page *page,
 		}
 	}

-	if (PageWriteback(page)) {
+	if (folio_writeback(folio)) {
 		__mod_lruvec_state(from_vec, NR_WRITEBACK, -nr_pages);
 		__mod_lruvec_state(to_vec, NR_WRITEBACK, nr_pages);
 	}
@@ -5672,12 +5673,12 @@ static int mem_cgroup_move_account(struct page *page,
 	css_get(&to->css);
 	css_put(&from->css);

-	page->memcg_data = (unsigned long)to;
+	folio->memcg_data = (unsigned long)to;

 	__memcg_unlock(from);

 	ret = 0;
-	nid = page_to_nid(page);
+	nid = folio_nid(folio);

 	local_irq_disable();
 	mem_cgroup_charge_statistics(to, nr_pages);
@@ -5686,7 +5687,7 @@ static int mem_cgroup_move_account(struct page *page,
 	memcg_check_events(from, nid);
 	local_irq_enable();
 out_unlock:
-	unlock_page(page);
+	folio_unlock(folio);
 out:
 	return ret;
 }

From patchwork Wed Jun 30 04:00:31 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12351205
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, cgroups@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", Johannes Weiner, Michal Hocko, Vladimir Davydov
Subject: [PATCH v3 15/18] mm/memcg: Add mem_cgroup_folio_lruvec()
Date: Wed, 30 Jun 2021 05:00:31 +0100
Message-Id: <20210630040034.1155892-16-willy@infradead.org>
In-Reply-To: <20210630040034.1155892-1-willy@infradead.org>
References: <20210630040034.1155892-1-willy@infradead.org>

This is the folio equivalent of mem_cgroup_page_lruvec().
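A hedged usage sketch follows; lruvec_for_folio_sketch() is a
hypothetical function, not part of the patch. It shows the intended
call style once a caller already has a folio in hand, instead of
round-tripping through a page.

/*
 * Illustration only: with a folio, call the folio variant
 * directly rather than going via mem_cgroup_page_lruvec().
 */
static struct lruvec *lruvec_for_folio_sketch(struct folio *folio)
{
	/* Was: mem_cgroup_page_lruvec(&folio->page) */
	return mem_cgroup_folio_lruvec(folio);
}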
Signed-off-by: Matthew Wilcox (Oracle)
Reported-by: kernel test robot
---
 include/linux/memcontrol.h | 25 ++++++++++++++-----------
 1 file changed, 14 insertions(+), 11 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 279ea2640365..a7e1ccbc7ed6 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -752,18 +752,17 @@ static inline struct lruvec *mem_cgroup_lruvec(struct mem_cgroup *memcg,
 }

 /**
- * mem_cgroup_page_lruvec - return lruvec for isolating/putting an LRU page
- * @page: the page
+ * mem_cgroup_folio_lruvec - return lruvec for isolating/putting an LRU folio
+ * @folio: Pointer to the folio.
  *
- * This function relies on page->mem_cgroup being stable.
+ * This function relies on folio->mem_cgroup being stable.
  */
-static inline struct lruvec *mem_cgroup_page_lruvec(struct page *page)
+static inline struct lruvec *mem_cgroup_folio_lruvec(struct folio *folio)
 {
-	pg_data_t *pgdat = page_pgdat(page);
-	struct mem_cgroup *memcg = page_memcg(page);
+	struct mem_cgroup *memcg = folio_memcg(folio);

-	VM_WARN_ON_ONCE_PAGE(!memcg && !mem_cgroup_disabled(), page);
-	return mem_cgroup_lruvec(memcg, pgdat);
+	VM_WARN_ON_ONCE_FOLIO(!memcg && !mem_cgroup_disabled(), folio);
+	return mem_cgroup_lruvec(memcg, folio_pgdat(folio));
 }

 struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p);
@@ -1222,10 +1221,9 @@ static inline struct lruvec *mem_cgroup_lruvec(struct mem_cgroup *memcg,
 	return &pgdat->__lruvec;
 }

-static inline struct lruvec *mem_cgroup_page_lruvec(struct page *page)
+static inline struct lruvec *mem_cgroup_folio_lruvec(struct folio *folio)
 {
-	pg_data_t *pgdat = page_pgdat(page);
-
+	struct pglist_data *pgdat = folio_pgdat(folio);
 	return &pgdat->__lruvec;
 }

@@ -1485,6 +1483,11 @@ unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order,
 }
 #endif /* CONFIG_MEMCG */

+static inline struct lruvec *mem_cgroup_page_lruvec(struct page *page)
+{
+	return mem_cgroup_folio_lruvec(page_folio(page));
+}
+
 static inline void __inc_lruvec_kmem_state(void *p, enum node_stat_item idx)
 {
 	__mod_lruvec_kmem_state(p, idx, 1);
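As a usage sketch (a hypothetical caller, not part of the patch): the
retained page wrapper and the new folio function resolve to the same
lruvec, so legacy call sites keep working unchanged:

/* Hypothetical caller; both paths return the same lruvec. */
static struct lruvec *lruvec_of(struct page *page)
{
	struct lruvec *a = mem_cgroup_page_lruvec(page);
	struct lruvec *b = mem_cgroup_folio_lruvec(page_folio(page));

	VM_BUG_ON(a != b);	/* wrapper is page_folio() + folio version */
	return b;
}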
From patchwork Wed Jun 30 04:00:32 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12351207
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, cgroups@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", Johannes Weiner, Michal Hocko, Vladimir Davydov
Subject: [PATCH v3 16/18] mm/memcg: Add folio_lruvec_lock() and similar functions
Date: Wed, 30 Jun 2021 05:00:32 +0100
Message-Id: <20210630040034.1155892-17-willy@infradead.org>
In-Reply-To: <20210630040034.1155892-1-willy@infradead.org>
References: <20210630040034.1155892-1-willy@infradead.org>

These are the folio equivalents of lock_page_lruvec() and similar
functions.  Retain lock_page_lruvec() and its variants as wrappers so
we don't have to convert all of their callers twice.  Also convert
lruvec_memcg_debug() to take a folio.
Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/memcontrol.h | 39 +++++++++++++++++++++++++++-----------
 mm/compaction.c            |  2 +-
 mm/memcontrol.c            | 33 ++++++++++++++------------------
 3 files changed, 43 insertions(+), 31 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index a7e1ccbc7ed6..b21a77669277 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -769,15 +769,16 @@ struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p);

 struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm);

-struct lruvec *lock_page_lruvec(struct page *page);
-struct lruvec *lock_page_lruvec_irq(struct page *page);
-struct lruvec *lock_page_lruvec_irqsave(struct page *page,
+struct lruvec *folio_lruvec_lock(struct folio *folio);
+struct lruvec *folio_lruvec_lock_irq(struct folio *folio);
+struct lruvec *folio_lruvec_lock_irqsave(struct folio *folio,
						unsigned long *flags);

 #ifdef CONFIG_DEBUG_VM
-void lruvec_memcg_debug(struct lruvec *lruvec, struct page *page);
+void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio);
 #else
-static inline void lruvec_memcg_debug(struct lruvec *lruvec, struct page *page)
+static inline
+void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
 {
 }
 #endif
@@ -1257,26 +1258,26 @@ static inline void mem_cgroup_put(struct mem_cgroup *memcg)
 {
 }

-static inline struct lruvec *lock_page_lruvec(struct page *page)
+static inline struct lruvec *folio_lruvec_lock(struct folio *folio)
 {
-	struct pglist_data *pgdat = page_pgdat(page);
+	struct pglist_data *pgdat = folio_pgdat(folio);

 	spin_lock(&pgdat->__lruvec.lru_lock);
 	return &pgdat->__lruvec;
 }

-static inline struct lruvec *lock_page_lruvec_irq(struct page *page)
+static inline struct lruvec *folio_lruvec_lock_irq(struct folio *folio)
 {
-	struct pglist_data *pgdat = page_pgdat(page);
+	struct pglist_data *pgdat = folio_pgdat(folio);

 	spin_lock_irq(&pgdat->__lruvec.lru_lock);
 	return &pgdat->__lruvec;
 }

-static inline struct lruvec *lock_page_lruvec_irqsave(struct page *page,
+static inline struct lruvec *folio_lruvec_lock_irqsave(struct folio *folio,
		unsigned long *flagsp)
 {
-	struct pglist_data *pgdat = page_pgdat(page);
+	struct pglist_data *pgdat = folio_pgdat(folio);

 	spin_lock_irqsave(&pgdat->__lruvec.lru_lock, *flagsp);
 	return &pgdat->__lruvec;
@@ -1488,6 +1489,22 @@ static inline struct lruvec *mem_cgroup_page_lruvec(struct page *page)
 	return mem_cgroup_folio_lruvec(page_folio(page));
 }

+static inline struct lruvec *lock_page_lruvec(struct page *page)
+{
+	return folio_lruvec_lock(page_folio(page));
+}
+
+static inline struct lruvec *lock_page_lruvec_irq(struct page *page)
+{
+	return folio_lruvec_lock_irq(page_folio(page));
+}
+
+static inline struct lruvec *lock_page_lruvec_irqsave(struct page *page,
+		unsigned long *flags)
+{
+	return folio_lruvec_lock_irqsave(page_folio(page), flags);
+}
+
 static inline void __inc_lruvec_kmem_state(void *p, enum node_stat_item idx)
 {
 	__mod_lruvec_kmem_state(p, idx, 1);
diff --git a/mm/compaction.c b/mm/compaction.c
index 3a509fbf2bea..8b0da04b70f2 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1038,7 +1038,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
			compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);
			locked = lruvec;

-			lruvec_memcg_debug(lruvec, page);
+			lruvec_memcg_debug(lruvec, page_folio(page));

			/* Try get exclusive access under lock */
			if (!skip_updated) {
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 95795b65ae3e..23b166917def 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1153,19 +1153,19 @@ int mem_cgroup_scan_tasks(struct mem_cgroup *memcg,
 }

 #ifdef CONFIG_DEBUG_VM
-void lruvec_memcg_debug(struct lruvec *lruvec, struct page *page)
+void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
 {
	struct mem_cgroup *memcg;

	if (mem_cgroup_disabled())
		return;

-	memcg = page_memcg(page);
+	memcg = folio_memcg(folio);

	if (!memcg)
-		VM_BUG_ON_PAGE(lruvec_memcg(lruvec) != root_mem_cgroup, page);
+		VM_BUG_ON_FOLIO(lruvec_memcg(lruvec) != root_mem_cgroup, folio);
	else
-		VM_BUG_ON_PAGE(lruvec_memcg(lruvec) != memcg, page);
+		VM_BUG_ON_FOLIO(lruvec_memcg(lruvec) != memcg, folio);
 }
 #endif

@@ -1179,38 +1179,33 @@ void lruvec_memcg_debug(struct lruvec *lruvec, struct page *page)
  * - lock_page_memcg()
  * - page->_refcount is zero
  */
-struct lruvec *lock_page_lruvec(struct page *page)
+struct lruvec *folio_lruvec_lock(struct folio *folio)
 {
-	struct lruvec *lruvec;
+	struct lruvec *lruvec = mem_cgroup_folio_lruvec(folio);

-	lruvec = mem_cgroup_page_lruvec(page);
	spin_lock(&lruvec->lru_lock);
-
-	lruvec_memcg_debug(lruvec, page);
+	lruvec_memcg_debug(lruvec, folio);

	return lruvec;
 }

-struct lruvec *lock_page_lruvec_irq(struct page *page)
+struct lruvec *folio_lruvec_lock_irq(struct folio *folio)
 {
-	struct lruvec *lruvec;
+	struct lruvec *lruvec = mem_cgroup_folio_lruvec(folio);

-	lruvec = mem_cgroup_page_lruvec(page);
	spin_lock_irq(&lruvec->lru_lock);
-
-	lruvec_memcg_debug(lruvec, page);
+	lruvec_memcg_debug(lruvec, folio);

	return lruvec;
 }

-struct lruvec *lock_page_lruvec_irqsave(struct page *page, unsigned long *flags)
+struct lruvec *folio_lruvec_lock_irqsave(struct folio *folio,
+		unsigned long *flags)
 {
-	struct lruvec *lruvec;
+	struct lruvec *lruvec = mem_cgroup_folio_lruvec(folio);

-	lruvec = mem_cgroup_page_lruvec(page);
	spin_lock_irqsave(&lruvec->lru_lock, *flags);
-
-	lruvec_memcg_debug(lruvec, page);
+	lruvec_memcg_debug(lruvec, folio);

	return lruvec;
 }
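At a call site, the new interface reads as lock/operate/unlock on the
folio's lruvec. A hypothetical caller, sketched for illustration (the
unlock side still takes the lruvec, as in the existing
unlock_page_lruvec_irqrestore()):

/* Hypothetical caller of folio_lruvec_lock_irqsave(). */
static void touch_folio_lru(struct folio *folio)
{
	unsigned long flags;
	struct lruvec *lruvec;

	lruvec = folio_lruvec_lock_irqsave(folio, &flags);
	/* ... manipulate the folio's LRU state under lru_lock ... */
	unlock_page_lruvec_irqrestore(lruvec, flags);
}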
From patchwork Wed Jun 30 04:00:33 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12351209
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, cgroups@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", Johannes Weiner, Michal Hocko, Vladimir Davydov
Subject: [PATCH v3 17/18] mm/memcg: Add folio_lruvec_relock_irq() and folio_lruvec_relock_irqsave()
Date: Wed, 30 Jun 2021 05:00:33 +0100
Message-Id: <20210630040034.1155892-18-willy@infradead.org>
In-Reply-To: <20210630040034.1155892-1-willy@infradead.org>
References: <20210630040034.1155892-1-willy@infradead.org>

These are the folio equivalents of relock_page_lruvec_irq() and
relock_page_lruvec_irqsave(), which are retained as compatibility
wrappers.  Also convert page_matches_lruvec() to folio_matches_lruvec().
Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/memcontrol.h | 31 ++++++++++++++++++++++---------
 mm/vmscan.c                |  2 +-
 2 files changed, 23 insertions(+), 10 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index b21a77669277..e6b5e8fbf770 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1545,40 +1545,53 @@ static inline void unlock_page_lruvec_irqrestore(struct lruvec *lruvec,
 }

 /* Test requires a stable page->memcg binding, see page_memcg() */
-static inline bool page_matches_lruvec(struct page *page, struct lruvec *lruvec)
+static inline bool folio_matches_lruvec(struct folio *folio,
+		struct lruvec *lruvec)
 {
-	return lruvec_pgdat(lruvec) == page_pgdat(page) &&
-	       lruvec_memcg(lruvec) == page_memcg(page);
+	return lruvec_pgdat(lruvec) == folio_pgdat(folio) &&
+	       lruvec_memcg(lruvec) == folio_memcg(folio);
 }

 /* Don't lock again iff page's lruvec locked */
-static inline struct lruvec *relock_page_lruvec_irq(struct page *page,
+static inline struct lruvec *folio_lruvec_relock_irq(struct folio *folio,
		struct lruvec *locked_lruvec)
 {
	if (locked_lruvec) {
-		if (page_matches_lruvec(page, locked_lruvec))
+		if (folio_matches_lruvec(folio, locked_lruvec))
			return locked_lruvec;

		unlock_page_lruvec_irq(locked_lruvec);
	}

-	return lock_page_lruvec_irq(page);
+	return folio_lruvec_lock_irq(folio);
 }

 /* Don't lock again iff page's lruvec locked */
-static inline struct lruvec *relock_page_lruvec_irqsave(struct page *page,
+static inline struct lruvec *folio_lruvec_relock_irqsave(struct folio *folio,
		struct lruvec *locked_lruvec, unsigned long *flags)
 {
	if (locked_lruvec) {
-		if (page_matches_lruvec(page, locked_lruvec))
+		if (folio_matches_lruvec(folio, locked_lruvec))
			return locked_lruvec;

		unlock_page_lruvec_irqrestore(locked_lruvec, *flags);
	}

-	return lock_page_lruvec_irqsave(page, flags);
+	return folio_lruvec_lock_irqsave(folio, flags);
 }

+static inline struct lruvec *relock_page_lruvec_irq(struct page *page,
+		struct lruvec *locked_lruvec)
+{
+	return folio_lruvec_relock_irq(page_folio(page), locked_lruvec);
+}
+
+static inline struct lruvec *relock_page_lruvec_irqsave(struct page *page,
+		struct lruvec *locked_lruvec, unsigned long *flags)
+{
+	return folio_lruvec_relock_irqsave(page_folio(page), locked_lruvec,
+			flags);
+}
 #ifdef CONFIG_CGROUP_WRITEBACK

 struct wb_domain *mem_cgroup_wb_domain(struct bdi_writeback *wb);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index d7c3cb8688dd..a8d8f4673451 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2063,7 +2063,7 @@ static unsigned int move_pages_to_lru(struct lruvec *lruvec,
		 * All pages were isolated from the same lruvec (and isolation
		 * inhibits memcg migration).
		 */
-		VM_BUG_ON_PAGE(!page_matches_lruvec(page, lruvec), page);
+		VM_BUG_ON_PAGE(!folio_matches_lruvec(page_folio(page), lruvec), page);
		add_page_to_lru_list(page, lruvec);
		nr_pages = thp_nr_pages(page);
		nr_moved += nr_pages;
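The relock helpers exist for batch loops that walk folios belonging to
different lruvecs and only cycle the lru_lock when the lruvec actually
changes. A hypothetical consumer, sketched for illustration (not code
from this series):

/* Hypothetical batch loop using the relock pattern. */
static void process_batch(struct folio **folios, int nr)
{
	struct lruvec *lruvec = NULL;
	unsigned long flags;
	int i;

	for (i = 0; i < nr; i++) {
		struct folio *folio = folios[i];

		/* Drops and retakes the lock only on a lruvec change. */
		lruvec = folio_lruvec_relock_irqsave(folio, lruvec, &flags);
		/* ... operate on folio under lruvec->lru_lock ... */
	}
	if (lruvec)
		unlock_page_lruvec_irqrestore(lruvec, flags);
}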
From patchwork Wed Jun 30 04:00:34 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12351215
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, cgroups@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", Johannes Weiner, Michal Hocko, Vladimir Davydov, Christoph Hellwig
Subject: [PATCH v3 18/18] mm/workingset: Convert workingset_activation to take a folio
Date: Wed, 30 Jun 2021 05:00:34 +0100
Message-Id: <20210630040034.1155892-19-willy@infradead.org>
In-Reply-To: <20210630040034.1155892-1-willy@infradead.org>
References: <20210630040034.1155892-1-willy@infradead.org>

This function already assumed it was being passed a head page.  No
functional change here, except that thp_nr_pages() compiles away on
kernels with THP compiled out, while folio_nr_pages() is always present.
Also convert page_memcg_rcu() to folio_memcg_rcu().

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
---
 include/linux/memcontrol.h | 18 +++++++++---------
 include/linux/swap.h       |  2 +-
 mm/swap.c                  |  2 +-
 mm/workingset.c            | 10 +++++-----
 4 files changed, 16 insertions(+), 16 deletions(-)
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index e6b5e8fbf770..be131c28b3bc 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -462,19 +462,19 @@ static inline struct mem_cgroup *page_memcg(struct page *page)
 }

 /*
- * page_memcg_rcu - locklessly get the memory cgroup associated with a page
- * @page: a pointer to the page struct
+ * folio_memcg_rcu - Locklessly get the memory cgroup associated with a folio.
+ * @folio: Pointer to the folio.
  *
- * Returns a pointer to the memory cgroup associated with the page,
- * or NULL. This function assumes that the page is known to have a
+ * Returns a pointer to the memory cgroup associated with the folio,
+ * or NULL. This function assumes that the folio is known to have a
  * proper memory cgroup pointer. It's not safe to call this function
- * against some type of pages, e.g. slab pages or ex-slab pages.
+ * against some type of folios, e.g. slab folios or ex-slab folios.
  */
-static inline struct mem_cgroup *page_memcg_rcu(struct page *page)
+static inline struct mem_cgroup *folio_memcg_rcu(struct folio *folio)
 {
-	unsigned long memcg_data = READ_ONCE(page->memcg_data);
+	unsigned long memcg_data = READ_ONCE(folio->memcg_data);

-	VM_BUG_ON_PAGE(PageSlab(page), page);
+	VM_BUG_ON_FOLIO(folio_slab(folio), folio);
	WARN_ON_ONCE(!rcu_read_lock_held());

	if (memcg_data & MEMCG_DATA_KMEM) {
@@ -1125,7 +1125,7 @@ static inline struct mem_cgroup *page_memcg(struct page *page)
	return NULL;
 }

-static inline struct mem_cgroup *page_memcg_rcu(struct page *page)
+static inline struct mem_cgroup *folio_memcg_rcu(struct folio *folio)
 {
	WARN_ON_ONCE(!rcu_read_lock_held());
	return NULL;
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 950dd96007ad..614bbef65777 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -325,7 +325,7 @@ static inline swp_entry_t folio_swap_entry(struct folio *folio)
 void workingset_age_nonresident(struct lruvec *lruvec, unsigned long nr_pages);
 void *workingset_eviction(struct page *page, struct mem_cgroup *target_memcg);
 void workingset_refault(struct page *page, void *shadow);
-void workingset_activation(struct page *page);
+void workingset_activation(struct folio *folio);

 /* Only track the nodes of mappings with shadow entries */
 void workingset_update_node(struct xa_node *node);
diff --git a/mm/swap.c b/mm/swap.c
index 8ba62a930370..3c817717af0c 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -447,7 +447,7 @@ void mark_page_accessed(struct page *page)
		else
			__lru_cache_activate_page(page);
		ClearPageReferenced(page);
-		workingset_activation(page);
+		workingset_activation(page_folio(page));
	}
	if (page_is_idle(page))
		clear_page_idle(page);
diff --git a/mm/workingset.c b/mm/workingset.c
index 4f7a306ce75a..86e239ec0306 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -390,9 +390,9 @@ void workingset_refault(struct page *page, void *shadow)

 /**
  * workingset_activation - note a page activation
- * @page: page that is being activated
+ * @folio: Folio that is being activated.
  */
-void workingset_activation(struct page *page)
+void workingset_activation(struct folio *folio)
 {
	struct mem_cgroup *memcg;
	struct lruvec *lruvec;
@@ -405,11 +405,11 @@ void workingset_activation(struct page *page)
	 * XXX: See workingset_refault() - this should return
	 * root_mem_cgroup even for !CONFIG_MEMCG.
	 */
-	memcg = page_memcg_rcu(page);
+	memcg = folio_memcg_rcu(folio);
	if (!mem_cgroup_disabled() && !memcg)
		goto out;
-	lruvec = mem_cgroup_page_lruvec(page);
-	workingset_age_nonresident(lruvec, thp_nr_pages(page));
+	lruvec = mem_cgroup_folio_lruvec(folio);
+	workingset_age_nonresident(lruvec, folio_nr_pages(folio));
 out:
	rcu_read_unlock();
 }
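For context on the rcu_read_lock_held() assertion in folio_memcg_rcu():
callers must enter an RCU read-side section before the lookup and stay
in it for as long as they use the returned memcg, exactly as
workingset_activation() does. A hypothetical standalone caller, sketched
to the same shape:

/* Hypothetical lockless reader of a folio's memcg binding. */
static void peek_memcg(struct folio *folio)
{
	struct mem_cgroup *memcg;

	rcu_read_lock();
	memcg = folio_memcg_rcu(folio);	/* may return NULL */
	if (memcg) {
		/* ... use memcg only inside this RCU section ... */
	}
	rcu_read_unlock();
}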