From patchwork Wed Jun 30 04:00:23 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12351189
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org, cgroups@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", Johannes Weiner, Michal Hocko,
    Vladimir Davydov, Christoph Hellwig, Michal Hocko
Subject: [PATCH v3 07/18] mm/memcg: Convert commit_charge() to take a folio
Date: Wed, 30 Jun 2021 05:00:23 +0100
Message-Id: <20210630040034.1155892-8-willy@infradead.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210630040034.1155892-1-willy@infradead.org>
References: <20210630040034.1155892-1-willy@infradead.org>
The memcg_data is only set on the head page, so enforce that by typing
it as a folio.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig
Acked-by: Michal Hocko
---
 mm/memcontrol.c | 27 +++++++++++++--------------
 1 file changed, 13 insertions(+), 14 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index f369bbaf584b..727bd578ca7d 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2764,9 +2764,9 @@ static void cancel_charge(struct mem_cgroup *memcg, unsigned int nr_pages)
 }
 #endif
 
-static void commit_charge(struct page *page, struct mem_cgroup *memcg)
+static void commit_charge(struct folio *folio, struct mem_cgroup *memcg)
 {
-	VM_BUG_ON_PAGE(page_memcg(page), page);
+	VM_BUG_ON_FOLIO(folio_memcg(folio), folio);
 	/*
 	 * Any of the following ensures page's memcg stability:
 	 *
@@ -2775,7 +2775,7 @@ static void commit_charge(struct page *page, struct mem_cgroup *memcg)
 	 * - lock_page_memcg()
 	 * - exclusive reference
 	 */
-	page->memcg_data = (unsigned long)memcg;
+	folio->memcg_data = (unsigned long)memcg;
 }
 
 static struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg)
@@ -6679,7 +6679,8 @@ void mem_cgroup_calculate_protection(struct mem_cgroup *root,
 static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
 			       gfp_t gfp)
 {
-	unsigned int nr_pages = thp_nr_pages(page);
+	struct folio *folio = page_folio(page);
+	unsigned int nr_pages = folio_nr_pages(folio);
 	int ret;
 
 	ret = try_charge(memcg, gfp, nr_pages);
@@ -6687,7 +6688,7 @@ static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
 		goto out;
 
 	css_get(&memcg->css);
-	commit_charge(page, memcg);
+	commit_charge(folio, memcg);
 
 	local_irq_disable();
 	mem_cgroup_charge_statistics(memcg, nr_pages);
@@ -6947,21 +6948,21 @@ void mem_cgroup_uncharge_list(struct list_head *page_list)
  */
 void mem_cgroup_migrate(struct page *oldpage, struct page *newpage)
 {
+	struct folio *newfolio = page_folio(newpage);
 	struct mem_cgroup *memcg;
-	unsigned int nr_pages;
+	unsigned int nr_pages = folio_nr_pages(newfolio);
 	unsigned long flags;
 
 	VM_BUG_ON_PAGE(!PageLocked(oldpage), oldpage);
-	VM_BUG_ON_PAGE(!PageLocked(newpage), newpage);
-	VM_BUG_ON_PAGE(PageAnon(oldpage) != PageAnon(newpage), newpage);
-	VM_BUG_ON_PAGE(PageTransHuge(oldpage) != PageTransHuge(newpage),
-		       newpage);
+	VM_BUG_ON_FOLIO(!folio_locked(newfolio), newfolio);
+	VM_BUG_ON_FOLIO(PageAnon(oldpage) != folio_anon(newfolio), newfolio);
+	VM_BUG_ON_FOLIO(compound_nr(oldpage) != nr_pages, newfolio);
 
 	if (mem_cgroup_disabled())
 		return;
 
 	/* Page cache replacement: new page already charged? */
-	if (page_memcg(newpage))
+	if (folio_memcg(newfolio))
 		return;
 
 	memcg = page_memcg(oldpage);
@@ -6970,8 +6971,6 @@ void mem_cgroup_migrate(struct page *oldpage, struct page *newpage)
 		return;
 
 	/* Force-charge the new page. The old one will be freed soon */
-	nr_pages = thp_nr_pages(newpage);
-
 	if (!mem_cgroup_is_root(memcg)) {
 		page_counter_charge(&memcg->memory, nr_pages);
 		if (do_memsw_account())
@@ -6979,7 +6978,7 @@ void mem_cgroup_migrate(struct page *oldpage, struct page *newpage)
 	}
 
 	css_get(&memcg->css);
-	commit_charge(newpage, memcg);
+	commit_charge(newfolio, memcg);
 
 	local_irq_save(flags);
 	mem_cgroup_charge_statistics(memcg, nr_pages);
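
For readers new to folios: the type change is what provides the
enforcement the commit message describes. A struct folio can only refer
to a head page, so commit_charge() can no longer be handed a tail page
whose memcg_data write would land in the wrong place. The sketch below
is a minimal userspace model of that idea, not kernel code; the struct
layouts and the page_folio()/commit_charge() bodies here are simplified
stand-ins for the real implementations.

	/*
	 * Userspace model of the head/tail page relationship the patch
	 * relies on.  Simplified stand-ins, not the kernel's definitions.
	 */
	#include <assert.h>
	#include <stdio.h>

	struct page {
		struct page *head;        /* tail pages point at their head */
		unsigned long memcg_data; /* meaningful on the head page only */
	};

	/* A folio wraps a head page; the type cannot name a tail page. */
	struct folio {
		struct page page;
	};

	/* Model of page_folio(): any page, head or tail, resolves to its
	 * folio.  The cast works because page is folio's first member. */
	static struct folio *page_folio(struct page *page)
	{
		return (struct folio *)page->head;
	}

	/*
	 * Model of the patched commit_charge(): taking a struct folio *
	 * means a caller can no longer pass a tail page by mistake, so
	 * the store below always hits the head page's memcg_data.
	 */
	static void commit_charge(struct folio *folio, unsigned long memcg)
	{
		assert(folio->page.head == &folio->page); /* always a head */
		folio->page.memcg_data = memcg;
	}

	int main(void)
	{
		struct page head = { .head = &head };
		struct page tail = { .head = &head };

		/* Charging through the tail still updates the head page. */
		commit_charge(page_folio(&tail), 0xf00d);
		printf("head.memcg_data = %#lx\n", head.memcg_data);
		return 0;
	}

The kernel gets the same effect from its real folio type: after this
patch the compiler rejects any call site that passes a raw struct
page * to commit_charge(), instead of relying on every caller to
remember to pass the head page.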