From patchwork Thu Feb 25 15:06:38 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12104249
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, Mike Rapoport,
 Vlastimil Babka, Michal Hocko
Subject: [PATCH v3 3/7] mm/page_alloc: Combine __alloc_pages and
 __alloc_pages_nodemask
Date: Thu, 25 Feb 2021 15:06:38 +0000
Message-Id: <20210225150642.2582252-4-willy@infradead.org>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20210225150642.2582252-1-willy@infradead.org>
References: <20210225150642.2582252-1-willy@infradead.org>

There are only two callers of __alloc_pages() so prune the thicket of
alloc_page variants by combining the two functions together.  Current
callers of __alloc_pages() simply add an extra 'NULL' parameter and
current callers of __alloc_pages_nodemask() call __alloc_pages() instead.

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Vlastimil Babka
Acked-by: Michal Hocko
---
 Documentation/admin-guide/mm/transhuge.rst |  2 +-
 include/linux/gfp.h                        | 13 +++----------
 mm/hugetlb.c                               |  2 +-
 mm/internal.h                              |  4 ++--
 mm/mempolicy.c                             |  6 +++---
 mm/migrate.c                               |  2 +-
 mm/page_alloc.c                            |  5 ++---
 7 files changed, 13 insertions(+), 21 deletions(-)

diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
index 3b8a336511a4..c9c37f16eef8 100644
--- a/Documentation/admin-guide/mm/transhuge.rst
+++ b/Documentation/admin-guide/mm/transhuge.rst
@@ -402,7 +402,7 @@ compact_fail
 	but failed.
 
 It is possible to establish how long the stalls were using the function
-tracer to record how long was spent in __alloc_pages_nodemask and
+tracer to record how long was spent in __alloc_pages() and
 using the mm_page_alloc tracepoint to identify which allocations were
 for huge pages.
 
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 8572a1474e16..f39de931bdf9 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -515,15 +515,8 @@ static inline int arch_make_page_accessible(struct page *page)
 }
 #endif
 
-struct page *
-__alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
-		nodemask_t *nodemask);
-
-static inline struct page *
-__alloc_pages(gfp_t gfp_mask, unsigned int order, int preferred_nid)
-{
-	return __alloc_pages_nodemask(gfp_mask, order, preferred_nid, NULL);
-}
+struct page *__alloc_pages(gfp_t gfp, unsigned int order, int preferred_nid,
+		nodemask_t *nodemask);
 
 /*
  * Allocate pages, preferring the node given as nid. The node must be valid and
@@ -535,7 +528,7 @@ __alloc_pages_node(int nid, gfp_t gfp_mask, unsigned int order)
 	VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES);
 	VM_WARN_ON((gfp_mask & __GFP_THISNODE) && !node_online(nid));
 
-	return __alloc_pages(gfp_mask, order, nid);
+	return __alloc_pages(gfp_mask, order, nid, NULL);
 }
 
 /*
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 8fb42c6dd74b..10b87bfa124e 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1592,7 +1592,7 @@ static struct page *alloc_buddy_huge_page(struct hstate *h,
 		gfp_mask |= __GFP_RETRY_MAYFAIL;
 	if (nid == NUMA_NO_NODE)
 		nid = numa_mem_id();
-	page = __alloc_pages_nodemask(gfp_mask, order, nid, nmask);
+	page = __alloc_pages(gfp_mask, order, nid, nmask);
 	if (page)
 		__count_vm_event(HTLB_BUDDY_PGALLOC);
 	else
diff --git a/mm/internal.h b/mm/internal.h
index 9902648f2206..0c593c142175 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -126,10 +126,10 @@ extern pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address);
  * family of functions.
  *
  * nodemask, migratetype and highest_zoneidx are initialized only once in
- * __alloc_pages_nodemask() and then never change.
+ * __alloc_pages() and then never change.
  *
  * zonelist, preferred_zone and highest_zoneidx are set first in
- * __alloc_pages_nodemask() for the fast path, and might be later changed
+ * __alloc_pages() for the fast path, and might be later changed
  * in __alloc_pages_slowpath(). All other functions pass the whole structure
  * by a const pointer.
  */
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index ab51132547b8..5f0d20298736 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2140,7 +2140,7 @@ static struct page *alloc_page_interleave(gfp_t gfp, unsigned order,
 {
 	struct page *page;
 
-	page = __alloc_pages(gfp, order, nid);
+	page = __alloc_pages(gfp, order, nid, NULL);
 	/* skip NUMA_INTERLEAVE_HIT counter update if numa stats is disabled */
 	if (!static_branch_likely(&vm_numa_stat_key))
 		return page;
@@ -2237,7 +2237,7 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
 
 	nmask = policy_nodemask(gfp, pol);
 	preferred_nid = policy_node(gfp, pol, node);
-	page = __alloc_pages_nodemask(gfp, order, preferred_nid, nmask);
+	page = __alloc_pages(gfp, order, preferred_nid, nmask);
 	mpol_cond_put(pol);
 out:
 	return page;
@@ -2274,7 +2274,7 @@ struct page *alloc_pages_current(gfp_t gfp, unsigned order)
 	if (pol->mode == MPOL_INTERLEAVE)
 		page = alloc_page_interleave(gfp, order, interleave_nodes(pol));
 	else
-		page = __alloc_pages_nodemask(gfp, order,
+		page = __alloc_pages(gfp, order,
 				policy_node(gfp, pol, numa_node_id()),
 				policy_nodemask(gfp, pol));
 
diff --git a/mm/migrate.c b/mm/migrate.c
index 62b81d5257aa..47df0df8f21a 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1617,7 +1617,7 @@ struct page *alloc_migration_target(struct page *page, unsigned long private)
 	if (is_highmem_idx(zidx) || zidx == ZONE_MOVABLE)
 		gfp_mask |= __GFP_HIGHMEM;
 
-	new_page = __alloc_pages_nodemask(gfp_mask, order, nid, mtc->nmask);
+	new_page = __alloc_pages(gfp_mask, order, nid, mtc->nmask);
 
 	if (new_page && PageTransHuge(new_page))
 		prep_transhuge_page(new_page);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 77ab734914dd..eed60c30d74f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4984,8 +4984,7 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
 /*
  * This is the 'heart' of the zoned buddy allocator.
  */
-struct page *
-__alloc_pages_nodemask(gfp_t gfp, unsigned int order, int preferred_nid,
+struct page *__alloc_pages(gfp_t gfp, unsigned int order, int preferred_nid,
 		nodemask_t *nodemask)
 {
 	struct page *page;
@@ -5047,7 +5046,7 @@ __alloc_pages_nodemask(gfp_t gfp, unsigned int order, int preferred_nid,
 
 	return page;
 }
-EXPORT_SYMBOL(__alloc_pages_nodemask);
+EXPORT_SYMBOL(__alloc_pages);
 
 /*
  * Common helper functions. Never use with __GFP_HIGHMEM because the returned
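
For readers following the interface change described above, here is a minimal
illustrative sketch (not part of the patch) of the caller-side convention
before and after this series. The function my_alloc_example() is hypothetical
and only demonstrates the single remaining entry point, which takes an
explicit nodemask with NULL meaning "no nodemask".

/*
 * Illustrative only -- not part of the patch.  my_alloc_example() is a
 * hypothetical caller showing the old and new calling conventions.
 */
#include <linux/gfp.h>

static struct page *my_alloc_example(gfp_t gfp, unsigned int order,
				     int nid, nodemask_t *nmask)
{
	/*
	 * Before this patch a caller picked one of two entry points:
	 *
	 *	page = __alloc_pages(gfp, order, nid);
	 *	page = __alloc_pages_nodemask(gfp, order, nid, nmask);
	 *
	 * After it there is a single entry point; pass NULL when no
	 * nodemask is needed.
	 */
	if (nmask)
		return __alloc_pages(gfp, order, nid, nmask);

	return __alloc_pages(gfp, order, nid, NULL);
}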