From patchwork Tue May 31 15:06:06 2022
X-Patchwork-Id: 12865812
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH 1/6] mm/page_alloc: Remove zone parameter from free_one_page()
Date: Tue, 31 May 2022 16:06:06 +0100
Message-Id: <20220531150611.1303156-2-willy@infradead.org>
In-Reply-To: <20220531150611.1303156-1-willy@infradead.org>
References: <20220531150611.1303156-1-willy@infradead.org>

Both callers pass in page_zone(page), so move that call into
free_one_page().  Shrinks page_alloc.o by 196 bytes with allmodconfig.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: David Hildenbrand
Reviewed-by: Miaohe Lin
---
 mm/page_alloc.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 29d775b60cf9..68bb77900f67 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1529,11 +1529,10 @@ static void free_pcppages_bulk(struct zone *zone, int count,
     spin_unlock(&zone->lock);
 }
 
-static void free_one_page(struct zone *zone,
-                struct page *page, unsigned long pfn,
-                unsigned int order,
-                int migratetype, fpi_t fpi_flags)
+static void free_one_page(struct page *page, unsigned long pfn,
+        unsigned int order, int migratetype, fpi_t fpi_flags)
 {
+    struct zone *zone = page_zone(page);
     unsigned long flags;
 
     spin_lock_irqsave(&zone->lock, flags);
@@ -3448,7 +3447,7 @@ void free_unref_page(struct page *page, unsigned int order)
     migratetype = get_pcppage_migratetype(page);
     if (unlikely(migratetype >= MIGRATE_PCPTYPES)) {
         if (unlikely(is_migrate_isolate(migratetype))) {
-            free_one_page(page_zone(page), page, pfn, order, migratetype, FPI_NONE);
+            free_one_page(page, pfn, order, migratetype, FPI_NONE);
             return;
         }
         migratetype = MIGRATE_MOVABLE;
@@ -3484,7 +3483,7 @@ void free_unref_page_list(struct list_head *list)
         migratetype = get_pcppage_migratetype(page);
         if (unlikely(is_migrate_isolate(migratetype))) {
             list_del(&page->lru);
-            free_one_page(page_zone(page), page, pfn, 0, migratetype, FPI_NONE);
+            free_one_page(page, pfn, 0, migratetype, FPI_NONE);
             continue;
         }
     }
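The refactoring pattern here is worth spelling out: the zone is always
derivable from the page itself, so passing both was redundant.  Below is a
minimal userspace model of the before/after calling convention -- the types
and function bodies are hypothetical stand-ins, not kernel code.

#include <stdio.h>

struct zone { const char *name; };
struct page { struct zone *zone; };

/* Model of page_zone(): the zone is recoverable from the page alone. */
static struct zone *page_zone(const struct page *page)
{
    return page->zone;
}

/* Before: every caller passed the zone, even though it always equalled
 * page_zone(page). */
static void free_one_page_old(struct zone *zone, struct page *page)
{
    (void)page;
    printf("freeing into zone %s\n", zone->name);
}

/* After: the callee derives the zone, shrinking every call site. */
static void free_one_page(struct page *page)
{
    struct zone *zone = page_zone(page);

    printf("freeing into zone %s\n", zone->name);
}

int main(void)
{
    struct zone normal = { "Normal" };
    struct page page = { &normal };

    free_one_page_old(page_zone(&page), &page);    /* old convention */
    free_one_page(&page);                          /* new convention */
    return 0;
}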
From patchwork Tue May 31 15:06:07 2022
X-Patchwork-Id: 12865815
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH 2/6] mm/page_alloc: Rename free_the_page() to free_frozen_pages()
Date: Tue, 31 May 2022 16:06:07 +0100
Message-Id: <20220531150611.1303156-3-willy@infradead.org>
In-Reply-To: <20220531150611.1303156-1-willy@infradead.org>
References: <20220531150611.1303156-1-willy@infradead.org>

In preparation for making this function available outside page_alloc,
rename it to free_frozen_pages(), which fits better with the other
memory allocation/free functions.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: David Hildenbrand
Reviewed-by: Miaohe Lin
---
 mm/page_alloc.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 68bb77900f67..6a8676cb69db 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -687,7 +687,7 @@ static inline bool pcp_allowed_order(unsigned int order)
     return false;
 }
 
-static inline void free_the_page(struct page *page, unsigned int order)
+static inline void free_frozen_pages(struct page *page, unsigned int order)
 {
     if (pcp_allowed_order(order))        /* Via pcp? */
         free_unref_page(page, order);
@@ -713,7 +713,7 @@ static inline void free_the_page(struct page *page, unsigned int order)
 void free_compound_page(struct page *page)
 {
     mem_cgroup_uncharge(page_folio(page));
-    free_the_page(page, compound_order(page));
+    free_frozen_pages(page, compound_order(page));
 }
 
 static void prep_compound_head(struct page *page, unsigned int order)
@@ -5507,10 +5507,10 @@ EXPORT_SYMBOL(get_zeroed_page);
 void __free_pages(struct page *page, unsigned int order)
 {
     if (put_page_testzero(page))
-        free_the_page(page, order);
+        free_frozen_pages(page, order);
     else if (!PageHead(page))
         while (order-- > 0)
-            free_the_page(page + (1 << order), order);
+            free_frozen_pages(page + (1 << order), order);
 }
 EXPORT_SYMBOL(__free_pages);
@@ -5561,7 +5561,7 @@ void __page_frag_cache_drain(struct page *page, unsigned int count)
     VM_BUG_ON_PAGE(page_ref_count(page) == 0, page);
 
     if (page_ref_sub_and_test(page, count))
-        free_the_page(page, compound_order(page));
+        free_frozen_pages(page, compound_order(page));
 }
 EXPORT_SYMBOL(__page_frag_cache_drain);
@@ -5602,7 +5602,7 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
             goto refill;
 
         if (unlikely(nc->pfmemalloc)) {
-            free_the_page(page, compound_order(page));
+            free_frozen_pages(page, compound_order(page));
             goto refill;
         }
@@ -5634,7 +5634,7 @@ void page_frag_free(void *addr)
     struct page *page = virt_to_head_page(addr);
 
     if (unlikely(put_page_testzero(page)))
-        free_the_page(page, compound_order(page));
+        free_frozen_pages(page, compound_order(page));
 }
 EXPORT_SYMBOL(page_frag_free);
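The rename itself is mechanical, but the __free_pages() hunk above shows
what the new name is contrasted against: a refcounted free must pay an
atomic dec-and-test before the block can go back to the allocator.  A rough
userspace model of that check follows (simplified -- the real function also
splits a non-compound high-order block when a speculative reference pins
the head page):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct page { atomic_int refcount; };

/* One atomic read-modify-write per free: the cost later patches avoid. */
static bool put_page_testzero(struct page *page)
{
    return atomic_fetch_sub(&page->refcount, 1) == 1;
}

/* Stand-in for the real buddy free path. */
static void free_frozen_pages(struct page *page, unsigned int order)
{
    (void)page;
    printf("order-%u block returned to the allocator\n", order);
}

static void __free_pages(struct page *page, unsigned int order)
{
    if (put_page_testzero(page))
        free_frozen_pages(page, order);
    /* else: another reference still holds the page */
}

int main(void)
{
    struct page page = { 1 };

    __free_pages(&page, 3);
    return 0;
}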
From patchwork Tue May 31 15:06:08 2022
X-Patchwork-Id: 12865816
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH 3/6] mm/page_alloc: Export free_frozen_pages() instead of free_unref_page()
Date: Tue, 31 May 2022 16:06:08 +0100
Message-Id: <20220531150611.1303156-4-willy@infradead.org>
In-Reply-To: <20220531150611.1303156-1-willy@infradead.org>
References: <20220531150611.1303156-1-willy@infradead.org>

This API makes more sense for slab to use, and it works perfectly well
for swap.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: David Hildenbrand
---
 mm/internal.h   |  4 ++--
 mm/page_alloc.c | 18 +++++++++---------
 mm/swap.c       |  2 +-
 3 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index c0f8fbe0445b..f1c0dab2b98e 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -362,8 +362,8 @@ extern void post_alloc_hook(struct page *page, unsigned int order,
                     gfp_t gfp_flags);
 extern int user_min_free_kbytes;
 
-extern void free_unref_page(struct page *page, unsigned int order);
-extern void free_unref_page_list(struct list_head *list);
+void free_frozen_pages(struct page *, unsigned int order);
+void free_unref_page_list(struct list_head *list);
 
 extern void zone_pcp_update(struct zone *zone, int cpu_online);
 extern void zone_pcp_reset(struct zone *zone);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6a8676cb69db..825922000781 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -687,14 +687,6 @@ static inline bool pcp_allowed_order(unsigned int order)
     return false;
 }
 
-static inline void free_frozen_pages(struct page *page, unsigned int order)
-{
-    if (pcp_allowed_order(order))        /* Via pcp? */
-        free_unref_page(page, order);
-    else
-        __free_pages_ok(page, order, FPI_NONE);
-}
-
 /*
  * Higher-order pages are called "compound pages".  They are structured thusly:
  *
@@ -3428,7 +3420,7 @@ static void free_unref_page_commit(struct page *page, int migratetype,
 /*
  * Free a pcp page
  */
-void free_unref_page(struct page *page, unsigned int order)
+static void free_unref_page(struct page *page, unsigned int order)
 {
     unsigned long flags;
     unsigned long pfn = page_to_pfn(page);
@@ -3458,6 +3450,14 @@ void free_unref_page(struct page *page, unsigned int order)
     local_unlock_irqrestore(&pagesets.lock, flags);
 }
 
+void free_frozen_pages(struct page *page, unsigned int order)
+{
+    if (pcp_allowed_order(order))        /* Via pcp? */
+        free_unref_page(page, order);
+    else
+        __free_pages_ok(page, order, FPI_NONE);
+}
+
 /*
  * Free a list of 0-order pages
  */
diff --git a/mm/swap.c b/mm/swap.c
index f3922a96b2e9..c474bdf838e3 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -103,7 +103,7 @@ static void __put_single_page(struct page *page)
 {
     __page_cache_release(page);
     mem_cgroup_uncharge(page_folio(page));
-    free_unref_page(page, 0);
+    free_frozen_pages(page, 0);
 }
 
 static void __put_compound_page(struct page *page)
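The net effect of this patch is a narrower interface: free_unref_page()
becomes file-local, and free_frozen_pages() is the single entry point other
files link against, choosing between the per-cpu lists and the buddy free
path.  A sketch of that layering with stand-in bodies, and a
pcp_allowed_order() simplified to ignore the THP special case:

#include <stdbool.h>
#include <stdio.h>

struct page { unsigned long flags; };

#define PAGE_ALLOC_COSTLY_ORDER 3

/* Simplified: only low orders are cached on the per-cpu lists. */
static bool pcp_allowed_order(unsigned int order)
{
    return order <= PAGE_ALLOC_COSTLY_ORDER;
}

/* File-local after this patch. */
static void free_unref_page(struct page *page, unsigned int order)
{
    (void)page;
    printf("order %u: freed via per-cpu list\n", order);
}

static void __free_pages_ok(struct page *page, unsigned int order)
{
    (void)page;
    printf("order %u: freed via buddy path\n", order);
}

/* The one entry point other files see. */
void free_frozen_pages(struct page *page, unsigned int order)
{
    if (pcp_allowed_order(order))    /* Via pcp? */
        free_unref_page(page, order);
    else
        __free_pages_ok(page, order);
}

int main(void)
{
    struct page page = { 0 };

    free_frozen_pages(&page, 0);    /* takes the pcp path */
    free_frozen_pages(&page, 9);    /* takes the buddy path */
    return 0;
}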
From patchwork Tue May 31 15:06:09 2022
X-Patchwork-Id: 12865818
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH 4/6] mm/page_alloc: Add alloc_frozen_pages()
Date: Tue, 31 May 2022 16:06:09 +0100
Message-Id: <20220531150611.1303156-5-willy@infradead.org>
In-Reply-To: <20220531150611.1303156-1-willy@infradead.org>
References: <20220531150611.1303156-1-willy@infradead.org>

Provide an interface to allocate pages from the page allocator without
incrementing their refcount.  This saves an atomic operation on free,
which may be beneficial to some users (eg slab).

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/internal.h   | 11 +++++++++
 mm/mempolicy.c  | 61 ++++++++++++++++++++++++++++++-------------------
 mm/page_alloc.c | 18 +++++++++++----
 3 files changed, 63 insertions(+), 27 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index f1c0dab2b98e..bf70ee2e38e9 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -362,9 +362,20 @@ extern void post_alloc_hook(struct page *page, unsigned int order,
                     gfp_t gfp_flags);
 extern int user_min_free_kbytes;
 
+struct page *__alloc_frozen_pages(gfp_t, unsigned int order, int nid,
+        nodemask_t *);
 void free_frozen_pages(struct page *, unsigned int order);
 void free_unref_page_list(struct list_head *list);
 
+#ifdef CONFIG_NUMA
+struct page *alloc_frozen_pages(gfp_t, unsigned int order);
+#else
+static inline struct page *alloc_frozen_pages(gfp_t gfp, unsigned int order)
+{
+    return __alloc_frozen_pages(gfp, order, numa_node_id(), NULL);
+}
+#endif
+
 extern void zone_pcp_update(struct zone *zone, int cpu_online);
 extern void zone_pcp_reset(struct zone *zone);
 extern void zone_pcp_disable(struct zone *zone);
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index d39b01fd52fe..ac7c45d0f7dc 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2102,7 +2102,7 @@ static struct page *alloc_page_interleave(gfp_t gfp, unsigned order,
 {
     struct page *page;
 
-    page = __alloc_pages(gfp, order, nid, NULL);
+    page = __alloc_frozen_pages(gfp, order, nid, NULL);
     /* skip NUMA_INTERLEAVE_HIT counter update if numa stats is disabled */
     if (!static_branch_likely(&vm_numa_stat_key))
         return page;
@@ -2128,9 +2128,9 @@ static struct page *alloc_pages_preferred_many(gfp_t gfp, unsigned int order,
      */
     preferred_gfp = gfp | __GFP_NOWARN;
     preferred_gfp &= ~(__GFP_DIRECT_RECLAIM | __GFP_NOFAIL);
-    page = __alloc_pages(preferred_gfp, order, nid, &pol->nodes);
+    page = __alloc_frozen_pages(preferred_gfp, order, nid, &pol->nodes);
     if (!page)
-        page = __alloc_pages(gfp, order, nid, NULL);
+        page = __alloc_frozen_pages(gfp, order, nid, NULL);
 
     return page;
 }
@@ -2169,8 +2169,11 @@ struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
         mpol_cond_put(pol);
         gfp |= __GFP_COMP;
         page = alloc_page_interleave(gfp, order, nid);
-        if (page && order > 1)
-            prep_transhuge_page(page);
+        if (page) {
+            set_page_refcounted(page);
+            if (order > 1)
+                prep_transhuge_page(page);
+        }
         folio = (struct folio *)page;
         goto out;
     }
@@ -2182,8 +2185,11 @@ struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
         gfp |= __GFP_COMP;
         page = alloc_pages_preferred_many(gfp, order, node, pol);
         mpol_cond_put(pol);
-        if (page && order > 1)
-            prep_transhuge_page(page);
+        if (page) {
+            set_page_refcounted(page);
+            if (order > 1)
+                prep_transhuge_page(page);
+        }
         folio = (struct folio *)page;
         goto out;
     }
@@ -2237,21 +2243,7 @@ struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
 }
 EXPORT_SYMBOL(vma_alloc_folio);
 
-/**
- * alloc_pages - Allocate pages.
- * @gfp: GFP flags.
- * @order: Power of two of number of pages to allocate.
- *
- * Allocate 1 << @order contiguous pages.  The physical address of the
- * first page is naturally aligned (eg an order-3 allocation will be aligned
- * to a multiple of 8 * PAGE_SIZE bytes).  The NUMA policy of the current
- * process is honoured when in process context.
- *
- * Context: Can be called from any context, providing the appropriate GFP
- * flags are used.
- * Return: The page on success or NULL if allocation fails.
- */
-struct page *alloc_pages(gfp_t gfp, unsigned order)
+struct page *alloc_frozen_pages(gfp_t gfp, unsigned order)
 {
     struct mempolicy *pol = &default_policy;
     struct page *page;
@@ -2269,12 +2261,35 @@ struct page *alloc_pages(gfp_t gfp, unsigned order)
         page = alloc_pages_preferred_many(gfp, order,
                 policy_node(gfp, pol, numa_node_id()), pol);
     else
-        page = __alloc_pages(gfp, order,
+        page = __alloc_frozen_pages(gfp, order,
                 policy_node(gfp, pol, numa_node_id()),
                 policy_nodemask(gfp, pol));
 
     return page;
 }
+
+/**
+ * alloc_pages - Allocate pages.
+ * @gfp: GFP flags.
+ * @order: Power of two of number of pages to allocate.
+ *
+ * Allocate 1 << @order contiguous pages.  The physical address of the
+ * first page is naturally aligned (eg an order-3 allocation will be aligned
+ * to a multiple of 8 * PAGE_SIZE bytes).  The NUMA policy of the current
+ * process is honoured when in process context.
+ *
+ * Context: Can be called from any context, providing the appropriate GFP
+ * flags are used.
+ * Return: The page on success or NULL if allocation fails.
+ */
+struct page *alloc_pages(gfp_t gfp, unsigned order)
+{
+    struct page *page = alloc_frozen_pages(gfp, order);
+
+    if (page)
+        set_page_refcounted(page);
+    return page;
+}
 EXPORT_SYMBOL(alloc_pages);
 
 struct folio *folio_alloc(gfp_t gfp, unsigned order)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 825922000781..49d8f04d14ef 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2390,7 +2390,6 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
     bool init_tags = init && (gfp_flags & __GFP_ZEROTAGS);
 
     set_page_private(page, 0);
-    set_page_refcounted(page);
 
     arch_alloc_page(page, order);
     debug_pagealloc_map_pages(page, 1 << order);
@@ -5386,8 +5385,8 @@ EXPORT_SYMBOL_GPL(__alloc_pages_bulk);
 /*
  * This is the 'heart' of the zoned buddy allocator.
  */
-struct page *__alloc_pages(gfp_t gfp, unsigned int order, int preferred_nid,
-                            nodemask_t *nodemask)
+struct page *__alloc_frozen_pages(gfp_t gfp, unsigned int order,
+        int preferred_nid, nodemask_t *nodemask)
 {
     struct page *page;
     unsigned int alloc_flags = ALLOC_WMARK_LOW;
@@ -5440,7 +5439,7 @@ struct page *__alloc_pages(gfp_t gfp, unsigned int order, int preferred_nid,
 out:
     if (memcg_kmem_enabled() && (gfp & __GFP_ACCOUNT) && page &&
         unlikely(__memcg_kmem_charge_page(page, gfp, order) != 0)) {
-        __free_pages(page, order);
+        free_frozen_pages(page, order);
         page = NULL;
     }
 
@@ -5448,6 +5447,17 @@ struct page *__alloc_pages(gfp_t gfp, unsigned int order, int preferred_nid,
     return page;
 }
+
+struct page *__alloc_pages(gfp_t gfp, unsigned int order, int preferred_nid,
+        nodemask_t *nodemask)
+{
+    struct page *page;
+
+    page = __alloc_frozen_pages(gfp, order, preferred_nid, nodemask);
+    if (page)
+        set_page_refcounted(page);
+    return page;
+}
 EXPORT_SYMBOL(__alloc_pages);
 
 struct folio *__folio_alloc(gfp_t gfp, unsigned int order, int preferred_nid,
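After this patch the API splits cleanly: the frozen allocator hands back
pages with refcount zero, and alloc_pages()/__alloc_pages() become thin
wrappers that make the page refcounted.  A userspace model of where the
atomic cost lands (hypothetical types; calloc stands in for the buddy
allocator):

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct page { atomic_int refcount; };

/* Frozen allocation: the refcount stays zero. */
static struct page *alloc_frozen_pages(unsigned int order)
{
    (void)order;
    return calloc(1, sizeof(struct page));
}

static void set_page_refcounted(struct page *page)
{
    atomic_store(&page->refcount, 1);
}

/* The conventional API is now just a wrapper. */
static struct page *alloc_pages(unsigned int order)
{
    struct page *page = alloc_frozen_pages(order);

    if (page)
        set_page_refcounted(page);
    return page;
}

/* Freeing a frozen page needs no atomic decrement at all. */
static void free_frozen_pages(struct page *page, unsigned int order)
{
    (void)order;
    free(page);
}

int main(void)
{
    struct page *frozen = alloc_frozen_pages(0);
    struct page *counted = alloc_pages(0);

    printf("frozen refcount %d, counted refcount %d\n",
           atomic_load(&frozen->refcount),
           atomic_load(&counted->refcount));
    free_frozen_pages(frozen, 0);
    free(counted);
    return 0;
}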
From patchwork Tue May 31 15:06:10 2022
X-Patchwork-Id: 12865817
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH 5/6] slab: Allocate frozen pages
Date: Tue, 31 May 2022 16:06:10 +0100
Message-Id: <20220531150611.1303156-6-willy@infradead.org>
In-Reply-To: <20220531150611.1303156-1-willy@infradead.org>
References: <20220531150611.1303156-1-willy@infradead.org>

Since slab does not use the page refcount, it can allocate and free
frozen pages, saving one atomic operation per free.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/slab.c | 23 +++++++++++------------
 1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index f8cd00f4ba13..c5c53ed304d1 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1355,23 +1355,23 @@ slab_out_of_memory(struct kmem_cache *cachep, gfp_t gfpflags, int nodeid)
 static struct slab *kmem_getpages(struct kmem_cache *cachep, gfp_t flags,
                                 int nodeid)
 {
-    struct folio *folio;
+    struct page *page;
     struct slab *slab;
 
     flags |= cachep->allocflags;
 
-    folio = (struct folio *) __alloc_pages_node(nodeid, flags, cachep->gfporder);
-    if (!folio) {
+    page = __alloc_frozen_pages(flags, cachep->gfporder, nodeid, NULL);
+    if (!page) {
         slab_out_of_memory(cachep, flags, nodeid);
         return NULL;
     }
 
-    slab = folio_slab(folio);
+    __SetPageSlab(page);
+    slab = (struct slab *)page;
 
     account_slab(slab, cachep->gfporder, cachep, flags);
-    __folio_set_slab(folio);
     /* Record if ALLOC_NO_WATERMARKS was set when allocating the slab */
-    if (sk_memalloc_socks() && page_is_pfmemalloc(folio_page(folio, 0)))
+    if (sk_memalloc_socks() && page_is_pfmemalloc(page))
         slab_set_pfmemalloc(slab);
 
     return slab;
@@ -1383,18 +1383,17 @@ static struct slab *kmem_getpages(struct kmem_cache *cachep, gfp_t flags,
 static void kmem_freepages(struct kmem_cache *cachep, struct slab *slab)
 {
     int order = cachep->gfporder;
-    struct folio *folio = slab_folio(slab);
+    struct page *page = (struct page *)slab;
 
-    BUG_ON(!folio_test_slab(folio));
     __slab_clear_pfmemalloc(slab);
-    __folio_clear_slab(folio);
-    page_mapcount_reset(folio_page(folio, 0));
-    folio->mapping = NULL;
+    __ClearPageSlab(page);
+    page_mapcount_reset(page);
+    page->mapping = NULL;
 
     if (current->reclaim_state)
         current->reclaim_state->reclaimed_slab += 1 << order;
     unaccount_slab(slab, order, cachep);
-    __free_pages(folio_page(folio, 0), order);
+    free_frozen_pages(page, order);
 }
 
 static void kmem_rcu_free(struct rcu_head *head)
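The (struct slab *)page casts above replace folio_slab()/slab_folio() and
depend on struct slab being laid out as an overlay of the first struct page
of the allocation.  A sketch of that invariant with illustrative (not real)
field layouts; the kernel enforces the real equivalents with compile-time
asserts in mm/slab.h:

#include <assert.h>
#include <stddef.h>

/* Illustrative layouts only. */
struct page {
    unsigned long flags;
    void *freelist;
    void *mapping;
};

struct slab {
    unsigned long __page_flags;    /* must alias page->flags */
    void *freelist;                /* must alias page->freelist */
    void *slab_cache;
};

int main(void)
{
    struct page page = { 0 };
    struct slab *slab = (struct slab *)&page;

    /* The cast is only sound while the layouts stay in sync. */
    assert(offsetof(struct slab, __page_flags) ==
           offsetof(struct page, flags));
    assert(offsetof(struct slab, freelist) ==
           offsetof(struct page, freelist));
    slab->freelist = NULL;
    return 0;
}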
From patchwork Tue May 31 15:06:11 2022
X-Patchwork-Id: 12865813
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH 6/6] slub: Allocate frozen pages
Date: Tue, 31 May 2022 16:06:11 +0100
Message-Id: <20220531150611.1303156-7-willy@infradead.org>
In-Reply-To: <20220531150611.1303156-1-willy@infradead.org>
References: <20220531150611.1303156-1-willy@infradead.org>

Since slub does not use the page refcount, it can allocate and free
frozen pages, saving one atomic operation per free.
---
 mm/slub.c | 26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index e5535020e0fd..420a56746a01 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1789,21 +1789,21 @@ static void *setup_object(struct kmem_cache *s, void *object)
 static inline struct slab *alloc_slab_page(gfp_t flags, int node,
         struct kmem_cache_order_objects oo)
 {
-    struct folio *folio;
+    struct page *page;
     struct slab *slab;
     unsigned int order = oo_order(oo);
 
     if (node == NUMA_NO_NODE)
-        folio = (struct folio *)alloc_pages(flags, order);
+        page = alloc_frozen_pages(flags, order);
     else
-        folio = (struct folio *)__alloc_pages_node(node, flags, order);
+        page = __alloc_frozen_pages(flags, order, node, NULL);
 
-    if (!folio)
+    if (!page)
         return NULL;
 
-    slab = folio_slab(folio);
-    __folio_set_slab(folio);
-    if (page_is_pfmemalloc(folio_page(folio, 0)))
+    slab = (struct slab *)page;
+    __SetPageSlab(page);
+    if (page_is_pfmemalloc(page))
         slab_set_pfmemalloc(slab);
 
     return slab;
@@ -2005,8 +2005,8 @@ static struct slab *new_slab(struct kmem_cache *s, gfp_t flags, int node)
 
 static void __free_slab(struct kmem_cache *s, struct slab *slab)
 {
-    struct folio *folio = slab_folio(slab);
-    int order = folio_order(folio);
+    struct page *page = (struct page *)slab;
+    int order = compound_order(page);
     int pages = 1 << order;
 
     if (kmem_cache_debug_flags(s, SLAB_CONSISTENCY_CHECKS)) {
@@ -2018,12 +2018,12 @@ static void __free_slab(struct kmem_cache *s, struct slab *slab)
     }
 
     __slab_clear_pfmemalloc(slab);
-    __folio_clear_slab(folio);
-    folio->mapping = NULL;
+    __ClearPageSlab(page);
+    page->mapping = NULL;
     if (current->reclaim_state)
         current->reclaim_state->reclaimed_slab += pages;
     unaccount_slab(slab, order, s);
-    __free_pages(folio_page(folio, 0), order);
+    free_frozen_pages(page, order);
 }
 
 static void rcu_free_slab(struct rcu_head *h)
@@ -3541,7 +3541,7 @@ static inline void free_large_kmalloc(struct folio *folio, void *object)
         pr_warn_once("object pointer: 0x%p\n", object);
 
     kfree_hook(object);
-    mod_lruvec_page_state(folio_page(folio, 0), NR_SLAB_UNRECLAIMABLE_B,
+    lruvec_stat_mod_folio(folio, NR_SLAB_UNRECLAIMABLE_B,
                   -(PAGE_SIZE << order));
     __free_pages(folio_page(folio, 0), order);
 }
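To make the series' claimed saving concrete: a refcounted free goes through
put_page_testzero(), one atomic read-modify-write per page, while a frozen
free skips it entirely.  A toy userspace model that only counts those
operations (counts illustrative, not a benchmark):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_long atomics_issued;

struct page { atomic_int refcount; };

static bool put_page_testzero(struct page *page)
{
    atomic_fetch_add(&atomics_issued, 1);    /* the op frozen frees skip */
    return atomic_fetch_sub(&page->refcount, 1) == 1;
}

int main(void)
{
    struct page page;
    const long frees = 1000000;

    for (long i = 0; i < frees; i++) {
        atomic_store(&page.refcount, 1);
        (void)put_page_testzero(&page);    /* refcounted free path */
    }
    /* free_frozen_pages() would issue none of these */
    printf("refcounted frees: %ld atomics; frozen frees: 0\n",
           atomic_load(&atomics_issued));
    return 0;
}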