From patchwork Wed Apr 20 09:59:01 2022
From: Mel Gorman
To: Nicolas Saenz Julienne
Cc: Marcelo Tosatti, Vlastimil Babka, Michal Hocko, LKML, Linux-MM, Mel Gorman
Subject: [PATCH 1/6] mm/page_alloc: Add page->buddy_list and page->pcp_list
Date: Wed, 20 Apr 2022 10:59:01 +0100
Message-Id: <20220420095906.27349-2-mgorman@techsingularity.net>
In-Reply-To: <20220420095906.27349-1-mgorman@techsingularity.net>
References: <20220420095906.27349-1-mgorman@techsingularity.net>

The page allocator uses page->lru for storing pages on either buddy or
PCP lists. Create page->buddy_list and page->pcp_list as a union with
page->lru. This is simply to clarify what type of list a page is on in
the page allocator.

No functional change intended.
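[Editorial aside: the sketch below is a minimal, self-contained userspace
illustration of the technique this patch relies on. The struct and field
names (struct fake_page) are placeholders rather than the kernel's
definitions; the point is only that members of an anonymous union share
storage, so renaming the list head at call sites changes no layout and no
behaviour, it just documents which list the page is on.]

#include <stddef.h>
#include <assert.h>

/* Minimal stand-in for the kernel's struct list_head. */
struct list_head {
        struct list_head *next, *prev;
};

/*
 * Illustrative page descriptor: buddy_list, pcp_list and lru all name the
 * same bytes, so switching a call site from page->lru to page->buddy_list
 * generates identical code -- it only documents intent.
 */
struct fake_page {
        unsigned long flags;
        union {
                struct list_head lru;           /* LRU / reclaim lists */
                struct list_head buddy_list;    /* free page on a buddy free_list */
                struct list_head pcp_list;      /* free page on a per-cpu list */
        };
};

int main(void)
{
        /* All three names alias the same storage. */
        assert(offsetof(struct fake_page, lru) ==
               offsetof(struct fake_page, buddy_list));
        assert(offsetof(struct fake_page, buddy_list) ==
               offsetof(struct fake_page, pcp_list));
        return 0;
}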
Signed-off-by: Mel Gorman
---
 include/linux/mm_types.h |  5 +++++
 mm/page_alloc.c          | 18 +++++++++---------
 2 files changed, 14 insertions(+), 9 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 8834e38c06a4..a2782e8af307 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -87,6 +87,7 @@ struct page {
                 */
                union {
                        struct list_head lru;
+
                        /* Or, for the Unevictable "LRU list" slot */
                        struct {
                                /* Always even, to negate PageTail */
@@ -94,6 +95,10 @@ struct page {
                                /* Count page's or folio's mlocks */
                                unsigned int mlock_count;
                        };
+
+                       /* Or, free page */
+                       struct list_head buddy_list;
+                       struct list_head pcp_list;
                };
                /* See page-flags.h for PAGE_MAPPING_FLAGS */
                struct address_space *mapping;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2db95780e003..63976ad4b7f1 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -780,7 +780,7 @@ static inline bool set_page_guard(struct zone *zone, struct page *page,
                return false;
 
        __SetPageGuard(page);
-       INIT_LIST_HEAD(&page->lru);
+       INIT_LIST_HEAD(&page->buddy_list);
        set_page_private(page, order);
        /* Guard pages are not available for any usage */
        __mod_zone_freepage_state(zone, -(1 << order), migratetype);
@@ -957,7 +957,7 @@ static inline void add_to_free_list(struct page *page, struct zone *zone,
 {
        struct free_area *area = &zone->free_area[order];
 
-       list_add(&page->lru, &area->free_list[migratetype]);
+       list_add(&page->buddy_list, &area->free_list[migratetype]);
        area->nr_free++;
 }
 
@@ -967,7 +967,7 @@ static inline void add_to_free_list_tail(struct page *page, struct zone *zone,
 {
        struct free_area *area = &zone->free_area[order];
 
-       list_add_tail(&page->lru, &area->free_list[migratetype]);
+       list_add_tail(&page->buddy_list, &area->free_list[migratetype]);
        area->nr_free++;
 }
 
@@ -981,7 +981,7 @@ static inline void move_to_free_list(struct page *page, struct zone *zone,
 {
        struct free_area *area = &zone->free_area[order];
 
-       list_move_tail(&page->lru, &area->free_list[migratetype]);
+       list_move_tail(&page->buddy_list, &area->free_list[migratetype]);
 }
 
 static inline void del_page_from_free_list(struct page *page, struct zone *zone,
@@ -991,7 +991,7 @@ static inline void del_page_from_free_list(struct page *page, struct zone *zone,
        if (page_reported(page))
                __ClearPageReported(page);
 
-       list_del(&page->lru);
+       list_del(&page->buddy_list);
        __ClearPageBuddy(page);
        set_page_private(page, 0);
        zone->free_area[order].nr_free--;
@@ -1493,7 +1493,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
                mt = get_pcppage_migratetype(page);
 
                /* must delete to avoid corrupting pcp list */
-               list_del(&page->lru);
+               list_del(&page->pcp_list);
                count -= nr_pages;
                pcp->count -= nr_pages;
 
@@ -3053,7 +3053,7 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
                 * for IO devices that can merge IO requests if the physical
                 * pages are ordered properly.
                 */
-               list_add_tail(&page->lru, list);
+               list_add_tail(&page->pcp_list, list);
                allocated++;
                if (is_migrate_cma(get_pcppage_migratetype(page)))
                        __mod_zone_page_state(zone, NR_FREE_CMA_PAGES,
@@ -3392,7 +3392,7 @@ static void free_unref_page_commit(struct page *page, int migratetype,
        __count_vm_event(PGFREE);
        pcp = this_cpu_ptr(zone->per_cpu_pageset);
        pindex = order_to_pindex(migratetype, order);
-       list_add(&page->lru, &pcp->lists[pindex]);
+       list_add(&page->pcp_list, &pcp->lists[pindex]);
        pcp->count += 1 << order;
 
        /*
@@ -3656,7 +3656,7 @@ struct page *__rmqueue_pcplist(struct zone *zone, unsigned int order,
                }
 
                page = list_first_entry(list, struct page, lru);
-               list_del(&page->lru);
+               list_del(&page->pcp_list);
                pcp->count -= 1 << order;
        } while (check_new_pcp(page, order));

From patchwork Wed Apr 20 09:59:02 2022
From: Mel Gorman
To: Nicolas Saenz Julienne
Cc: Marcelo Tosatti, Vlastimil Babka, Michal Hocko, LKML, Linux-MM, Mel Gorman
Subject: [PATCH 2/6] mm/page_alloc: Use only one PCP list for THP-sized allocations
Date: Wed, 20 Apr 2022 10:59:02 +0100
Message-Id: <20220420095906.27349-3-mgorman@techsingularity.net>
In-Reply-To: <20220420095906.27349-1-mgorman@techsingularity.net>
References: <20220420095906.27349-1-mgorman@techsingularity.net>
The per_cpu_pages structure is cache-aligned on a standard x86-64
distribution configuration but a later patch will add a new field which
would push the structure into the next cache line. Use only one list to
store THP-sized pages on the per-cpu list. This assumes that the vast
majority of THP-sized allocations are GFP_MOVABLE but even if it was
another type, it would not contribute to serious fragmentation that
potentially causes a later THP allocation failure. Align per_cpu_pages on
the cacheline boundary to ensure there is no false cache sharing.

After this patch, the structure sizing is:

struct per_cpu_pages {
        int                        count;               /*     0     4 */
        int                        high;                /*     4     4 */
        int                        batch;               /*     8     4 */
        short int                  free_factor;         /*    12     2 */
        short int                  expire;              /*    14     2 */
        struct list_head           lists[13];           /*    16   208 */

        /* size: 256, cachelines: 4, members: 6 */
        /* padding: 32 */
} __attribute__((__aligned__(64)));

Signed-off-by: Mel Gorman
---
 include/linux/mmzone.h | 11 +++++++----
 mm/page_alloc.c        |  4 ++--
 2 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 962b14d403e8..abe530748de6 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -358,15 +358,18 @@ enum zone_watermarks {
 };
 
 /*
- * One per migratetype for each PAGE_ALLOC_COSTLY_ORDER plus one additional
- * for pageblock size for THP if configured.
+ * One per migratetype for each PAGE_ALLOC_COSTLY_ORDER. One additional list
+ * for THP which will usually be GFP_MOVABLE. Even if it is another type,
+ * it should not contribute to serious fragmentation causing THP allocation
+ * failures.
  */
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 #define NR_PCP_THP 1
 #else
 #define NR_PCP_THP 0
 #endif
-#define NR_PCP_LISTS (MIGRATE_PCPTYPES * (PAGE_ALLOC_COSTLY_ORDER + 1 + NR_PCP_THP))
+#define NR_LOWORDER_PCP_LISTS (MIGRATE_PCPTYPES * (PAGE_ALLOC_COSTLY_ORDER + 1))
+#define NR_PCP_LISTS (NR_LOWORDER_PCP_LISTS + NR_PCP_THP)
 
 /*
  * Shift to encode migratetype and order in the same integer, with order
@@ -392,7 +395,7 @@ struct per_cpu_pages {
 
        /* Lists of pages, one per migrate type stored on the pcp-lists */
        struct list_head lists[NR_PCP_LISTS];
-};
+} ____cacheline_aligned_in_smp;
 
 struct per_cpu_zonestat {
 #ifdef CONFIG_SMP
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 63976ad4b7f1..ed2deb93a758 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -648,7 +648,7 @@ static inline unsigned int order_to_pindex(int migratetype, int order)
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
        if (order > PAGE_ALLOC_COSTLY_ORDER) {
                VM_BUG_ON(order != pageblock_order);
-               base = PAGE_ALLOC_COSTLY_ORDER + 1;
+               return NR_LOWORDER_PCP_LISTS;
        }
 #else
        VM_BUG_ON(order > PAGE_ALLOC_COSTLY_ORDER);
@@ -662,7 +662,7 @@ static inline int pindex_to_order(unsigned int pindex)
        int order = pindex / MIGRATE_PCPTYPES;
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-       if (order > PAGE_ALLOC_COSTLY_ORDER)
+       if (pindex == NR_LOWORDER_PCP_LISTS)
                order = pageblock_order;
 #else
        VM_BUG_ON(order > PAGE_ALLOC_COSTLY_ORDER);
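[Editorial aside: to make the list-index arithmetic in the hunks above
easier to follow, here is a small standalone sketch of the mapping after
this patch. The constant values are illustrative placeholders (the real
ones depend on the kernel configuration), and the low-order formula is a
simplification of what order_to_pindex() does, not a copy of it.]

#include <assert.h>
#include <stdio.h>

/* Illustrative values only; the kernel derives these from configuration. */
#define MIGRATE_PCPTYPES        3
#define PAGE_ALLOC_COSTLY_ORDER 3
#define PAGEBLOCK_ORDER         9       /* stands in for pageblock_order (THP order) */

#define NR_PCP_THP              1
#define NR_LOWORDER_PCP_LISTS   (MIGRATE_PCPTYPES * (PAGE_ALLOC_COSTLY_ORDER + 1))
#define NR_PCP_LISTS            (NR_LOWORDER_PCP_LISTS + NR_PCP_THP)

/* Orders 0..COSTLY_ORDER get one list per migratetype; THP shares one list. */
static unsigned int order_to_pindex(int migratetype, int order)
{
        if (order > PAGE_ALLOC_COSTLY_ORDER)
                return NR_LOWORDER_PCP_LISTS;
        return (MIGRATE_PCPTYPES * order) + migratetype;
}

static int pindex_to_order(unsigned int pindex)
{
        if (pindex == NR_LOWORDER_PCP_LISTS)
                return PAGEBLOCK_ORDER;
        return pindex / MIGRATE_PCPTYPES;
}

int main(void)
{
        /* 3 migratetypes x orders 0..3 = 12 low-order lists, plus 1 THP list. */
        printf("NR_PCP_LISTS = %d\n", NR_PCP_LISTS);    /* prints 13 */
        assert(order_to_pindex(1, 2) == 7);
        assert(pindex_to_order(order_to_pindex(0, PAGEBLOCK_ORDER)) == PAGEBLOCK_ORDER);
        return 0;
}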
From patchwork Wed Apr 20 09:59:03 2022
From: Mel Gorman
To: Nicolas Saenz Julienne
Cc: Marcelo Tosatti, Vlastimil Babka, Michal Hocko, LKML, Linux-MM, Mel Gorman
Subject: [PATCH 3/6] mm/page_alloc: Split out buddy removal code from rmqueue into separate helper
Date: Wed, 20 Apr 2022 10:59:03 +0100
Message-Id: <20220420095906.27349-4-mgorman@techsingularity.net>
In-Reply-To: <20220420095906.27349-1-mgorman@techsingularity.net>
References: <20220420095906.27349-1-mgorman@techsingularity.net>

This is a preparation patch to allow the buddy removal code to be reused
in a later patch.

No functional change.

Signed-off-by: Mel Gorman
---
 mm/page_alloc.c | 87 ++++++++++++++++++++++++++++---------------------
 1 file changed, 50 insertions(+), 37 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ed2deb93a758..4c1acf666056 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3622,6 +3622,46 @@ static inline void zone_statistics(struct zone *preferred_zone, struct zone *z,
 #endif
 }
 
+static __always_inline
+struct page *rmqueue_buddy(struct zone *preferred_zone, struct zone *zone,
+                          unsigned int order, unsigned int alloc_flags,
+                          int migratetype)
+{
+       struct page *page;
+       unsigned long flags;
+
+       do {
+               page = NULL;
+               spin_lock_irqsave(&zone->lock, flags);
+               /*
+                * order-0 request can reach here when the pcplist is skipped
+                * due to non-CMA allocation context. HIGHATOMIC area is
+                * reserved for high-order atomic allocation, so order-0
+                * request should skip it.
+                */
+               if (order > 0 && alloc_flags & ALLOC_HARDER) {
+                       page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
+                       if (page)
+                               trace_mm_page_alloc_zone_locked(page, order, migratetype);
+               }
+               if (!page) {
+                       page = __rmqueue(zone, order, migratetype, alloc_flags);
+                       if (!page) {
+                               spin_unlock_irqrestore(&zone->lock, flags);
+                               return NULL;
+                       }
+               }
+               __mod_zone_freepage_state(zone, -(1 << order),
+                                         get_pcppage_migratetype(page));
+               spin_unlock_irqrestore(&zone->lock, flags);
+       } while (check_new_pages(page, order));
+
+       __count_zid_vm_events(PGALLOC, page_zonenum(page), 1 << order);
+       zone_statistics(preferred_zone, zone, 1);
+
+       return page;
+}
+
 /* Remove page from the per-cpu list, caller must protect the list */
 static inline
 struct page *__rmqueue_pcplist(struct zone *zone, unsigned int order,
@@ -3702,9 +3742,14 @@ struct page *rmqueue(struct zone *preferred_zone,
                        gfp_t gfp_flags, unsigned int alloc_flags,
                        int migratetype)
 {
-       unsigned long flags;
        struct page *page;
 
+       /*
+        * We most definitely don't want callers attempting to
+        * allocate greater than order-1 page units with __GFP_NOFAIL.
+        */
+       WARN_ON_ONCE((gfp_flags & __GFP_NOFAIL) && (order > 1));
+
        if (likely(pcp_allowed_order(order))) {
                /*
                 * MIGRATE_MOVABLE pcplist could have the pages on CMA area and
@@ -3718,38 +3763,10 @@ struct page *rmqueue(struct zone *preferred_zone,
                }
        }
 
-       /*
-        * We most definitely don't want callers attempting to
-        * allocate greater than order-1 page units with __GFP_NOFAIL.
-        */
-       WARN_ON_ONCE((gfp_flags & __GFP_NOFAIL) && (order > 1));
-
-       do {
-               page = NULL;
-               spin_lock_irqsave(&zone->lock, flags);
-               /*
-                * order-0 request can reach here when the pcplist is skipped
-                * due to non-CMA allocation context. HIGHATOMIC area is
-                * reserved for high-order atomic allocation, so order-0
-                * request should skip it.
-                */
-               if (order > 0 && alloc_flags & ALLOC_HARDER) {
-                       page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
-                       if (page)
-                               trace_mm_page_alloc_zone_locked(page, order, migratetype);
-               }
-               if (!page) {
-                       page = __rmqueue(zone, order, migratetype, alloc_flags);
-                       if (!page)
-                               goto failed;
-               }
-               __mod_zone_freepage_state(zone, -(1 << order),
-                                         get_pcppage_migratetype(page));
-               spin_unlock_irqrestore(&zone->lock, flags);
-       } while (check_new_pages(page, order));
-
-       __count_zid_vm_events(PGALLOC, page_zonenum(page), 1 << order);
-       zone_statistics(preferred_zone, zone, 1);
+       page = rmqueue_buddy(preferred_zone, zone, order, alloc_flags,
+                            migratetype);
+       if (unlikely(!page))
+               return NULL;
 
 out:
        /* Separate test+clear to avoid unnecessary atomics */
@@ -3760,10 +3777,6 @@ struct page *rmqueue(struct zone *preferred_zone,
 
        VM_BUG_ON_PAGE(page && bad_range(zone, page), page);
        return page;
-
-failed:
-       spin_unlock_irqrestore(&zone->lock, flags);
-       return NULL;
 }
 
 #ifdef CONFIG_FAIL_PAGE_ALLOC

From patchwork Wed Apr 20 09:59:04 2022
From: Mel Gorman
To: Nicolas Saenz Julienne
Cc: Marcelo Tosatti, Vlastimil Babka, Michal Hocko, LKML, Linux-MM, Mel Gorman
Subject: [PATCH 4/6] mm/page_alloc: Remove unnecessary page == NULL check in rmqueue
Date: Wed, 20 Apr 2022 10:59:04 +0100
Message-Id: <20220420095906.27349-5-mgorman@techsingularity.net>
In-Reply-To: <20220420095906.27349-1-mgorman@techsingularity.net>
References: <20220420095906.27349-1-mgorman@techsingularity.net>
The VM_BUG_ON check for a valid page can be avoided with a simple change
in the flow. The ZONE_BOOSTED_WATERMARK is unlikely in general and even
more unlikely if the page allocation failed so mark the branch unlikely.

Signed-off-by: Mel Gorman
---
 mm/page_alloc.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4c1acf666056..dc0fdeb3795c 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3765,17 +3765,18 @@ struct page *rmqueue(struct zone *preferred_zone,
        page = rmqueue_buddy(preferred_zone, zone, order, alloc_flags,
                             migratetype);
-       if (unlikely(!page))
-               return NULL;
 
 out:
        /* Separate test+clear to avoid unnecessary atomics */
-       if (test_bit(ZONE_BOOSTED_WATERMARK, &zone->flags)) {
+       if (unlikely(test_bit(ZONE_BOOSTED_WATERMARK, &zone->flags))) {
                clear_bit(ZONE_BOOSTED_WATERMARK, &zone->flags);
                wakeup_kswapd(zone, 0, 0, zone_idx(zone));
        }
 
-       VM_BUG_ON_PAGE(page && bad_range(zone, page), page);
+       if (unlikely(!page))
+               return NULL;
+
+       VM_BUG_ON_PAGE(bad_range(zone, page), page);
        return page;
 }
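[Editorial aside: the "Separate test+clear to avoid unnecessary atomics"
comment in the hunk above refers to a common pattern: read the flag
cheaply first and only pay for the atomic clear in the (unlikely) case it
was set. The kernel uses test_bit()/clear_bit(); the hedged userspace
sketch below mimics the idea with C11 atomics and a __builtin_expect-based
unlikely() stand-in, so names and types here are illustrative only.]

#include <stdatomic.h>
#include <stdio.h>

/* Userspace stand-ins for the kernel's branch-prediction annotations. */
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

#define ZONE_BOOSTED_WATERMARK 0

static _Atomic unsigned long zone_flags;

/*
 * Read the flag first; only perform the atomic clear when it was set,
 * which the branch hint marks as the unlikely case.
 */
static void maybe_wake_kswapd(void)
{
        if (unlikely(atomic_load(&zone_flags) & (1UL << ZONE_BOOSTED_WATERMARK))) {
                atomic_fetch_and(&zone_flags, ~(1UL << ZONE_BOOSTED_WATERMARK));
                puts("boosted watermark was set: wake kswapd");
        }
}

int main(void)
{
        maybe_wake_kswapd();            /* flag clear: no atomic write at all */
        atomic_fetch_or(&zone_flags, 1UL << ZONE_BOOSTED_WATERMARK);
        maybe_wake_kswapd();            /* flag set: clear it and "wake" */
        return 0;
}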
From patchwork Wed Apr 20 09:59:05 2022
From: Mel Gorman
To: Nicolas Saenz Julienne
Cc: Marcelo Tosatti, Vlastimil Babka, Michal Hocko, LKML, Linux-MM, Mel Gorman
Subject: [PATCH 5/6] mm/page_alloc: Protect PCP lists with a spinlock
Date: Wed, 20 Apr 2022 10:59:05 +0100
Message-Id: <20220420095906.27349-6-mgorman@techsingularity.net>
In-Reply-To: <20220420095906.27349-1-mgorman@techsingularity.net>
References: <20220420095906.27349-1-mgorman@techsingularity.net>

Currently the PCP lists are protected by using local_lock_irqsave to
prevent migration and IRQ reentrancy but this is inconvenient. Remote
draining of the lists is impossible, a workqueue is required, and every
task allocation/free must disable then enable interrupts, which is
expensive.

As preparation for dealing with both of those problems, protect the lists
with a spinlock. The IRQ-unsafe version of the lock is used because IRQs
are already disabled by local_lock_irqsave. spin_trylock is used in
preparation for a time when local_lock could be used instead of
local_lock_irqsave.

The per_cpu_pages still fits within the same number of cache lines after
this patch relative to before the series.

struct per_cpu_pages {
        spinlock_t                 lock;                /*     0     4 */
        int                        count;               /*     4     4 */
        int                        high;                /*     8     4 */
        int                        batch;               /*    12     4 */
        short int                  free_factor;         /*    16     2 */
        short int                  expire;              /*    18     2 */

        /* XXX 4 bytes hole, try to pack */

        struct list_head           lists[13];           /*    24   208 */

        /* size: 256, cachelines: 4, members: 7 */
        /* sum members: 228, holes: 1, sum holes: 4 */
        /* padding: 24 */
} __attribute__((__aligned__(64)));

Signed-off-by: Mel Gorman
---
 include/linux/mmzone.h |   1 +
 mm/page_alloc.c        | 155 +++++++++++++++++++++++++++++++++++------
 2 files changed, 136 insertions(+), 20 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index abe530748de6..8b5757735428 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -385,6 +385,7 @@ enum zone_watermarks {
 
 /* Fields and list protected by pagesets local_lock in page_alloc.c */
 struct per_cpu_pages {
+       spinlock_t lock;        /* Protects lists field */
        int count;              /* number of pages in the list */
        int high;               /* high watermark, emptying needed */
        int batch;              /* chunk size for buddy add/remove */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index dc0fdeb3795c..813c84b67c65 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -132,6 +132,17 @@ static DEFINE_PER_CPU(struct pagesets, pagesets) __maybe_unused = {
        .lock = INIT_LOCAL_LOCK(lock),
 };
 
+#ifdef CONFIG_SMP
+/* On SMP, spin_trylock is sufficient protection */
+#define pcp_trylock_prepare(flags)     do { } while (0)
+#define pcp_trylock_finish(flag)       do { } while (0)
+#else
+
+/* UP spin_trylock always succeeds so disable IRQs to prevent re-entrancy. */
+#define pcp_trylock_prepare(flags)     local_irq_save(flags)
+#define pcp_trylock_finish(flags)      local_irq_restore(flags)
+#endif
+
 #ifdef CONFIG_USE_PERCPU_NUMA_NODE_ID
 DEFINE_PER_CPU(int, numa_node);
 EXPORT_PER_CPU_SYMBOL(numa_node);
@@ -3082,15 +3093,22 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
  */
 void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp)
 {
-       unsigned long flags;
        int to_drain, batch;
 
-       local_lock_irqsave(&pagesets.lock, flags);
        batch = READ_ONCE(pcp->batch);
        to_drain = min(pcp->count, batch);
-       if (to_drain > 0)
+       if (to_drain > 0) {
+               unsigned long flags;
+
+               /* free_pcppages_bulk expects IRQs disabled for zone->lock */
+               local_irq_save(flags);
+
+               spin_lock(&pcp->lock);
                free_pcppages_bulk(zone, to_drain, pcp, 0);
-       local_unlock_irqrestore(&pagesets.lock, flags);
+               spin_unlock(&pcp->lock);
+
+               local_irq_restore(flags);
+       }
 }
 #endif
 
@@ -3103,16 +3121,21 @@ void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp)
  */
 static void drain_pages_zone(unsigned int cpu, struct zone *zone)
 {
-       unsigned long flags;
        struct per_cpu_pages *pcp;
 
-       local_lock_irqsave(&pagesets.lock, flags);
-
        pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
-       if (pcp->count)
+       if (pcp->count) {
+               unsigned long flags;
+
+               /* free_pcppages_bulk expects IRQs disabled for zone->lock */
+               local_irq_save(flags);
+
+               spin_lock(&pcp->lock);
                free_pcppages_bulk(zone, pcp->count, pcp, 0);
+               spin_unlock(&pcp->lock);
 
-       local_unlock_irqrestore(&pagesets.lock, flags);
+               local_irq_restore(flags);
+       }
 }
 
 /*
@@ -3380,18 +3403,30 @@ static int nr_pcp_high(struct per_cpu_pages *pcp, struct zone *zone,
        return min(READ_ONCE(pcp->batch) << 2, high);
 }
 
-static void free_unref_page_commit(struct page *page, int migratetype,
-                                  unsigned int order)
+/* Returns true if the page was committed to the per-cpu list. */
+static bool free_unref_page_commit(struct page *page, int migratetype,
+                                  unsigned int order, bool locked)
 {
        struct zone *zone = page_zone(page);
        struct per_cpu_pages *pcp;
        int high;
        int pindex;
        bool free_high;
+       unsigned long __maybe_unused UP_flags;
 
        __count_vm_event(PGFREE);
        pcp = this_cpu_ptr(zone->per_cpu_pageset);
        pindex = order_to_pindex(migratetype, order);
+
+       if (!locked) {
+               /* Protect against a parallel drain. */
+               pcp_trylock_prepare(UP_flags);
+               if (!spin_trylock(&pcp->lock)) {
+                       pcp_trylock_finish(UP_flags);
+                       return false;
+               }
+       }
+
        list_add(&page->pcp_list, &pcp->lists[pindex]);
        pcp->count += 1 << order;
 
@@ -3409,6 +3444,13 @@ static void free_unref_page_commit(struct page *page, int migratetype,
                free_pcppages_bulk(zone, nr_pcp_free(pcp, high, batch, free_high),
                                   pcp, pindex);
        }
+
+       if (!locked) {
+               spin_unlock(&pcp->lock);
+               pcp_trylock_finish(UP_flags);
+       }
+
+       return true;
 }
 
 /*
@@ -3419,6 +3461,7 @@ void free_unref_page(struct page *page, unsigned int order)
        unsigned long flags;
        unsigned long pfn = page_to_pfn(page);
        int migratetype;
+       bool freed_pcp = false;
 
        if (!free_unref_page_prepare(page, pfn, order))
                return;
@@ -3440,8 +3483,11 @@ void free_unref_page(struct page *page, unsigned int order)
        }
 
        local_lock_irqsave(&pagesets.lock, flags);
-       free_unref_page_commit(page, migratetype, order);
+       freed_pcp = free_unref_page_commit(page, migratetype, order, false);
        local_unlock_irqrestore(&pagesets.lock, flags);
+
+       if (unlikely(!freed_pcp))
+               free_one_page(page_zone(page), page, pfn, order, migratetype, FPI_NONE);
 }
 
 /*
@@ -3450,10 +3496,19 @@ void free_unref_page_list(struct list_head *list)
 {
        struct page *page, *next;
+       struct per_cpu_pages *pcp;
+       struct zone *locked_zone;
        unsigned long flags;
        int batch_count = 0;
        int migratetype;
 
+       /*
+        * An empty list is possible. Check early so that the later
+        * lru_to_page() does not potentially read garbage.
+        */
+       if (list_empty(list))
+               return;
+
        /* Prepare pages for freeing */
        list_for_each_entry_safe(page, next, list, lru) {
                unsigned long pfn = page_to_pfn(page);
@@ -3474,8 +3529,26 @@ void free_unref_page_list(struct list_head *list)
                }
        }
 
+       VM_BUG_ON(in_hardirq());
+
        local_lock_irqsave(&pagesets.lock, flags);
+
+       page = lru_to_page(list);
+       locked_zone = page_zone(page);
+       pcp = this_cpu_ptr(locked_zone->per_cpu_pageset);
+       spin_lock(&pcp->lock);
+
        list_for_each_entry_safe(page, next, list, lru) {
+               struct zone *zone = page_zone(page);
+
+               /* Different zone, different pcp lock. */
+               if (zone != locked_zone) {
+                       spin_unlock(&pcp->lock);
+                       locked_zone = zone;
+                       pcp = this_cpu_ptr(zone->per_cpu_pageset);
+                       spin_lock(&pcp->lock);
+               }
+
                /*
                 * Non-isolated types over MIGRATE_PCPTYPES get added
                 * to the MIGRATE_MOVABLE pcp list.
@@ -3485,18 +3558,25 @@ void free_unref_page_list(struct list_head *list)
                        migratetype = MIGRATE_MOVABLE;
 
                trace_mm_page_free_batched(page);
-               free_unref_page_commit(page, migratetype, 0);
+
+               /* True is dead code at the moment due to local_lock_irqsave. */
+               if (unlikely(!free_unref_page_commit(page, migratetype, 0, true)))
+                       free_one_page(page_zone(page), page, page_to_pfn(page), 0, migratetype, FPI_NONE);
 
                /*
                 * Guard against excessive IRQ disabled times when we get
                 * a large list of pages to free.
                 */
                if (++batch_count == SWAP_CLUSTER_MAX) {
+                       spin_unlock(&pcp->lock);
                        local_unlock_irqrestore(&pagesets.lock, flags);
                        batch_count = 0;
                        local_lock_irqsave(&pagesets.lock, flags);
+                       pcp = this_cpu_ptr(locked_zone->per_cpu_pageset);
+                       spin_lock(&pcp->lock);
                }
        }
 
+       spin_unlock(&pcp->lock);
        local_unlock_irqrestore(&pagesets.lock, flags);
 }
 
@@ -3668,9 +3748,30 @@ struct page *__rmqueue_pcplist(struct zone *zone, unsigned int order,
                        int migratetype,
                        unsigned int alloc_flags,
                        struct per_cpu_pages *pcp,
-                       struct list_head *list)
+                       struct list_head *list,
+                       bool locked)
 {
        struct page *page;
+       unsigned long __maybe_unused UP_flags;
+
+       /*
+        * spin_trylock is not necessary right now due to due to
+        * local_lock_irqsave and is a preparation step for
+        * a conversion to local_lock using the trylock to prevent
+        * IRQ re-entrancy. If pcp->lock cannot be acquired, the caller
+        * uses rmqueue_buddy.
+        *
+        * TODO: Convert local_lock_irqsave to local_lock. Care
+        * is needed as the type of local_lock would need a
+        * PREEMPT_RT version due to threaded IRQs.
+        */
+       if (unlikely(!locked)) {
+               pcp_trylock_prepare(UP_flags);
+               if (!spin_trylock(&pcp->lock)) {
+                       pcp_trylock_finish(UP_flags);
+                       return NULL;
+               }
+       }
 
        do {
                if (list_empty(list)) {
@@ -3691,8 +3792,10 @@ struct page *__rmqueue_pcplist(struct zone *zone, unsigned int order,
                                        migratetype, alloc_flags);
                        pcp->count += alloced << order;
-                       if (unlikely(list_empty(list)))
-                               return NULL;
+                       if (unlikely(list_empty(list))) {
+                               page = NULL;
+                               goto out;
+                       }
                }
 
                page = list_first_entry(list, struct page, lru);
@@ -3700,6 +3803,12 @@ struct page *__rmqueue_pcplist(struct zone *zone, unsigned int order,
                pcp->count -= 1 << order;
        } while (check_new_pcp(page, order));
 
+out:
+       if (!locked) {
+               spin_unlock(&pcp->lock);
+               pcp_trylock_finish(UP_flags);
+       }
+
        return page;
 }
 
@@ -3724,7 +3833,7 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
        pcp = this_cpu_ptr(zone->per_cpu_pageset);
        pcp->free_factor >>= 1;
        list = &pcp->lists[order_to_pindex(migratetype, order)];
-       page = __rmqueue_pcplist(zone, order, migratetype, alloc_flags, pcp, list);
+       page = __rmqueue_pcplist(zone, order, migratetype, alloc_flags, pcp, list, false);
        local_unlock_irqrestore(&pagesets.lock, flags);
        if (page) {
                __count_zid_vm_events(PGALLOC, page_zonenum(page), 1);
@@ -3759,7 +3868,8 @@ struct page *rmqueue(struct zone *preferred_zone,
                                migratetype != MIGRATE_MOVABLE) {
                        page = rmqueue_pcplist(preferred_zone, zone, order,
                                        gfp_flags, migratetype, alloc_flags);
-                       goto out;
+                       if (likely(page))
+                               goto out;
                }
        }
 
@@ -5326,6 +5436,7 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
        local_lock_irqsave(&pagesets.lock, flags);
        pcp = this_cpu_ptr(zone->per_cpu_pageset);
        pcp_list = &pcp->lists[order_to_pindex(ac.migratetype, 0)];
+       spin_lock(&pcp->lock);
 
        while (nr_populated < nr_pages) {
@@ -5336,11 +5447,13 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
                }
 
                page = __rmqueue_pcplist(zone, 0, ac.migratetype, alloc_flags,
-                                               pcp, pcp_list);
+                                               pcp, pcp_list, true);
                if (unlikely(!page)) {
                        /* Try and get at least one page */
-                       if (!nr_populated)
+                       if (!nr_populated) {
+                               spin_unlock(&pcp->lock);
                                goto failed_irq;
+                       }
                        break;
                }
                nr_account++;
@@ -5353,6 +5466,7 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
                nr_populated++;
        }
 
+       spin_unlock(&pcp->lock);
        local_unlock_irqrestore(&pagesets.lock, flags);
 
        __count_zid_vm_events(PGALLOC, zone_idx(zone), nr_account);
@@ -6992,6 +7106,7 @@ static void per_cpu_pages_init(struct per_cpu_pages *pcp, struct per_cpu_zonestat *pzstats)
        memset(pcp, 0, sizeof(*pcp));
        memset(pzstats, 0, sizeof(*pzstats));
 
+       spin_lock_init(&pcp->lock);
        for (pindex = 0; pindex < NR_PCP_LISTS; pindex++)
                INIT_LIST_HEAD(&pcp->lists[pindex]);
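[Editorial aside: the core pattern this patch introduces is an
opportunistic trylock on the per-cpu list with a fallback path when the
lock is contended, for example by a drain. The userspace sketch below uses
pthread_mutex_trylock as a stand-in for spin_trylock; the function names
and messages are illustrative only, not the kernel code.]

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t pcp_lock = PTHREAD_MUTEX_INITIALIZER;

/*
 * Try to take the per-cpu list lock opportunistically; if somebody else
 * (e.g. a remote drain) holds it, fall back to the slower shared path
 * instead of spinning.
 */
static void free_page_fastpath_or_fallback(int page_id)
{
        if (pthread_mutex_trylock(&pcp_lock) == 0) {
                /* Fast path: we own the per-cpu list, add the page to it. */
                printf("page %d: freed to per-cpu list\n", page_id);
                pthread_mutex_unlock(&pcp_lock);
        } else {
                /*
                 * Contended: free straight to the shared allocator rather
                 * than waiting for the lock.
                 */
                printf("page %d: freed to buddy allocator\n", page_id);
        }
}

int main(void)
{
        free_page_fastpath_or_fallback(1);
        return 0;
}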
From patchwork Wed Apr 20 09:59:06 2022
From: Mel Gorman
To: Nicolas Saenz Julienne
Cc: Marcelo Tosatti, Vlastimil Babka, Michal Hocko, LKML, Linux-MM, Mel Gorman
Subject: [PATCH 6/6] mm/page_alloc: Remotely drain per-cpu lists
Date: Wed, 20 Apr 2022 10:59:06 +0100
Message-Id: <20220420095906.27349-7-mgorman@techsingularity.net>
In-Reply-To: <20220420095906.27349-1-mgorman@techsingularity.net>
References: <20220420095906.27349-1-mgorman@techsingularity.net>

From: Nicolas Saenz Julienne

Some setups, notably NOHZ_FULL CPUs, are too busy to handle the per-cpu
drain work queued by __drain_all_pages(). So introduce a new mechanism to
remotely drain the per-cpu lists. It is made possible by remotely locking
'struct per_cpu_pages' new per-cpu spinlocks. A benefit of this new scheme
is that drain operations are now migration safe.

There was no observed performance degradation vs. the previous scheme.
Both netperf and hackbench were run in parallel while triggering the
__drain_all_pages(NULL, true) code path around ~100 times per second. The
new scheme performs a bit better (~5%), although the important point here
is that there are no performance regressions vs. the previous mechanism.
Per-cpu lists draining happens only in slow paths.

Link: https://lore.kernel.org/r/20211103170512.2745765-4-nsaenzju@redhat.com
Signed-off-by: Nicolas Saenz Julienne
Signed-off-by: Mel Gorman
---
 mm/page_alloc.c | 66 +++++++++----------------------------------------
 1 file changed, 11 insertions(+), 55 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 813c84b67c65..17d11eb0413e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -161,13 +161,7 @@ DEFINE_PER_CPU(int, _numa_mem_);       /* Kernel "local memory" node */
 EXPORT_PER_CPU_SYMBOL(_numa_mem_);
 #endif
 
-/* work_structs for global per-cpu drains */
-struct pcpu_drain {
-       struct zone *zone;
-       struct work_struct work;
-};
 static DEFINE_MUTEX(pcpu_drain_mutex);
-static DEFINE_PER_CPU(struct pcpu_drain, pcpu_drain);
 
 #ifdef CONFIG_GCC_PLUGIN_LATENT_ENTROPY
 volatile unsigned long latent_entropy __latent_entropy;
@@ -3087,9 +3081,6 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
  * Called from the vmstat counter updater to drain pagesets of this
  * currently executing processor on remote nodes after they have
  * expired.
- *
- * Note that this function must be called with the thread pinned to
- * a single processor.
  */
 void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp)
 {
@@ -3114,10 +3105,6 @@ void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp)
 
 /*
  * Drain pcplists of the indicated processor and zone.
- *
- * The processor must either be the current processor and the
- * thread pinned to the current processor or a processor that
- * is not online.
  */
 static void drain_pages_zone(unsigned int cpu, struct zone *zone)
 {
@@ -3140,10 +3127,6 @@ static void drain_pages_zone(unsigned int cpu, struct zone *zone)
 
 /*
  * Drain pcplists of all zones on the indicated processor.
- *
- * The processor must either be the current processor and the
- * thread pinned to the current processor or a processor that
- * is not online.
  */
 static void drain_pages(unsigned int cpu)
 {
@@ -3156,9 +3139,6 @@ static void drain_pages(unsigned int cpu)
 
 /*
  * Spill all of this CPU's per-cpu pages back into the buddy allocator.
- *
- * The CPU has to be pinned. When zone parameter is non-NULL, spill just
- * the single zone's pages.
  */
 void drain_local_pages(struct zone *zone)
 {
@@ -3170,24 +3150,6 @@ void drain_local_pages(struct zone *zone)
                drain_pages(cpu);
 }
 
-static void drain_local_pages_wq(struct work_struct *work)
-{
-       struct pcpu_drain *drain;
-
-       drain = container_of(work, struct pcpu_drain, work);
-
-       /*
-        * drain_all_pages doesn't use proper cpu hotplug protection so
-        * we can race with cpu offline when the WQ can move this from
-        * a cpu pinned worker to an unbound one. We can operate on a different
-        * cpu which is alright but we also have to make sure to not move to
-        * a different one.
-        */
-       migrate_disable();
-       drain_local_pages(drain->zone);
-       migrate_enable();
-}
-
 /*
  * The implementation of drain_all_pages(), exposing an extra parameter to
  * drain on all cpus.
@@ -3208,13 +3170,6 @@ static void __drain_all_pages(struct zone *zone, bool force_all_cpus)
         */
        static cpumask_t cpus_with_pcps;
 
-       /*
-        * Make sure nobody triggers this path before mm_percpu_wq is fully
-        * initialized.
-        */
-       if (WARN_ON_ONCE(!mm_percpu_wq))
-               return;
-
        /*
         * Do not drain if one is already in progress unless it's specific to
         * a zone. Such callers are primarily CMA and memory hotplug and need
@@ -3264,14 +3219,12 @@ static void __drain_all_pages(struct zone *zone, bool force_all_cpus)
        }
 
        for_each_cpu(cpu, &cpus_with_pcps) {
-               struct pcpu_drain *drain = per_cpu_ptr(&pcpu_drain, cpu);
-
-               drain->zone = zone;
-               INIT_WORK(&drain->work, drain_local_pages_wq);
-               queue_work_on(cpu, mm_percpu_wq, &drain->work);
+               if (zone) {
+                       drain_pages_zone(cpu, zone);
+               } else {
+                       drain_pages(cpu);
+               }
        }
-       for_each_cpu(cpu, &cpus_with_pcps)
-               flush_work(&per_cpu_ptr(&pcpu_drain, cpu)->work);
 
        mutex_unlock(&pcpu_drain_mutex);
 }
@@ -3280,8 +3233,6 @@ static void __drain_all_pages(struct zone *zone, bool force_all_cpus)
  * Spill all the per-cpu pages from all CPUs back into the buddy allocator.
  *
  * When zone parameter is non-NULL, spill just the single zone's pages.
- *
- * Note that this can be extremely slow as the draining happens in a workqueue.
  */
 void drain_all_pages(struct zone *zone)
 {
@@ -3559,7 +3510,12 @@ void free_unref_page_list(struct list_head *list)
 
                trace_mm_page_free_batched(page);
 
-               /* True is dead code at the moment due to local_lock_irqsave. */
+               /*
+                * If there is a parallel drain in progress, free to the buddy
+                * allocator directly. This is expensive as the zone lock will
+                * be acquired multiple times but if a drain is in progress
+                * then an expensive operation is already taking place.
+                */
                if (unlikely(!free_unref_page_commit(page, migratetype, 0, true)))
                        free_one_page(page_zone(page), page, page_to_pfn(page), 0, migratetype, FPI_NONE);
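[Editorial aside: with every per-cpu list protected by its own lock (patch
5), the remote drain in this final patch becomes a simple loop over CPUs
that takes each remote lock in turn, instead of queueing work that each
possibly nohz_full CPU must run locally. The sketch below models that idea
in userspace; struct fake_pcp, the counts and NR_CPUS are illustrative,
not the kernel's data structures.]

#include <pthread.h>
#include <stdio.h>

#define NR_CPUS 4

/* One "per-cpu" list with its own lock, standing in for struct per_cpu_pages. */
struct fake_pcp {
        pthread_mutex_t lock;
        int count;              /* pages sitting on this CPU's list */
};

static struct fake_pcp pcp[NR_CPUS];

/* Remote drain: any caller may empty any CPU's list under that CPU's lock. */
static void drain_all_pages_remote(void)
{
        for (int cpu = 0; cpu < NR_CPUS; cpu++) {
                pthread_mutex_lock(&pcp[cpu].lock);
                if (pcp[cpu].count) {
                        printf("drained %d pages from cpu %d\n",
                               pcp[cpu].count, cpu);
                        pcp[cpu].count = 0;
                }
                pthread_mutex_unlock(&pcp[cpu].lock);
        }
}

int main(void)
{
        for (int cpu = 0; cpu < NR_CPUS; cpu++)
                pthread_mutex_init(&pcp[cpu].lock, NULL);

        pcp[1].count = 32;
        pcp[3].count = 7;
        drain_all_pages_remote();
        return 0;
}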