From patchwork Tue Nov 29 15:16:58 2022
X-Patchwork-Submitter: Mel Gorman
X-Patchwork-Id: 13058710
From: Mel Gorman
To: Linux-MM
Cc: Andrew Morton, Michal Hocko, NeilBrown, Thierry Reding,
    Matthew Wilcox, Vlastimil Babka, LKML, Mel Gorman
Subject: [PATCH 3/6] mm/page_alloc: Explicitly record high-order atomic
 allocations in alloc_flags
Date: Tue, 29 Nov 2022 15:16:58 +0000
Message-Id: <20221129151701.23261-4-mgorman@techsingularity.net>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20221129151701.23261-1-mgorman@techsingularity.net>
References: <20221129151701.23261-1-mgorman@techsingularity.net>
A high-order ALLOC_HARDER allocation is assumed to be atomic. While that
is accurate, it changes later in the series. In preparation, explicitly
record high-order atomic allocations in gfp_to_alloc_flags().

Signed-off-by: Mel Gorman
---
 mm/internal.h   |  1 +
 mm/page_alloc.c | 19 +++++++++++++------
 2 files changed, 14 insertions(+), 6 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index d503e57a57a1..9a9d9b5ee87f 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -754,6 +754,7 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
 #else
 #define ALLOC_NOFRAGMENT	  0x0
 #endif
+#define ALLOC_HIGHATOMIC	0x200 /* Allows access to MIGRATE_HIGHATOMIC */
 #define ALLOC_KSWAPD		0x800 /* allow waking of kswapd, __GFP_KSWAPD_RECLAIM set */
 
 enum ttu_flags;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index da746e9eb2cf..e2b65767dda0 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3710,7 +3710,7 @@ struct page *rmqueue_buddy(struct zone *preferred_zone, struct zone *zone,
 		 * reserved for high-order atomic allocation, so order-0
 		 * request should skip it.
 		 */
-		if (order > 0 && alloc_flags & ALLOC_HARDER)
+		if (alloc_flags & ALLOC_HIGHATOMIC)
 			page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
 		if (!page) {
 			page = __rmqueue(zone, order, migratetype, alloc_flags);
@@ -4028,8 +4028,10 @@ bool __zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark,
 			return true;
 		}
 #endif
-		if (alloc_harder && !free_area_empty(area, MIGRATE_HIGHATOMIC))
+		if ((alloc_flags & ALLOC_HIGHATOMIC) &&
+		    !free_area_empty(area, MIGRATE_HIGHATOMIC)) {
 			return true;
+		}
 	}
 	return false;
 }
@@ -4291,7 +4293,7 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 			 * If this is a high-order atomic allocation then check
 			 * if the pageblock should be reserved for the future
 			 */
-			if (unlikely(order && (alloc_flags & ALLOC_HARDER)))
+			if (unlikely(alloc_flags & ALLOC_HIGHATOMIC))
 				reserve_highatomic_pageblock(page, zone, order);
 
 			return page;
@@ -4818,7 +4820,7 @@ static void wake_all_kswapds(unsigned int order, gfp_t gfp_mask,
 }
 
 static inline unsigned int
-gfp_to_alloc_flags(gfp_t gfp_mask)
+gfp_to_alloc_flags(gfp_t gfp_mask, unsigned int order)
 {
 	unsigned int alloc_flags = ALLOC_WMARK_MIN | ALLOC_CPUSET;
 
@@ -4844,8 +4846,13 @@ gfp_to_alloc_flags(gfp_t gfp_mask)
 	 * Not worth trying to allocate harder for __GFP_NOMEMALLOC even
 	 * if it can't schedule.
 	 */
-	if (!(gfp_mask & __GFP_NOMEMALLOC))
+	if (!(gfp_mask & __GFP_NOMEMALLOC)) {
 		alloc_flags |= ALLOC_HARDER;
+
+		if (order > 0)
+			alloc_flags |= ALLOC_HIGHATOMIC;
+	}
+
 	/*
 	 * Ignore cpuset mems for GFP_ATOMIC rather than fail, see the
 	 * comment for __cpuset_node_allowed().
@@ -5053,7 +5060,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	 * kswapd needs to be woken up, and to avoid the cost of setting up
 	 * alloc_flags precisely. So we do that now.
 	 */
-	alloc_flags = gfp_to_alloc_flags(gfp_mask);
+	alloc_flags = gfp_to_alloc_flags(gfp_mask, order);
 
 	/*
 	 * We need to recalculate the starting point for the zonelist iterator
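
For readers following along outside the kernel tree, the fragment below is a
minimal standalone userspace sketch (not kernel code) of the decision this
patch adds to gfp_to_alloc_flags(): only a high-order request that is allowed
to allocate harder gets ALLOC_HIGHATOMIC, and with it access to the
MIGRATE_HIGHATOMIC reserve. Apart from ALLOC_HIGHATOMIC (0x200, from
mm/internal.h in this patch) and ALLOC_KSWAPD (0x800), the flag values and the
SKETCH_GFP_* names are made-up placeholders for illustration, not the real
kernel definitions.

/* highatomic_sketch.c - build with: cc -o highatomic_sketch highatomic_sketch.c */
#include <stdio.h>

#define ALLOC_WMARK_MIN		0x00	/* placeholder value */
#define ALLOC_CPUSET		0x40	/* placeholder value */
#define ALLOC_HARDER		0x10	/* placeholder value */
#define ALLOC_HIGHATOMIC	0x200	/* value added by this patch */
#define ALLOC_KSWAPD		0x800

#define SKETCH_GFP_ATOMIC	0x1	/* stands in for "cannot direct reclaim" */
#define SKETCH_GFP_NOMEMALLOC	0x2	/* stands in for __GFP_NOMEMALLOC */

/*
 * Mirrors the new shape of gfp_to_alloc_flags(): the allocation order is
 * passed in so that high-order atomic requests can be tagged explicitly.
 */
static unsigned int gfp_to_alloc_flags_sketch(unsigned int gfp_mask,
					      unsigned int order)
{
	unsigned int alloc_flags = ALLOC_WMARK_MIN | ALLOC_CPUSET;

	if (gfp_mask & SKETCH_GFP_ATOMIC) {
		alloc_flags |= ALLOC_KSWAPD;

		if (!(gfp_mask & SKETCH_GFP_NOMEMALLOC)) {
			alloc_flags |= ALLOC_HARDER;

			/* Only high-order atomic allocations may dip into
			 * the MIGRATE_HIGHATOMIC reserve. */
			if (order > 0)
				alloc_flags |= ALLOC_HIGHATOMIC;
		}
	}

	return alloc_flags;
}

int main(void)
{
	printf("order-0 atomic      -> highatomic=%d\n",
	       !!(gfp_to_alloc_flags_sketch(SKETCH_GFP_ATOMIC, 0) & ALLOC_HIGHATOMIC));
	printf("order-3 atomic      -> highatomic=%d\n",
	       !!(gfp_to_alloc_flags_sketch(SKETCH_GFP_ATOMIC, 3) & ALLOC_HIGHATOMIC));
	printf("order-3 nomemalloc  -> highatomic=%d\n",
	       !!(gfp_to_alloc_flags_sketch(SKETCH_GFP_ATOMIC | SKETCH_GFP_NOMEMALLOC, 3)
		  & ALLOC_HIGHATOMIC));
	return 0;
}

The sketch prints highatomic=0, 1, 0 for the three cases, which is the point
of the change: rmqueue_buddy(), __zone_watermark_ok() and
reserve_highatomic_pageblock() can then test ALLOC_HIGHATOMIC directly instead
of re-deriving "order > 0 && (alloc_flags & ALLOC_HARDER)" at each site.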