From patchwork Mon Jan 9 15:16:25 2023
X-Patchwork-Id: 13093708
From: Mel Gorman <mgorman@techsingularity.net>
To: Linux-MM
Cc: Andrew Morton, Michal Hocko, NeilBrown, Thierry Reding, Matthew Wilcox,
    Vlastimil Babka, LKML, Mel Gorman
Subject: [PATCH 1/7] mm/page_alloc: Rename ALLOC_HIGH to ALLOC_MIN_RESERVE
Date: Mon, 9 Jan 2023 15:16:25 +0000
Message-Id: <20230109151631.24923-2-mgorman@techsingularity.net>
In-Reply-To: <20230109151631.24923-1-mgorman@techsingularity.net>
References: <20230109151631.24923-1-mgorman@techsingularity.net>
__GFP_HIGH aliases ALLOC_HIGH, but the name does not really hint at what it
means. As ALLOC_HIGH is internal to the allocator, rename it to
ALLOC_MIN_RESERVE to document that the min reserves can be depleted.

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka
Acked-by: Michal Hocko
---
 mm/internal.h   | 4 +++-
 mm/page_alloc.c | 8 ++++----
 2 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index bcf75a8b032d..403e4386626d 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -736,7 +736,9 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
 #endif

 #define ALLOC_HARDER		 0x10 /* try to alloc harder */
-#define ALLOC_HIGH		 0x20 /* __GFP_HIGH set */
+#define ALLOC_MIN_RESERVE	 0x20 /* __GFP_HIGH set. Allow access to 50%
+				       * of the min watermark.
+				       */
 #define ALLOC_CPUSET		 0x40 /* check for correct cpuset */
 #define ALLOC_CMA		 0x80 /* allow allocations from CMA areas */
 #ifdef CONFIG_ZONE_DMA32
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0745aedebb37..244c1e675dc8 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3976,7 +3976,7 @@ bool __zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark,
 	/* free_pages may go negative - that's OK */
 	free_pages -= __zone_watermark_unusable_free(z, order, alloc_flags);

-	if (alloc_flags & ALLOC_HIGH)
+	if (alloc_flags & ALLOC_MIN_RESERVE)
 		min -= min / 2;

 	if (unlikely(alloc_harder)) {
@@ -4818,18 +4818,18 @@ gfp_to_alloc_flags(gfp_t gfp_mask)
 	unsigned int alloc_flags = ALLOC_WMARK_MIN | ALLOC_CPUSET;

 	/*
-	 * __GFP_HIGH is assumed to be the same as ALLOC_HIGH
+	 * __GFP_HIGH is assumed to be the same as ALLOC_MIN_RESERVE
 	 * and __GFP_KSWAPD_RECLAIM is assumed to be the same as ALLOC_KSWAPD
 	 * to save two branches.
 	 */
-	BUILD_BUG_ON(__GFP_HIGH != (__force gfp_t) ALLOC_HIGH);
+	BUILD_BUG_ON(__GFP_HIGH != (__force gfp_t) ALLOC_MIN_RESERVE);
 	BUILD_BUG_ON(__GFP_KSWAPD_RECLAIM != (__force gfp_t) ALLOC_KSWAPD);

 	/*
 	 * The caller may dip into page reserves a bit more if the caller
 	 * cannot run direct reclaim, or if the caller has realtime scheduling
 	 * policy or is asking for __GFP_HIGH memory. GFP_ATOMIC requests will
-	 * set both ALLOC_HARDER (__GFP_ATOMIC) and ALLOC_HIGH (__GFP_HIGH).
+	 * set both ALLOC_HARDER (__GFP_ATOMIC) and ALLOC_MIN_RESERVE (__GFP_HIGH).
 	 */
 	alloc_flags |= (__force int)
 		(gfp_mask & (__GFP_HIGH | __GFP_KSWAPD_RECLAIM));

From patchwork Mon Jan 9 15:16:26 2023
X-Patchwork-Id: 13093709
From: Mel Gorman <mgorman@techsingularity.net>
To: Linux-MM
Cc: Andrew Morton, Michal Hocko, NeilBrown, Thierry Reding, Matthew Wilcox,
    Vlastimil Babka, LKML, Mel Gorman
Subject: [PATCH 2/7] mm/page_alloc: Treat RT tasks similar to __GFP_HIGH
Date: Mon, 9 Jan 2023 15:16:26 +0000
Message-Id: <20230109151631.24923-3-mgorman@techsingularity.net>
In-Reply-To: <20230109151631.24923-1-mgorman@techsingularity.net>
References: <20230109151631.24923-1-mgorman@techsingularity.net>
RT tasks are allowed to dip below the min reserve, but ALLOC_HARDER is
typically combined with ALLOC_MIN_RESERVE, so RT tasks are a little unusual.
While there is some justification for allowing RT tasks access to memory
reserves, there is a strong chance that an RT task that is also under memory
pressure is at risk of missing deadlines anyway. Relax how much of the
reserves an RT task can access by treating it the same as __GFP_HIGH
allocations.
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka
Acked-by: Michal Hocko
---
 mm/page_alloc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 244c1e675dc8..0040b4e00913 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4847,7 +4847,7 @@ gfp_to_alloc_flags(gfp_t gfp_mask)
 		 */
 		alloc_flags &= ~ALLOC_CPUSET;
 	} else if (unlikely(rt_task(current)) && in_task())
-		alloc_flags |= ALLOC_HARDER;
+		alloc_flags |= ALLOC_MIN_RESERVE;

 	alloc_flags = gfp_to_alloc_flags_cma(gfp_mask, alloc_flags);

From patchwork Mon Jan 9 15:16:27 2023
X-Patchwork-Id: 13093710
From: Mel Gorman <mgorman@techsingularity.net>
To: Linux-MM
Cc: Andrew Morton, Michal Hocko, NeilBrown, Thierry Reding, Matthew Wilcox,
    Vlastimil Babka, LKML, Mel Gorman
Subject: [PATCH 3/7] mm/page_alloc: Explicitly record high-order atomic
 allocations in alloc_flags
Date: Mon, 9 Jan 2023 15:16:27 +0000
Message-Id: <20230109151631.24923-4-mgorman@techsingularity.net>
In-Reply-To: <20230109151631.24923-1-mgorman@techsingularity.net>
References: <20230109151631.24923-1-mgorman@techsingularity.net>

A high-order ALLOC_HARDER allocation is assumed to be atomic. While that is
accurate, it changes later in the series. In preparation, explicitly record
high-order atomic allocations in gfp_to_alloc_flags(). There is a slight
functional change in that OOM handling now avoids using the high-order
reserves until it has to.
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka
---
 mm/internal.h   |  1 +
 mm/page_alloc.c | 29 +++++++++++++++++++++++------
 2 files changed, 24 insertions(+), 6 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 403e4386626d..178484d9fd94 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -746,6 +746,7 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
 #else
 #define ALLOC_NOFRAGMENT	  0x0
 #endif
+#define ALLOC_HIGHATOMIC	0x200 /* Allows access to MIGRATE_HIGHATOMIC */
 #define ALLOC_KSWAPD		0x800 /* allow waking of kswapd, __GFP_KSWAPD_RECLAIM set */

 enum ttu_flags;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0040b4e00913..0ef4f3236a5a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3706,10 +3706,20 @@ struct page *rmqueue_buddy(struct zone *preferred_zone, struct zone *zone,
 		 * reserved for high-order atomic allocation, so order-0
 		 * request should skip it.
 		 */
-		if (order > 0 && alloc_flags & ALLOC_HARDER)
+		if (alloc_flags & ALLOC_HIGHATOMIC)
 			page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
 		if (!page) {
 			page = __rmqueue(zone, order, migratetype, alloc_flags);
+
+			/*
+			 * If the allocation fails, allow OOM handling access
+			 * to HIGHATOMIC reserves as failing now is worse than
+			 * failing a high-order atomic allocation in the
+			 * future.
+			 */
+			if (!page && (alloc_flags & ALLOC_OOM))
+				page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
+
 			if (!page) {
 				spin_unlock_irqrestore(&zone->lock, flags);
 				return NULL;
@@ -4023,8 +4033,10 @@ bool __zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark,
 			return true;
 		}
 #endif
-		if (alloc_harder && !free_area_empty(area, MIGRATE_HIGHATOMIC))
+		if ((alloc_flags & (ALLOC_HIGHATOMIC|ALLOC_OOM)) &&
+		    !free_area_empty(area, MIGRATE_HIGHATOMIC)) {
 			return true;
+		}
 	}
 	return false;
 }
@@ -4286,7 +4298,7 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 		 * If this is a high-order atomic allocation then check
 		 * if the pageblock should be reserved for the future
 		 */
-		if (unlikely(order && (alloc_flags & ALLOC_HARDER)))
+		if (unlikely(alloc_flags & ALLOC_HIGHATOMIC))
 			reserve_highatomic_pageblock(page, zone, order);

 		return page;
@@ -4813,7 +4825,7 @@ static void wake_all_kswapds(unsigned int order, gfp_t gfp_mask,
 }

 static inline unsigned int
-gfp_to_alloc_flags(gfp_t gfp_mask)
+gfp_to_alloc_flags(gfp_t gfp_mask, unsigned int order)
 {
 	unsigned int alloc_flags = ALLOC_WMARK_MIN | ALLOC_CPUSET;

@@ -4839,8 +4851,13 @@ gfp_to_alloc_flags(gfp_t gfp_mask)
 	 * Not worth trying to allocate harder for __GFP_NOMEMALLOC even
 	 * if it can't schedule.
 	 */
-	if (!(gfp_mask & __GFP_NOMEMALLOC))
+	if (!(gfp_mask & __GFP_NOMEMALLOC)) {
 		alloc_flags |= ALLOC_HARDER;
+
+		if (order > 0)
+			alloc_flags |= ALLOC_HIGHATOMIC;
+	}
+
 	/*
 	 * Ignore cpuset mems for GFP_ATOMIC rather than fail, see the
 	 * comment for __cpuset_node_allowed().
@@ -5048,7 +5065,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	 * kswapd needs to be woken up, and to avoid the cost of setting up
 	 * alloc_flags precisely. So we do that now.
 	 */
-	alloc_flags = gfp_to_alloc_flags(gfp_mask);
+	alloc_flags = gfp_to_alloc_flags(gfp_mask, order);

 	/*
 	 * We need to recalculate the starting point for the zonelist iterator

From patchwork Mon Jan 9 15:16:28 2023
X-Patchwork-Id: 13093711
From: Mel Gorman <mgorman@techsingularity.net>
To: Linux-MM
Cc: Andrew Morton, Michal Hocko, NeilBrown, Thierry Reding, Matthew Wilcox,
    Vlastimil Babka, LKML, Mel Gorman
Subject: [PATCH 4/7] mm/page_alloc: Explicitly define what alloc flags deplete
 min reserves
Date: Mon, 9 Jan 2023 15:16:28 +0000
Message-Id: <20230109151631.24923-5-mgorman@techsingularity.net>
In-Reply-To: <20230109151631.24923-1-mgorman@techsingularity.net>
References: <20230109151631.24923-1-mgorman@techsingularity.net>

As there are more ALLOC_ flags that affect reserves, define which flags
affect reserves and clarify the effect of each flag.
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka
Acked-by: Michal Hocko
---
 mm/internal.h   |  3 +++
 mm/page_alloc.c | 34 ++++++++++++++++++++++------------
 2 files changed, 25 insertions(+), 12 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 178484d9fd94..8706d46863df 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -749,6 +749,9 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
 #define ALLOC_HIGHATOMIC	0x200 /* Allows access to MIGRATE_HIGHATOMIC */
 #define ALLOC_KSWAPD		0x800 /* allow waking of kswapd, __GFP_KSWAPD_RECLAIM set */

+/* Flags that allow allocations below the min watermark. */
+#define ALLOC_RESERVES (ALLOC_HARDER|ALLOC_MIN_RESERVE|ALLOC_HIGHATOMIC|ALLOC_OOM)
+
 enum ttu_flags;
 struct tlbflush_unmap_batch;

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0ef4f3236a5a..6f41b84a97ac 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3949,15 +3949,14 @@ ALLOW_ERROR_INJECTION(should_fail_alloc_page, TRUE);
 static inline long __zone_watermark_unusable_free(struct zone *z,
 				unsigned int order, unsigned int alloc_flags)
 {
-	const bool alloc_harder = (alloc_flags & (ALLOC_HARDER|ALLOC_OOM));
 	long unusable_free = (1 << order) - 1;

 	/*
-	 * If the caller does not have rights to ALLOC_HARDER then subtract
-	 * the high-atomic reserves. This will over-estimate the size of the
-	 * atomic reserve but it avoids a search.
+	 * If the caller does not have rights to reserves below the min
+	 * watermark then subtract the high-atomic reserves. This will
+	 * over-estimate the size of the atomic reserve but it avoids a search.
 	 */
-	if (likely(!alloc_harder))
+	if (likely(!(alloc_flags & ALLOC_RESERVES)))
 		unusable_free += z->nr_reserved_highatomic;

 #ifdef CONFIG_CMA
@@ -3981,25 +3980,36 @@ bool __zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark,
 {
 	long min = mark;
 	int o;
-	const bool alloc_harder = (alloc_flags & (ALLOC_HARDER|ALLOC_OOM));

 	/* free_pages may go negative - that's OK */
 	free_pages -= __zone_watermark_unusable_free(z, order, alloc_flags);

-	if (alloc_flags & ALLOC_MIN_RESERVE)
-		min -= min / 2;
+	if (unlikely(alloc_flags & ALLOC_RESERVES)) {
+		/*
+		 * __GFP_HIGH allows access to 50% of the min reserve as well
+		 * as OOM.
+		 */
+		if (alloc_flags & ALLOC_MIN_RESERVE)
+			min -= min / 2;

-	if (unlikely(alloc_harder)) {
 		/*
-		 * OOM victims can try even harder than normal ALLOC_HARDER
+		 * Non-blocking allocations can access some of the reserve
+		 * with more access if also __GFP_HIGH. The reasoning is that
+		 * a non-blocking caller may incur a more severe penalty
+		 * if it cannot get memory quickly, particularly if it's
+		 * also __GFP_HIGH.
+		 */
+		if (alloc_flags & ALLOC_HARDER)
+			min -= min / 4;
+
+		/*
+		 * OOM victims can try even harder than the normal reserve
 		 * users on the grounds that it's definitely going to be in
 		 * the exit path shortly and free memory. Any allocation it
 		 * makes during the free path will be small and short-lived.
 		 */
 		if (alloc_flags & ALLOC_OOM)
 			min -= min / 2;
-		else
-			min -= min / 4;
 	}

 	/*

From patchwork Mon Jan 9 15:16:29 2023
X-Patchwork-Id: 13093712
From: Mel Gorman
To: Linux-MM
Cc: Andrew Morton, Michal Hocko, NeilBrown, Thierry Reding, Matthew Wilcox, Vlastimil Babka, LKML, Mel Gorman
Subject: [PATCH 5/7] mm/page_alloc.c: Allow __GFP_NOFAIL requests deeper access to reserves
Date: Mon, 9 Jan 2023 15:16:29 +0000
Message-Id: <20230109151631.24923-6-mgorman@techsingularity.net>
In-Reply-To: <20230109151631.24923-1-mgorman@techsingularity.net>
References: <20230109151631.24923-1-mgorman@techsingularity.net>
Currently __GFP_NOFAIL allocations without any other flags can access 25%
of the reserves but these requests imply that the system cannot make forward
progress until the allocation succeeds. Allow __GFP_NOFAIL access to 75% of
the min reserve.
Signed-off-by: Mel Gorman
Acked-by: Vlastimil Babka
---
 mm/page_alloc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6f41b84a97ac..d2df78f5baa2 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5308,7 +5308,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	 * could deplete whole memory reserves which would just make
 	 * the situation worse
 	 */
-	page = __alloc_pages_cpuset_fallback(gfp_mask, order, ALLOC_HARDER, ac);
+	page = __alloc_pages_cpuset_fallback(gfp_mask, order, ALLOC_MIN_RESERVE|ALLOC_HARDER, ac);
 	if (page)
 		goto got_pg;

From patchwork Mon Jan 9 15:16:30 2023
From: Mel Gorman
To: Linux-MM
Cc: Andrew Morton, Michal Hocko, NeilBrown, Thierry Reding, Matthew Wilcox, Vlastimil Babka, LKML, Mel Gorman
Subject: [PATCH 6/7] mm/page_alloc: Give GFP_ATOMIC and non-blocking allocations access to reserves
Date: Mon, 9 Jan 2023 15:16:30 +0000
Message-Id: <20230109151631.24923-7-mgorman@techsingularity.net>
In-Reply-To: <20230109151631.24923-1-mgorman@techsingularity.net>
References: <20230109151631.24923-1-mgorman@techsingularity.net>

Explicit GFP_ATOMIC allocations get flagged ALLOC_HARDER which is a bit
vague. In preparation for removing __GFP_ATOMIC, give GFP_ATOMIC and other
non-blocking allocation requests equal access to reserves. Rename
ALLOC_HARDER to ALLOC_NON_BLOCK to make it more clear what the flag means.
Signed-off-by: Mel Gorman
Acked-by: Vlastimil Babka
---
 mm/internal.h   |  7 +++++--
 mm/page_alloc.c | 23 +++++++++++++----------
 2 files changed, 18 insertions(+), 12 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 8706d46863df..23a37588073a 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -735,7 +735,10 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
 #define ALLOC_OOM		ALLOC_NO_WATERMARKS
 #endif
 
-#define ALLOC_HARDER		 0x10 /* try to alloc harder */
+#define ALLOC_NON_BLOCK		 0x10 /* Caller cannot block. Allow access
+				       * to 25% of the min watermark or
+				       * 62.5% if __GFP_HIGH is set.
+				       */
 #define ALLOC_MIN_RESERVE	 0x20 /* __GFP_HIGH set. Allow access to 50%
 				       * of the min watermark.
 				       */
@@ -750,7 +753,7 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
 #define ALLOC_KSWAPD		0x800 /* allow waking of kswapd, __GFP_KSWAPD_RECLAIM set */
 
 /* Flags that allow allocations below the min watermark. */
-#define ALLOC_RESERVES (ALLOC_HARDER|ALLOC_MIN_RESERVE|ALLOC_HIGHATOMIC|ALLOC_OOM)
+#define ALLOC_RESERVES (ALLOC_NON_BLOCK|ALLOC_MIN_RESERVE|ALLOC_HIGHATOMIC|ALLOC_OOM)
 
 enum ttu_flags;
 struct tlbflush_unmap_batch;

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d2df78f5baa2..2217bab2dbb2 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3999,7 +3999,7 @@ bool __zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark,
 		 * if it cannot get memory quickly, particularly if it's
 		 * also __GFP_HIGH.
 		 */
-		if (alloc_flags & ALLOC_HARDER)
+		if (alloc_flags & ALLOC_NON_BLOCK)
 			min -= min / 4;
 
 		/*
@@ -4851,28 +4851,30 @@ gfp_to_alloc_flags(gfp_t gfp_mask, unsigned int order)
 	 * The caller may dip into page reserves a bit more if the caller
 	 * cannot run direct reclaim, or if the caller has realtime scheduling
 	 * policy or is asking for __GFP_HIGH memory. GFP_ATOMIC requests will
-	 * set both ALLOC_HARDER (__GFP_ATOMIC) and ALLOC_MIN_RESERVE(__GFP_HIGH).
+	 * set both ALLOC_NON_BLOCK and ALLOC_MIN_RESERVE(__GFP_HIGH).
	 */
	alloc_flags |= (__force int)
		(gfp_mask & (__GFP_HIGH | __GFP_KSWAPD_RECLAIM));
 
-	if (gfp_mask & __GFP_ATOMIC) {
+	if (!(gfp_mask & __GFP_DIRECT_RECLAIM)) {
		/*
		 * Not worth trying to allocate harder for __GFP_NOMEMALLOC even
		 * if it can't schedule.
		 */
		if (!(gfp_mask & __GFP_NOMEMALLOC)) {
-			alloc_flags |= ALLOC_HARDER;
+			alloc_flags |= ALLOC_NON_BLOCK;
 
			if (order > 0)
				alloc_flags |= ALLOC_HIGHATOMIC;
		}
 
		/*
-		 * Ignore cpuset mems for GFP_ATOMIC rather than fail, see the
-		 * comment for __cpuset_node_allowed().
+		 * Ignore cpuset mems for non-blocking __GFP_HIGH (probably
+		 * GFP_ATOMIC) rather than fail, see the comment for
+		 * __cpuset_node_allowed().
		 */
-		alloc_flags &= ~ALLOC_CPUSET;
+		if (alloc_flags & ALLOC_MIN_RESERVE)
+			alloc_flags &= ~ALLOC_CPUSET;
	} else if (unlikely(rt_task(current)) && in_task())
		alloc_flags |= ALLOC_MIN_RESERVE;
@@ -5304,11 +5306,12 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 
	/*
	 * Help non-failing allocations by giving them access to memory
-	 * reserves but do not use ALLOC_NO_WATERMARKS because this
+	 * reserves normally used for high priority non-blocking
+	 * allocations but do not use ALLOC_NO_WATERMARKS because this
	 * could deplete whole memory reserves which would just make
-	 * the situation worse
+	 * the situation worse.
 	 */
-	page = __alloc_pages_cpuset_fallback(gfp_mask, order, ALLOC_MIN_RESERVE|ALLOC_HARDER, ac);
+	page = __alloc_pages_cpuset_fallback(gfp_mask, order, ALLOC_MIN_RESERVE|ALLOC_NON_BLOCK, ac);
 	if (page)
 		goto got_pg;

From patchwork Mon Jan 9 15:16:31 2023
From: Mel Gorman
To: Linux-MM
Cc: Andrew Morton, Michal Hocko, NeilBrown, Thierry Reding, Matthew Wilcox, Vlastimil Babka, LKML, Mel Gorman
Subject: [PATCH 7/7] mm: discard __GFP_ATOMIC
Date: Mon, 9 Jan 2023 15:16:31 +0000
Message-Id: <20230109151631.24923-8-mgorman@techsingularity.net>
In-Reply-To: <20230109151631.24923-1-mgorman@techsingularity.net>
References: <20230109151631.24923-1-mgorman@techsingularity.net>
From: NeilBrown

__GFP_ATOMIC serves little purpose. Its main effect is to set ALLOC_HARDER
which adds a few little boosts to increase the chance of an allocation
succeeding, one of which is to lower the water-mark at which it will succeed.
It is *always* paired with __GFP_HIGH which sets ALLOC_HIGH which also
adjusts this watermark. It is probable that other users of __GFP_HIGH
should benefit from the other little bonuses that __GFP_ATOMIC gets.

__GFP_ATOMIC also gives a warning if used with __GFP_DIRECT_RECLAIM.
There is little point to this. We already get a might_sleep() warning if
__GFP_DIRECT_RECLAIM is set.

__GFP_ATOMIC allows the "watermark_boost" to be side-stepped.
It is probable that testing ALLOC_HARDER is a better fit here.

__GFP_ATOMIC is used by tegra-smmu.c to check if the allocation might
sleep. This should test __GFP_DIRECT_RECLAIM instead.

This patch:
 - removes __GFP_ATOMIC
 - allows __GFP_HIGH allocations to ignore watermark boosting as well
   as GFP_ATOMIC requests.
 - makes other adjustments as suggested by the above.

The net result is no change to GFP_ATOMIC allocations. Other
allocations that use __GFP_HIGH will benefit from a few different extra
privileges. This affects:
 - xen, dm, md, ntfs3
 - the vermillion frame buffer
 - hibernation
 - ksm
 - swap
all of which likely produce more benefit than cost if these selected
allocations are more likely to succeed quickly.

[mgorman: Minor adjustments to rework on top of a series]
Link: https://lkml.kernel.org/r/163712397076.13692.4727608274002939094@noble.neil.brown.name
Signed-off-by: NeilBrown
Signed-off-by: Mel Gorman
Acked-by: Vlastimil Babka
Acked-by: Michal Hocko
---
 Documentation/mm/balance.rst   |  2 +-
 drivers/iommu/tegra-smmu.c     |  4 ++--
 include/linux/gfp_types.h      | 12 ++++--------
 include/trace/events/mmflags.h |  1 -
 lib/test_printf.c              |  8 ++++----
 mm/internal.h                  |  2 +-
 mm/page_alloc.c                | 13 +++----------
 tools/perf/builtin-kmem.c      |  1 -
 8 files changed, 15 insertions(+), 28 deletions(-)

diff --git a/Documentation/mm/balance.rst b/Documentation/mm/balance.rst
index 6a1fadf3e173..e38e9d83c1c7 100644
--- a/Documentation/mm/balance.rst
+++ b/Documentation/mm/balance.rst
@@ -6,7 +6,7 @@ Memory Balancing
 
 Started Jan 2000 by Kanoj Sarcar
 
-Memory balancing is needed for !__GFP_ATOMIC and !__GFP_KSWAPD_RECLAIM as
+Memory balancing is needed for !__GFP_HIGH and !__GFP_KSWAPD_RECLAIM as
 well as for non __GFP_IO allocations.
 The first reason why a caller may avoid reclaim is that the caller can not

diff --git a/drivers/iommu/tegra-smmu.c b/drivers/iommu/tegra-smmu.c
index 5b1af40221ec..af8d0e685260 100644
--- a/drivers/iommu/tegra-smmu.c
+++ b/drivers/iommu/tegra-smmu.c
@@ -671,12 +671,12 @@ static struct page *as_get_pde_page(struct tegra_smmu_as *as,
 	 * allocate page in a sleeping context if GFP flags permit. Hence
 	 * spinlock needs to be unlocked and re-locked after allocation.
 	 */
-	if (!(gfp & __GFP_ATOMIC))
+	if (gfpflags_allow_blocking(gfp))
 		spin_unlock_irqrestore(&as->lock, *flags);
 
 	page = alloc_page(gfp | __GFP_DMA | __GFP_ZERO);
 
-	if (!(gfp & __GFP_ATOMIC))
+	if (gfpflags_allow_blocking(gfp))
 		spin_lock_irqsave(&as->lock, *flags);
 
 	/*

diff --git a/include/linux/gfp_types.h b/include/linux/gfp_types.h
index d88c46ca82e1..5088637fe5c2 100644
--- a/include/linux/gfp_types.h
+++ b/include/linux/gfp_types.h
@@ -31,7 +31,7 @@ typedef unsigned int __bitwise gfp_t;
 #define ___GFP_IO		0x40u
 #define ___GFP_FS		0x80u
 #define ___GFP_ZERO		0x100u
-#define ___GFP_ATOMIC		0x200u
+/* 0x200u unused */
 #define ___GFP_DIRECT_RECLAIM	0x400u
 #define ___GFP_KSWAPD_RECLAIM	0x800u
 #define ___GFP_WRITE		0x1000u
@@ -116,11 +116,8 @@ typedef unsigned int __bitwise gfp_t;
 *
 * %__GFP_HIGH indicates that the caller is high-priority and that granting
 * the request is necessary before the system can make forward progress.
- * For example, creating an IO context to clean pages.
- *
- * %__GFP_ATOMIC indicates that the caller cannot reclaim or sleep and is
- * high priority. Users are typically interrupt handlers. This may be
- * used in conjunction with %__GFP_HIGH
+ * For example creating an IO context to clean pages and requests
+ * from atomic context.
 *
 * %__GFP_MEMALLOC allows access to all memory. This should only be used when
 * the caller guarantees the allocation will allow more memory to be freed
@@ -135,7 +132,6 @@ typedef unsigned int __bitwise gfp_t;
 * %__GFP_NOMEMALLOC is used to explicitly forbid access to emergency reserves.
 * This takes precedence over the %__GFP_MEMALLOC flag if both are set.
 */
-#define __GFP_ATOMIC	((__force gfp_t)___GFP_ATOMIC)
 #define __GFP_HIGH	((__force gfp_t)___GFP_HIGH)
 #define __GFP_MEMALLOC	((__force gfp_t)___GFP_MEMALLOC)
 #define __GFP_NOMEMALLOC	((__force gfp_t)___GFP_NOMEMALLOC)
@@ -329,7 +325,7 @@ typedef unsigned int __bitwise gfp_t;
 * version does not attempt reclaim/compaction at all and is by default used
 * in page fault path, while the non-light is used by khugepaged.
 */
-#define GFP_ATOMIC	(__GFP_HIGH|__GFP_ATOMIC|__GFP_KSWAPD_RECLAIM)
+#define GFP_ATOMIC	(__GFP_HIGH|__GFP_KSWAPD_RECLAIM)
 #define GFP_KERNEL	(__GFP_RECLAIM | __GFP_IO | __GFP_FS)
 #define GFP_KERNEL_ACCOUNT (GFP_KERNEL | __GFP_ACCOUNT)
 #define GFP_NOWAIT	(__GFP_KSWAPD_RECLAIM)

diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h
index 412b5a46374c..9db52bc4ce19 100644
--- a/include/trace/events/mmflags.h
+++ b/include/trace/events/mmflags.h
@@ -31,7 +31,6 @@
 	gfpflag_string(__GFP_HIGHMEM),		\
 	gfpflag_string(GFP_DMA32),		\
 	gfpflag_string(__GFP_HIGH),		\
-	gfpflag_string(__GFP_ATOMIC),		\
 	gfpflag_string(__GFP_IO),		\
 	gfpflag_string(__GFP_FS),		\
 	gfpflag_string(__GFP_NOWARN),		\

diff --git a/lib/test_printf.c b/lib/test_printf.c
index d34dc636b81c..46b4e6c414a3 100644
--- a/lib/test_printf.c
+++ b/lib/test_printf.c
@@ -674,17 +674,17 @@ flags(void)
 	gfp = GFP_ATOMIC|__GFP_DMA;
 	test("GFP_ATOMIC|GFP_DMA", "%pGg", &gfp);
 
-	gfp = __GFP_ATOMIC;
-	test("__GFP_ATOMIC", "%pGg", &gfp);
+	gfp = __GFP_HIGH;
+	test("__GFP_HIGH", "%pGg", &gfp);
 
 	/* Any flags not translated by the table should remain numeric */
 	gfp = ~__GFP_BITS_MASK;
 	snprintf(cmp_buffer, BUF_SIZE, "%#lx", (unsigned long) gfp);
 	test(cmp_buffer, "%pGg", &gfp);
 
-	snprintf(cmp_buffer, BUF_SIZE, "__GFP_ATOMIC|%#lx",
+	snprintf(cmp_buffer, BUF_SIZE, "__GFP_HIGH|%#lx",
 		 (unsigned long) gfp);
-	gfp |= __GFP_ATOMIC;
+	gfp |= __GFP_HIGH;
 	test(cmp_buffer, "%pGg", &gfp);
 
 	kfree(cmp_buffer);

diff --git a/mm/internal.h b/mm/internal.h
index 23a37588073a..71b1111427f3 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -24,7 +24,7 @@ struct folio_batch;
 #define GFP_RECLAIM_MASK (__GFP_RECLAIM|__GFP_HIGH|__GFP_IO|__GFP_FS|\
 			__GFP_NOWARN|__GFP_RETRY_MAYFAIL|__GFP_NOFAIL|\
 			__GFP_NORETRY|__GFP_MEMALLOC|__GFP_NOMEMALLOC|\
-			__GFP_ATOMIC|__GFP_NOLOCKDEP)
+			__GFP_NOLOCKDEP)
 
 /* The GFP flags allowed during early boot */
 #define GFP_BOOT_MASK (__GFP_BITS_MASK & ~(__GFP_RECLAIM|__GFP_IO|__GFP_FS))

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2217bab2dbb2..7244ab522028 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4086,13 +4086,14 @@ static inline bool zone_watermark_fast(struct zone *z, unsigned int order,
 	if (__zone_watermark_ok(z, order, mark, highest_zoneidx, alloc_flags,
 					free_pages))
 		return true;
+
 	/*
-	 * Ignore watermark boosting for GFP_ATOMIC order-0 allocations
+	 * Ignore watermark boosting for __GFP_HIGH order-0 allocations
 	 * when checking the min watermark. The min watermark is the
 	 * point where boosting is ignored so that kswapd is woken up
 	 * when below the low watermark.
 	 */
-	if (unlikely(!order && (gfp_mask & __GFP_ATOMIC) && z->watermark_boost
+	if (unlikely(!order && (alloc_flags & ALLOC_MIN_RESERVE) && z->watermark_boost
 		&& ((alloc_flags & ALLOC_WMARK_MASK) == WMARK_MIN))) {
 		mark = z->_watermark[WMARK_MIN];
 		return __zone_watermark_ok(z, order, mark, highest_zoneidx,
@@ -5057,14 +5058,6 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	unsigned int zonelist_iter_cookie;
 	int reserve_flags;
 
-	/*
-	 * We also sanity check to catch abuse of atomic reserves being used by
-	 * callers that are not in atomic context.
-	 */
-	if (WARN_ON_ONCE((gfp_mask & (__GFP_ATOMIC|__GFP_DIRECT_RECLAIM)) ==
-			 (__GFP_ATOMIC|__GFP_DIRECT_RECLAIM)))
-		gfp_mask &= ~__GFP_ATOMIC;
-
 restart:
 	compaction_retries = 0;
 	no_progress_loops = 0;

diff --git a/tools/perf/builtin-kmem.c b/tools/perf/builtin-kmem.c
index e20656c431a4..173d407dce92 100644
--- a/tools/perf/builtin-kmem.c
+++ b/tools/perf/builtin-kmem.c
@@ -641,7 +641,6 @@ static const struct {
 	{ "__GFP_HIGHMEM",		"HM" },
 	{ "GFP_DMA32",			"D32" },
 	{ "__GFP_HIGH",			"H" },
-	{ "__GFP_ATOMIC",		"_A" },
 	{ "__GFP_IO",			"I" },
 	{ "__GFP_FS",			"F" },
 	{ "__GFP_NOWARN",		"NWR" },