From patchwork Fri Jan 13 11:12:12 2023
X-Patchwork-Submitter: Mel Gorman
X-Patchwork-Id: 13100524
From: Mel Gorman
To: Andrew Morton
Cc: Michal Hocko, NeilBrown, Thierry Reding, Matthew Wilcox,
    Vlastimil Babka, Linux-MM, LKML, Mel Gorman
Subject: [PATCH 1/6] mm/page_alloc: Rename ALLOC_HIGH to ALLOC_MIN_RESERVE
Date: Fri, 13 Jan 2023 11:12:12 +0000
Message-Id: <20230113111217.14134-2-mgorman@techsingularity.net>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230113111217.14134-1-mgorman@techsingularity.net>
References: <20230113111217.14134-1-mgorman@techsingularity.net>

__GFP_HIGH aliases to ALLOC_HIGH but the name does not really hint what it
means. As ALLOC_HIGH is internal to the allocator, rename it to
ALLOC_MIN_RESERVE to document that the min reserves can be depleted.

Signed-off-by: Mel Gorman
Acked-by: Vlastimil Babka
Acked-by: Michal Hocko
---
 mm/internal.h   | 4 +++-
 mm/page_alloc.c | 8 ++++----
 2 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index bcf75a8b032d..403e4386626d 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -736,7 +736,9 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
 #endif
 
 #define ALLOC_HARDER		 0x10 /* try to alloc harder */
-#define ALLOC_HIGH		 0x20 /* __GFP_HIGH set */
+#define ALLOC_MIN_RESERVE	 0x20 /* __GFP_HIGH set. Allow access to 50%
+				       * of the min watermark.
+				       */
 #define ALLOC_CPUSET		 0x40 /* check for correct cpuset */
 #define ALLOC_CMA		 0x80 /* allow allocations from CMA areas */
 #ifdef CONFIG_ZONE_DMA32
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0745aedebb37..244c1e675dc8 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3976,7 +3976,7 @@ bool __zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark,
 	/* free_pages may go negative - that's OK */
 	free_pages -= __zone_watermark_unusable_free(z, order, alloc_flags);
 
-	if (alloc_flags & ALLOC_HIGH)
+	if (alloc_flags & ALLOC_MIN_RESERVE)
 		min -= min / 2;
 
 	if (unlikely(alloc_harder)) {
@@ -4818,18 +4818,18 @@ gfp_to_alloc_flags(gfp_t gfp_mask)
 	unsigned int alloc_flags = ALLOC_WMARK_MIN | ALLOC_CPUSET;
 
 	/*
-	 * __GFP_HIGH is assumed to be the same as ALLOC_HIGH
+	 * __GFP_HIGH is assumed to be the same as ALLOC_MIN_RESERVE
 	 * and __GFP_KSWAPD_RECLAIM is assumed to be the same as ALLOC_KSWAPD
 	 * to save two branches.
	 */
-	BUILD_BUG_ON(__GFP_HIGH != (__force gfp_t) ALLOC_HIGH);
+	BUILD_BUG_ON(__GFP_HIGH != (__force gfp_t) ALLOC_MIN_RESERVE);
 	BUILD_BUG_ON(__GFP_KSWAPD_RECLAIM != (__force gfp_t) ALLOC_KSWAPD);
 
 	/*
 	 * The caller may dip into page reserves a bit more if the caller
 	 * cannot run direct reclaim, or if the caller has realtime scheduling
 	 * policy or is asking for __GFP_HIGH memory. GFP_ATOMIC requests will
-	 * set both ALLOC_HARDER (__GFP_ATOMIC) and ALLOC_HIGH (__GFP_HIGH).
+	 * set both ALLOC_HARDER (__GFP_ATOMIC) and ALLOC_MIN_RESERVE(__GFP_HIGH).
 	 */
 	alloc_flags |= (__force int)
 		(gfp_mask & (__GFP_HIGH | __GFP_KSWAPD_RECLAIM));
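As a worked illustration of what the renamed flag grants (a standalone
sketch, not part of the patch; the flag value mirrors mm/internal.h and the
1024-page mark is an assumed example):

#include <stdio.h>

#define ALLOC_MIN_RESERVE 0x20	/* __GFP_HIGH set */

/* Mirrors the min adjustment in __zone_watermark_ok() above. */
static long effective_min(long mark, unsigned int alloc_flags)
{
	long min = mark;

	if (alloc_flags & ALLOC_MIN_RESERVE)
		min -= min / 2;	/* may deplete 50% of the min reserve */
	return min;
}

int main(void)
{
	/* A normal request must leave the full min watermark intact... */
	printf("default:    %ld\n", effective_min(1024, 0));			/* 1024 */
	/* ...while __GFP_HIGH may dip halfway into it. */
	printf("__GFP_HIGH: %ld\n", effective_min(1024, ALLOC_MIN_RESERVE));	/* 512 */
	return 0;
}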
From patchwork Fri Jan 13 11:12:13 2023
X-Patchwork-Submitter: Mel Gorman
X-Patchwork-Id: 13100525
From: Mel Gorman
To: Andrew Morton
Cc: Michal Hocko, NeilBrown, Thierry Reding, Matthew Wilcox,
    Vlastimil Babka, Linux-MM, LKML, Mel Gorman
Subject: [PATCH 2/6] mm/page_alloc: Treat RT tasks similar to __GFP_HIGH
Date: Fri, 13 Jan 2023 11:12:13 +0000
Message-Id: <20230113111217.14134-3-mgorman@techsingularity.net>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230113111217.14134-1-mgorman@techsingularity.net>
References: <20230113111217.14134-1-mgorman@techsingularity.net>

RT tasks are allowed to dip below the min reserve, but ALLOC_HARDER is
typically combined with ALLOC_MIN_RESERVE, so RT tasks are a little unusual.
While there is some justification for allowing RT tasks access to memory
reserves, there is a strong chance that an RT task that is also under memory
pressure is at risk of missing deadlines anyway. Relax how much of the
reserves an RT task can access by treating it the same as __GFP_HIGH
allocations.

Note that in a future kernel release the RT special casing will be removed.
Hard realtime tasks should be locking down resources in advance and ensuring
enough memory is available. Even a soft-realtime task like audio or video
live decoding, which cannot jitter, should be allocating both memory and any
disk space required up-front before the recording starts, instead of relying
on reserves. At best, reserve access will only delay the problem by a very
short interval.
Signed-off-by: Mel Gorman
Acked-by: Vlastimil Babka
Acked-by: Michal Hocko
---
 mm/page_alloc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 244c1e675dc8..0040b4e00913 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4847,7 +4847,7 @@ gfp_to_alloc_flags(gfp_t gfp_mask)
 		 */
 		alloc_flags &= ~ALLOC_CPUSET;
 	} else if (unlikely(rt_task(current)) && in_task())
-		alloc_flags |= ALLOC_HARDER;
+		alloc_flags |= ALLOC_MIN_RESERVE;
 
 	alloc_flags = gfp_to_alloc_flags_cma(gfp_mask, alloc_flags);
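The changelog above recommends that realtime tasks lock down memory up-front
rather than lean on allocator reserves. A minimal userspace sketch of that
approach (illustrative only; the 64MB working-set size is an assumed
example):

#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t sz = 64UL << 20;		/* assumed working-set size: 64MB */
	char *buf = malloc(sz);

	if (!buf)
		return 1;
	memset(buf, 1, sz);			/* fault every page in now */
	if (mlockall(MCL_CURRENT | MCL_FUTURE))	/* pin it before going RT */
		return 1;
	/* ... enter the deadline-sensitive loop; no allocations here ... */
	free(buf);
	return 0;
}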
From patchwork Fri Jan 13 11:12:14 2023
X-Patchwork-Submitter: Mel Gorman
X-Patchwork-Id: 13100526
From: Mel Gorman
To: Andrew Morton
Cc: Michal Hocko, NeilBrown, Thierry Reding, Matthew Wilcox,
    Vlastimil Babka, Linux-MM, LKML, Mel Gorman
Subject: [PATCH 3/6] mm/page_alloc: Explicitly record high-order atomic
 allocations in alloc_flags
Date: Fri, 13 Jan 2023 11:12:14 +0000
Message-Id: <20230113111217.14134-4-mgorman@techsingularity.net>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230113111217.14134-1-mgorman@techsingularity.net>
References: <20230113111217.14134-1-mgorman@techsingularity.net>

A high-order ALLOC_HARDER allocation is assumed to be atomic. While that
is accurate, it changes later in the series. In preparation, explicitly
record high-order atomic allocations in gfp_to_alloc_flags().

Signed-off-by: Mel Gorman
Acked-by: Vlastimil Babka
Acked-by: Michal Hocko
---
 mm/internal.h   |  1 +
 mm/page_alloc.c | 29 +++++++++++++++++++++++------
 2 files changed, 24 insertions(+), 6 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 403e4386626d..178484d9fd94 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -746,6 +746,7 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
 #else
 #define ALLOC_NOFRAGMENT	  0x0
 #endif
+#define ALLOC_HIGHATOMIC	0x200 /* Allows access to MIGRATE_HIGHATOMIC */
 #define ALLOC_KSWAPD		0x800 /* allow waking of kswapd, __GFP_KSWAPD_RECLAIM set */
 
 enum ttu_flags;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0040b4e00913..0ef4f3236a5a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3706,10 +3706,20 @@ struct page *rmqueue_buddy(struct zone *preferred_zone, struct zone *zone,
 			 * reserved for high-order atomic allocation, so order-0
 			 * request should skip it.
			 */
-			if (order > 0 && alloc_flags & ALLOC_HARDER)
+			if (alloc_flags & ALLOC_HIGHATOMIC)
 				page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
 			if (!page) {
 				page = __rmqueue(zone, order, migratetype, alloc_flags);
+
+				/*
+				 * If the allocation fails, allow OOM handling access
+				 * to HIGHATOMIC reserves as failing now is worse than
+				 * failing a high-order atomic allocation in the
+				 * future.
+				 */
+				if (!page && (alloc_flags & ALLOC_OOM))
+					page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
+
 				if (!page) {
 					spin_unlock_irqrestore(&zone->lock, flags);
 					return NULL;
@@ -4023,8 +4033,10 @@ bool __zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark,
 			return true;
 		}
 #endif
-		if (alloc_harder && !free_area_empty(area, MIGRATE_HIGHATOMIC))
+		if ((alloc_flags & (ALLOC_HIGHATOMIC|ALLOC_OOM)) &&
+		    !free_area_empty(area, MIGRATE_HIGHATOMIC)) {
 			return true;
+		}
 	}
 	return false;
 }
@@ -4286,7 +4298,7 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 		 * If this is a high-order atomic allocation then check
 		 * if the pageblock should be reserved for the future
 		 */
-		if (unlikely(order && (alloc_flags & ALLOC_HARDER)))
+		if (unlikely(alloc_flags & ALLOC_HIGHATOMIC))
 			reserve_highatomic_pageblock(page, zone, order);
 
 		return page;
@@ -4813,7 +4825,7 @@ static void wake_all_kswapds(unsigned int order, gfp_t gfp_mask,
 }
 
 static inline unsigned int
-gfp_to_alloc_flags(gfp_t gfp_mask)
+gfp_to_alloc_flags(gfp_t gfp_mask, unsigned int order)
 {
 	unsigned int alloc_flags = ALLOC_WMARK_MIN | ALLOC_CPUSET;
 
@@ -4839,8 +4851,13 @@ gfp_to_alloc_flags(gfp_t gfp_mask)
 		 * Not worth trying to allocate harder for __GFP_NOMEMALLOC even
 		 * if it can't schedule.
 		 */
-		if (!(gfp_mask & __GFP_NOMEMALLOC))
+		if (!(gfp_mask & __GFP_NOMEMALLOC)) {
 			alloc_flags |= ALLOC_HARDER;
+
+			if (order > 0)
+				alloc_flags |= ALLOC_HIGHATOMIC;
+		}
+
 		/*
 		 * Ignore cpuset mems for GFP_ATOMIC rather than fail, see the
 		 * comment for __cpuset_node_allowed().
@@ -5048,7 +5065,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	 * kswapd needs to be woken up, and to avoid the cost of setting up
 	 * alloc_flags precisely. So we do that now.
	 */
-	alloc_flags = gfp_to_alloc_flags(gfp_mask);
+	alloc_flags = gfp_to_alloc_flags(gfp_mask, order);
 
 	/*
 	 * We need to recalculate the starting point for the zonelist iterator
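To summarise the derivation this patch introduces (a simplified sketch of
the gfp_to_alloc_flags() hunk above, with flag values from mm/internal.h):
only a non-blocking request of order > 0 is tagged as a candidate for the
high-order atomic reserves.

#define ALLOC_HARDER		0x10
#define ALLOC_HIGHATOMIC	0x200	/* Allows access to MIGRATE_HIGHATOMIC */

/* Simplified from the gfp_to_alloc_flags() change above. */
static unsigned int atomic_alloc_flags(unsigned int order, int nomemalloc)
{
	unsigned int alloc_flags = 0;

	if (!nomemalloc) {
		alloc_flags |= ALLOC_HARDER;
		if (order > 0)			/* high-order atomic request */
			alloc_flags |= ALLOC_HIGHATOMIC;
	}
	return alloc_flags;
}

/*
 * atomic_alloc_flags(0, 0) == ALLOC_HARDER                    (order-0 atomic)
 * atomic_alloc_flags(3, 0) == ALLOC_HARDER | ALLOC_HIGHATOMIC (high-order atomic)
 */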
From patchwork Fri Jan 13 11:12:15 2023
X-Patchwork-Submitter: Mel Gorman
X-Patchwork-Id: 13100527
From: Mel Gorman
To: Andrew Morton
Cc: Michal Hocko, NeilBrown, Thierry Reding, Matthew Wilcox,
    Vlastimil Babka, Linux-MM, LKML, Mel Gorman
Subject: [PATCH 4/6] mm/page_alloc: Explicitly define what alloc flags
 deplete min reserves
Date: Fri, 13 Jan 2023 11:12:15 +0000
Message-Id: <20230113111217.14134-5-mgorman@techsingularity.net>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230113111217.14134-1-mgorman@techsingularity.net>
References: <20230113111217.14134-1-mgorman@techsingularity.net>

As there are more ALLOC_ flags that affect reserves, define what flags
affect reserves and clarify the effect of each flag.

Signed-off-by: Mel Gorman
Acked-by: Vlastimil Babka
Acked-by: Michal Hocko
---
 mm/internal.h   |  3 +++
 mm/page_alloc.c | 34 ++++++++++++++++++++++------------
 2 files changed, 25 insertions(+), 12 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 178484d9fd94..8706d46863df 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -749,6 +749,9 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
 #define ALLOC_HIGHATOMIC	0x200 /* Allows access to MIGRATE_HIGHATOMIC */
 #define ALLOC_KSWAPD		0x800 /* allow waking of kswapd, __GFP_KSWAPD_RECLAIM set */
 
+/* Flags that allow allocations below the min watermark. */
+#define ALLOC_RESERVES (ALLOC_HARDER|ALLOC_MIN_RESERVE|ALLOC_HIGHATOMIC|ALLOC_OOM)
+
 enum ttu_flags;
 struct tlbflush_unmap_batch;
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0ef4f3236a5a..6f41b84a97ac 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3949,15 +3949,14 @@ ALLOW_ERROR_INJECTION(should_fail_alloc_page, TRUE);
 static inline long __zone_watermark_unusable_free(struct zone *z,
 				unsigned int order, unsigned int alloc_flags)
 {
-	const bool alloc_harder = (alloc_flags & (ALLOC_HARDER|ALLOC_OOM));
 	long unusable_free = (1 << order) - 1;
 
 	/*
-	 * If the caller does not have rights to ALLOC_HARDER then subtract
-	 * the high-atomic reserves. This will over-estimate the size of the
-	 * atomic reserve but it avoids a search.
+	 * If the caller does not have rights to reserves below the min
+	 * watermark then subtract the high-atomic reserves. This will
+	 * over-estimate the size of the atomic reserve but it avoids a search.
	 */
-	if (likely(!alloc_harder))
+	if (likely(!(alloc_flags & ALLOC_RESERVES)))
 		unusable_free += z->nr_reserved_highatomic;
 
 #ifdef CONFIG_CMA
@@ -3981,25 +3980,36 @@ bool __zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark,
 {
 	long min = mark;
 	int o;
-	const bool alloc_harder = (alloc_flags & (ALLOC_HARDER|ALLOC_OOM));
 
 	/* free_pages may go negative - that's OK */
 	free_pages -= __zone_watermark_unusable_free(z, order, alloc_flags);
 
-	if (alloc_flags & ALLOC_MIN_RESERVE)
-		min -= min / 2;
+	if (unlikely(alloc_flags & ALLOC_RESERVES)) {
+		/*
+		 * __GFP_HIGH allows access to 50% of the min reserve as well
+		 * as OOM.
+		 */
+		if (alloc_flags & ALLOC_MIN_RESERVE)
+			min -= min / 2;
 
-	if (unlikely(alloc_harder)) {
 		/*
-		 * OOM victims can try even harder than normal ALLOC_HARDER
+		 * Non-blocking allocations can access some of the reserve
+		 * with more access if also __GFP_HIGH. The reasoning is that
+		 * a non-blocking caller may incur a more severe penalty
+		 * if it cannot get memory quickly, particularly if it's
+		 * also __GFP_HIGH.
+		 */
+		if (alloc_flags & ALLOC_HARDER)
+			min -= min / 4;
+
+		/*
+		 * OOM victims can try even harder than the normal reserve
 		 * users on the grounds that it's definitely going to be in
 		 * the exit path shortly and free memory. Any allocation it
 		 * makes during the free path will be small and short-lived.
 		 */
 		if (alloc_flags & ALLOC_OOM)
 			min -= min / 2;
-		else
-			min -= min / 4;
 	}
 
 	/*
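Taking the reworked __zone_watermark_ok() block in isolation, the cumulative
reductions work out as follows (a standalone sketch using an assumed min
watermark of 1000 pages; flag values as in mm/internal.h):

#define ALLOC_HARDER		0x10
#define ALLOC_MIN_RESERVE	0x20
#define ALLOC_OOM		0x08	/* assumed value for illustration */

/* Mirrors the ALLOC_RESERVES block above. */
static long reserve_floor(long min, unsigned int alloc_flags)
{
	if (alloc_flags & ALLOC_MIN_RESERVE)
		min -= min / 2;		/* 1000 -> 500 */
	if (alloc_flags & ALLOC_HARDER)
		min -= min / 4;		/* 500 -> 375 for GFP_ATOMIC */
	if (alloc_flags & ALLOC_OOM)
		min -= min / 2;		/* OOM victims halve again */
	return min;
}

/*
 * reserve_floor(1000, ALLOC_MIN_RESERVE)              == 500 (50% access)
 * reserve_floor(1000, ALLOC_MIN_RESERVE|ALLOC_HARDER) == 375 (62.5% access)
 * reserve_floor(1000, ALLOC_HARDER)                   == 750 (25% access)
 */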
From patchwork Fri Jan 13 11:12:16 2023
X-Patchwork-Submitter: Mel Gorman
X-Patchwork-Id: 13100528
From: Mel Gorman
To: Andrew Morton
Cc: Michal Hocko, NeilBrown, Thierry Reding, Matthew Wilcox,
    Vlastimil Babka, Linux-MM, LKML, Mel Gorman
Subject: [PATCH 5/6] mm/page_alloc: Explicitly define how __GFP_HIGH
 non-blocking allocations access reserves
Date: Fri, 13 Jan 2023 11:12:16 +0000
Message-Id: <20230113111217.14134-6-mgorman@techsingularity.net>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230113111217.14134-1-mgorman@techsingularity.net>
References: <20230113111217.14134-1-mgorman@techsingularity.net>

GFP_ATOMIC allocations get flagged ALLOC_HARDER, which is a vague
description. In preparation for the removal of __GFP_ATOMIC, redefine
__GFP_ATOMIC to simply mean non-blocking and rename ALLOC_HARDER to
ALLOC_NON_BLOCK accordingly. __GFP_HIGH is required for access to
reserves, but a non-blocking caller that is also __GFP_HIGH is granted
more access. For example, GFP_NOWAIT is non-blocking but has no special
access to reserves. A __GFP_NOFAIL blocking allocation is granted access
similar to __GFP_HIGH if the only alternative is an OOM kill.
Signed-off-by: Mel Gorman
Acked-by: Michal Hocko
Acked-by: Vlastimil Babka
---
 mm/internal.h   |  7 +++++--
 mm/page_alloc.c | 44 ++++++++++++++++++++++++--------------------
 2 files changed, 29 insertions(+), 22 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 8706d46863df..23a37588073a 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -735,7 +735,10 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
 #define ALLOC_OOM		ALLOC_NO_WATERMARKS
 #endif
 
-#define ALLOC_HARDER		 0x10 /* try to alloc harder */
+#define ALLOC_NON_BLOCK		 0x10 /* Caller cannot block. Allow access
+				       * to 25% of the min watermark or
+				       * 62.5% if __GFP_HIGH is set.
+				       */
 #define ALLOC_MIN_RESERVE	 0x20 /* __GFP_HIGH set. Allow access to 50%
 				       * of the min watermark.
 				       */
@@ -750,7 +753,7 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
 #define ALLOC_KSWAPD		0x800 /* allow waking of kswapd, __GFP_KSWAPD_RECLAIM set */
 
 /* Flags that allow allocations below the min watermark. */
-#define ALLOC_RESERVES (ALLOC_HARDER|ALLOC_MIN_RESERVE|ALLOC_HIGHATOMIC|ALLOC_OOM)
+#define ALLOC_RESERVES (ALLOC_NON_BLOCK|ALLOC_MIN_RESERVE|ALLOC_HIGHATOMIC|ALLOC_OOM)
 
 enum ttu_flags;
 struct tlbflush_unmap_batch;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6f41b84a97ac..b9ae0ba0a2ab 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3989,18 +3989,19 @@ bool __zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark,
 		 * __GFP_HIGH allows access to 50% of the min reserve as well
 		 * as OOM.
 		 */
-		if (alloc_flags & ALLOC_MIN_RESERVE)
+		if (alloc_flags & ALLOC_MIN_RESERVE) {
 			min -= min / 2;
 
-		/*
-		 * Non-blocking allocations can access some of the reserve
-		 * with more access if also __GFP_HIGH. The reasoning is that
-		 * a non-blocking caller may incur a more severe penalty
-		 * if it cannot get memory quickly, particularly if it's
-		 * also __GFP_HIGH.
-		 */
-		if (alloc_flags & ALLOC_HARDER)
-			min -= min / 4;
+			/*
+			 * Non-blocking allocations (e.g. GFP_ATOMIC) can
+			 * access more reserves than just __GFP_HIGH. Other
+			 * non-blocking allocations requests such as GFP_NOWAIT
+			 * or (GFP_KERNEL & ~__GFP_DIRECT_RECLAIM) do not get
+			 * access to the min reserve.
+			 */
+			if (alloc_flags & ALLOC_NON_BLOCK)
+				min -= min / 4;
+		}
 
 		/*
 		 * OOM victims can try even harder than the normal reserve
@@ -4851,28 +4852,30 @@ gfp_to_alloc_flags(gfp_t gfp_mask, unsigned int order)
 	 * The caller may dip into page reserves a bit more if the caller
 	 * cannot run direct reclaim, or if the caller has realtime scheduling
 	 * policy or is asking for __GFP_HIGH memory. GFP_ATOMIC requests will
-	 * set both ALLOC_HARDER (__GFP_ATOMIC) and ALLOC_MIN_RESERVE(__GFP_HIGH).
+	 * set both ALLOC_NON_BLOCK and ALLOC_MIN_RESERVE(__GFP_HIGH).
 	 */
 	alloc_flags |= (__force int)
 		(gfp_mask & (__GFP_HIGH | __GFP_KSWAPD_RECLAIM));
 
-	if (gfp_mask & __GFP_ATOMIC) {
+	if (!(gfp_mask & __GFP_DIRECT_RECLAIM)) {
 		/*
 		 * Not worth trying to allocate harder for __GFP_NOMEMALLOC even
 		 * if it can't schedule.
 		 */
 		if (!(gfp_mask & __GFP_NOMEMALLOC)) {
-			alloc_flags |= ALLOC_HARDER;
+			alloc_flags |= ALLOC_NON_BLOCK;
 
 			if (order > 0)
 				alloc_flags |= ALLOC_HIGHATOMIC;
 		}
 
 		/*
-		 * Ignore cpuset mems for GFP_ATOMIC rather than fail, see the
-		 * comment for __cpuset_node_allowed().
+		 * Ignore cpuset mems for non-blocking __GFP_HIGH (probably
+		 * GFP_ATOMIC) rather than fail, see the comment for
+		 * __cpuset_node_allowed().
		 */
-		alloc_flags &= ~ALLOC_CPUSET;
+		if (alloc_flags & ALLOC_MIN_RESERVE)
+			alloc_flags &= ~ALLOC_CPUSET;
 	} else if (unlikely(rt_task(current)) && in_task())
 		alloc_flags |= ALLOC_MIN_RESERVE;
 
@@ -5303,12 +5306,13 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	WARN_ON_ONCE_GFP(costly_order, gfp_mask);
 
 	/*
-	 * Help non-failing allocations by giving them access to memory
-	 * reserves but do not use ALLOC_NO_WATERMARKS because this
+	 * Help non-failing allocations by giving some access to memory
+	 * reserves normally used for high priority non-blocking
+	 * allocations but do not use ALLOC_NO_WATERMARKS because this
 	 * could deplete whole memory reserves which would just make
-	 * the situation worse
+	 * the situation worse.
 	 */
-	page = __alloc_pages_cpuset_fallback(gfp_mask, order, ALLOC_HARDER, ac);
+	page = __alloc_pages_cpuset_fallback(gfp_mask, order, ALLOC_MIN_RESERVE, ac);
 	if (page)
 		goto got_pg;
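Put together, the reserve access each common request type gets after this
patch works out as below (a standalone sketch derived from the hunks above;
an assumed min watermark of 1000 pages, flag values as in mm/internal.h):

#define ALLOC_NON_BLOCK		0x10
#define ALLOC_MIN_RESERVE	0x20

/* Mirrors the nested ALLOC_RESERVES logic in __zone_watermark_ok() above. */
static long reserve_floor(long min, unsigned int alloc_flags)
{
	if (alloc_flags & ALLOC_MIN_RESERVE) {
		min -= min / 2;
		if (alloc_flags & ALLOC_NON_BLOCK)
			min -= min / 4;
	}
	return min;
}

/*
 * With an assumed min watermark of 1000 pages:
 *   GFP_KERNEL / GFP_NOWAIT (no __GFP_HIGH)         -> floor 1000, no access
 *   __GFP_HIGH, blocking (ALLOC_MIN_RESERVE)        -> floor  500, 50% access
 *   GFP_ATOMIC (ALLOC_MIN_RESERVE|ALLOC_NON_BLOCK)  -> floor  375, 62.5% access
 */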
From patchwork Fri Jan 13 11:12:17 2023
X-Patchwork-Submitter: Mel Gorman
X-Patchwork-Id: 13100550
From: Mel Gorman
To: Andrew Morton
Cc: Michal Hocko, NeilBrown, Thierry Reding, Matthew Wilcox,
    Vlastimil Babka, Linux-MM, LKML, Mel Gorman
Subject: [PATCH 6/6] mm: discard __GFP_ATOMIC
Date: Fri, 13 Jan 2023 11:12:17 +0000
Message-Id: <20230113111217.14134-7-mgorman@techsingularity.net>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230113111217.14134-1-mgorman@techsingularity.net>
References: <20230113111217.14134-1-mgorman@techsingularity.net>

From: NeilBrown

__GFP_ATOMIC serves little purpose. Its main effect is to set ALLOC_HARDER,
which adds a few little boosts to increase the chance of an allocation
succeeding, one of which is to lower the watermark at which it will succeed.
It is *always* paired with __GFP_HIGH, which sets ALLOC_HIGH (renamed to
ALLOC_MIN_RESERVE earlier in this series) and also adjusts this watermark.
It is probable that other users of __GFP_HIGH should benefit from the other
little bonuses that __GFP_ATOMIC gets.

__GFP_ATOMIC also gives a warning if used with __GFP_DIRECT_RECLAIM.
There is little point to this. We already get a might_sleep() warning if
__GFP_DIRECT_RECLAIM is set.

__GFP_ATOMIC allows the "watermark_boost" to be side-stepped. It is
probable that testing ALLOC_HARDER is a better fit here.

__GFP_ATOMIC is used by tegra-smmu.c to check if the allocation might
sleep. This should test __GFP_DIRECT_RECLAIM instead.

This patch:
 - removes __GFP_ATOMIC
 - allows __GFP_HIGH allocations to ignore watermark boosting as well
   as GFP_ATOMIC requests.
 - makes other adjustments as suggested by the above.
The net result is no change to GFP_ATOMIC allocations. Other allocations
that use __GFP_HIGH will benefit from a few different extra privileges.
This affects:
   xen, dm, md, ntfs3
   the vermilion frame buffer
   hibernation
   ksm
   swap
all of which likely produce more benefit than cost if these selected
allocations are more likely to succeed quickly.

[mgorman: Minor adjustments to rework on top of a series]
Link: https://lkml.kernel.org/r/163712397076.13692.4727608274002939094@noble.neil.brown.name
Signed-off-by: NeilBrown
Signed-off-by: Mel Gorman
Acked-by: Vlastimil Babka
Acked-by: Michal Hocko
---
 Documentation/mm/balance.rst   |  2 +-
 drivers/iommu/tegra-smmu.c     |  4 ++--
 include/linux/gfp_types.h      | 12 ++++--------
 include/trace/events/mmflags.h |  1 -
 lib/test_printf.c              |  8 ++++----
 mm/internal.h                  |  2 +-
 mm/page_alloc.c                | 13 +++----------
 tools/perf/builtin-kmem.c      |  1 -
 8 files changed, 15 insertions(+), 28 deletions(-)

diff --git a/Documentation/mm/balance.rst b/Documentation/mm/balance.rst
index 6a1fadf3e173..e38e9d83c1c7 100644
--- a/Documentation/mm/balance.rst
+++ b/Documentation/mm/balance.rst
@@ -6,7 +6,7 @@ Memory Balancing
 
 Started Jan 2000 by Kanoj Sarcar
 
-Memory balancing is needed for !__GFP_ATOMIC and !__GFP_KSWAPD_RECLAIM as
+Memory balancing is needed for !__GFP_HIGH and !__GFP_KSWAPD_RECLAIM as
 well as for non __GFP_IO allocations.
 
 The first reason why a caller may avoid reclaim is that the caller can not
diff --git a/drivers/iommu/tegra-smmu.c b/drivers/iommu/tegra-smmu.c
index 5b1af40221ec..af8d0e685260 100644
--- a/drivers/iommu/tegra-smmu.c
+++ b/drivers/iommu/tegra-smmu.c
@@ -671,12 +671,12 @@ static struct page *as_get_pde_page(struct tegra_smmu_as *as,
 	 * allocate page in a sleeping context if GFP flags permit. Hence
 	 * spinlock needs to be unlocked and re-locked after allocation.
 	 */
-	if (!(gfp & __GFP_ATOMIC))
+	if (gfpflags_allow_blocking(gfp))
 		spin_unlock_irqrestore(&as->lock, *flags);
 
 	page = alloc_page(gfp | __GFP_DMA | __GFP_ZERO);
 
-	if (!(gfp & __GFP_ATOMIC))
+	if (gfpflags_allow_blocking(gfp))
 		spin_lock_irqsave(&as->lock, *flags);
 
 	/*
diff --git a/include/linux/gfp_types.h b/include/linux/gfp_types.h
index d88c46ca82e1..5088637fe5c2 100644
--- a/include/linux/gfp_types.h
+++ b/include/linux/gfp_types.h
@@ -31,7 +31,7 @@ typedef unsigned int __bitwise gfp_t;
 #define ___GFP_IO		0x40u
 #define ___GFP_FS		0x80u
 #define ___GFP_ZERO		0x100u
-#define ___GFP_ATOMIC		0x200u
+/* 0x200u unused */
 #define ___GFP_DIRECT_RECLAIM	0x400u
 #define ___GFP_KSWAPD_RECLAIM	0x800u
 #define ___GFP_WRITE		0x1000u
@@ -116,11 +116,8 @@ typedef unsigned int __bitwise gfp_t;
 *
 * %__GFP_HIGH indicates that the caller is high-priority and that granting
 * the request is necessary before the system can make forward progress.
- * For example, creating an IO context to clean pages.
- *
- * %__GFP_ATOMIC indicates that the caller cannot reclaim or sleep and is
- * high priority. Users are typically interrupt handlers. This may be
- * used in conjunction with %__GFP_HIGH
+ * For example creating an IO context to clean pages and requests
+ * from atomic context.
 *
 * %__GFP_MEMALLOC allows access to all memory. This should only be used when
 * the caller guarantees the allocation will allow more memory to be freed
@@ -135,7 +132,6 @@ typedef unsigned int __bitwise gfp_t;
 * %__GFP_NOMEMALLOC is used to explicitly forbid access to emergency reserves.
 * This takes precedence over the %__GFP_MEMALLOC flag if both are set.
 */
-#define __GFP_ATOMIC	((__force gfp_t)___GFP_ATOMIC)
 #define __GFP_HIGH	((__force gfp_t)___GFP_HIGH)
 #define __GFP_MEMALLOC	((__force gfp_t)___GFP_MEMALLOC)
 #define __GFP_NOMEMALLOC ((__force gfp_t)___GFP_NOMEMALLOC)
@@ -329,7 +325,7 @@ typedef unsigned int __bitwise gfp_t;
 * version does not attempt reclaim/compaction at all and is by default used
 * in page fault path, while the non-light is used by khugepaged.
 */
-#define GFP_ATOMIC	(__GFP_HIGH|__GFP_ATOMIC|__GFP_KSWAPD_RECLAIM)
+#define GFP_ATOMIC	(__GFP_HIGH|__GFP_KSWAPD_RECLAIM)
 #define GFP_KERNEL	(__GFP_RECLAIM | __GFP_IO | __GFP_FS)
 #define GFP_KERNEL_ACCOUNT (GFP_KERNEL | __GFP_ACCOUNT)
 #define GFP_NOWAIT	(__GFP_KSWAPD_RECLAIM)
diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h
index 412b5a46374c..9db52bc4ce19 100644
--- a/include/trace/events/mmflags.h
+++ b/include/trace/events/mmflags.h
@@ -31,7 +31,6 @@
 	gfpflag_string(__GFP_HIGHMEM),		\
 	gfpflag_string(GFP_DMA32),		\
 	gfpflag_string(__GFP_HIGH),		\
-	gfpflag_string(__GFP_ATOMIC),		\
 	gfpflag_string(__GFP_IO),		\
 	gfpflag_string(__GFP_FS),		\
 	gfpflag_string(__GFP_NOWARN),		\
diff --git a/lib/test_printf.c b/lib/test_printf.c
index d34dc636b81c..46b4e6c414a3 100644
--- a/lib/test_printf.c
+++ b/lib/test_printf.c
@@ -674,17 +674,17 @@ flags(void)
 	gfp = GFP_ATOMIC|__GFP_DMA;
 	test("GFP_ATOMIC|GFP_DMA", "%pGg", &gfp);
 
-	gfp = __GFP_ATOMIC;
-	test("__GFP_ATOMIC", "%pGg", &gfp);
+	gfp = __GFP_HIGH;
+	test("__GFP_HIGH", "%pGg", &gfp);
 
 	/* Any flags not translated by the table should remain numeric */
 	gfp = ~__GFP_BITS_MASK;
 	snprintf(cmp_buffer, BUF_SIZE, "%#lx", (unsigned long) gfp);
 	test(cmp_buffer, "%pGg", &gfp);
 
-	snprintf(cmp_buffer, BUF_SIZE, "__GFP_ATOMIC|%#lx",
+	snprintf(cmp_buffer, BUF_SIZE, "__GFP_HIGH|%#lx",
 		 (unsigned long) gfp);
-	gfp |= __GFP_ATOMIC;
+	gfp |= __GFP_HIGH;
 	test(cmp_buffer, "%pGg", &gfp);
 
 	kfree(cmp_buffer);
diff --git a/mm/internal.h b/mm/internal.h
index 23a37588073a..71b1111427f3 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -24,7 +24,7 @@ struct folio_batch;
 #define GFP_RECLAIM_MASK (__GFP_RECLAIM|__GFP_HIGH|__GFP_IO|__GFP_FS|\
 			__GFP_NOWARN|__GFP_RETRY_MAYFAIL|__GFP_NOFAIL|\
 			__GFP_NORETRY|__GFP_MEMALLOC|__GFP_NOMEMALLOC|\
-			__GFP_ATOMIC|__GFP_NOLOCKDEP)
+			__GFP_NOLOCKDEP)
 
 /* The GFP flags allowed during early boot */
 #define GFP_BOOT_MASK (__GFP_BITS_MASK & ~(__GFP_RECLAIM|__GFP_IO|__GFP_FS))
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b9ae0ba0a2ab..78ffebc4798b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4087,13 +4087,14 @@ static inline bool zone_watermark_fast(struct zone *z, unsigned int order,
 	if (__zone_watermark_ok(z, order, mark, highest_zoneidx, alloc_flags,
 					free_pages))
 		return true;
+
 	/*
-	 * Ignore watermark boosting for GFP_ATOMIC order-0 allocations
+	 * Ignore watermark boosting for __GFP_HIGH order-0 allocations
 	 * when checking the min watermark. The min watermark is the
 	 * point where boosting is ignored so that kswapd is woken up
 	 * when below the low watermark.
	 */
-	if (unlikely(!order && (gfp_mask & __GFP_ATOMIC) && z->watermark_boost
+	if (unlikely(!order && (alloc_flags & ALLOC_MIN_RESERVE) && z->watermark_boost
 		     && ((alloc_flags & ALLOC_WMARK_MASK) == WMARK_MIN))) {
 		mark = z->_watermark[WMARK_MIN];
 		return __zone_watermark_ok(z, order, mark, highest_zoneidx,
@@ -5058,14 +5059,6 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	unsigned int zonelist_iter_cookie;
 	int reserve_flags;
 
-	/*
-	 * We also sanity check to catch abuse of atomic reserves being used by
-	 * callers that are not in atomic context.
-	 */
-	if (WARN_ON_ONCE((gfp_mask & (__GFP_ATOMIC|__GFP_DIRECT_RECLAIM)) ==
-			 (__GFP_ATOMIC|__GFP_DIRECT_RECLAIM)))
-		gfp_mask &= ~__GFP_ATOMIC;
-
 restart:
 	compaction_retries = 0;
 	no_progress_loops = 0;
diff --git a/tools/perf/builtin-kmem.c b/tools/perf/builtin-kmem.c
index e20656c431a4..173d407dce92 100644
--- a/tools/perf/builtin-kmem.c
+++ b/tools/perf/builtin-kmem.c
@@ -641,7 +641,6 @@ static const struct {
 	{ "__GFP_HIGHMEM",		"HM" },
 	{ "GFP_DMA32",			"D32" },
 	{ "__GFP_HIGH",			"H" },
-	{ "__GFP_ATOMIC",		"_A" },
 	{ "__GFP_IO",			"I" },
 	{ "__GFP_FS",			"F" },
 	{ "__GFP_NOWARN",		"NWR" },
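For reference, the helper the tegra-smmu conversion relies on already exists
in include/linux/gfp.h and tests __GFP_DIRECT_RECLAIM rather than the
removed flag (simplified reproduction):

static inline bool gfpflags_allow_blocking(const gfp_t gfp_flags)
{
	return !!(gfp_flags & __GFP_DIRECT_RECLAIM);
}

/*
 * GFP_KERNEL includes __GFP_DIRECT_RECLAIM, so blocking is allowed;
 * GFP_ATOMIC and GFP_NOWAIT do not, so the caller must not sleep.
 */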