From patchwork Wed Feb 21 11:43:58 2024
X-Patchwork-Submitter: Vlastimil Babka
X-Patchwork-Id: 13565465
From: Vlastimil Babka
To: Andrew Morton, svenva@chromium.org
Cc: bgeffon@google.com, cujomalainey@chromium.org, kramasub@chromium.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-sound@vger.kernel.org,
    perex@perex.cz, stable@vger.kernel.org, tiwai@suse.com, tiwai@suse.de,
    vbabka@suse.cz, Michal Hocko, Mel Gorman
Subject: [PATCH] mm, vmscan: prevent infinite loop for costly GFP_NOIO | __GFP_RETRY_MAYFAIL allocations
Date: Wed, 21 Feb 2024 12:43:58 +0100
Message-ID: <20240221114357.13655-2-vbabka@suse.cz>
X-Mailer: git-send-email 2.43.1
In-Reply-To:
References:
MIME-Version: 1.0
Sven reports an infinite loop in __alloc_pages_slowpath() for costly order
__GFP_RETRY_MAYFAIL allocations that are also GFP_NOIO. Such a combination
can happen in a suspend/resume context where a GFP_KERNEL allocation can
have __GFP_IO masked out via gfp_allowed_mask.

Quoting Sven:

1. try to do a "costly" allocation (order > PAGE_ALLOC_COSTLY_ORDER)
   with __GFP_RETRY_MAYFAIL set.

2. page alloc's __alloc_pages_slowpath tries to get a page from the
   freelist. This fails because there is nothing free of that costly
   order.

3. page alloc tries to reclaim by calling __alloc_pages_direct_reclaim,
   which bails out because a zone is ready to be compacted; it pretends
   to have made a single page of progress.

4. page alloc tries to compact, but this always bails out early because
   __GFP_IO is not set (it's not passed by the snd allocator, and even
   if it were, we are suspending so the __GFP_IO flag would be cleared
   anyway).

5. page alloc believes reclaim progress was made (because of the
   pretense in item 3) and so it checks whether it should retry
   compaction. The compaction retry logic thinks it should try again,
   because:
    a) reclaim is needed because of the early bail-out in item 4
    b) a zonelist is suitable for compaction

6. goto 2. indefinite stall.

(end quote)

The immediate root cause is that the COMPACT_SKIPPED returned from
__alloc_pages_direct_compact() (step 4) due to the lack of __GFP_IO is
mistaken for an indication of a lack of order-0 pages, which
should_compact_retry() in step 5 then treats as a reason to retry,
before even reaching the code that increments and limits the number of
retries.

There are, however, other places that wrongly assume that compaction can
happen while we lack __GFP_IO.

To fix this, introduce gfp_compaction_allowed() to abstract the __GFP_IO
evaluation and switch the open-coded test in try_to_compact_pages() to
use it.
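To make the interplay easier to see, here is a minimal standalone model of
the loop above. This is not the kernel code: the helpers and flag bits are
invented stand-ins that only capture the decisions made in steps 2-6.

/* Simplified model only -- all names and bit values below are made up. */
#include <stdbool.h>
#include <stdio.h>

#define __GFP_IO		(1u << 0)
#define __GFP_RETRY_MAYFAIL	(1u << 1)

static bool get_page_from_freelist(void) { return false; }	/* step 2: nothing free */
static long direct_reclaim(void)	 { return 1; }		/* step 3: pretends progress */
static bool direct_compact(unsigned int gfp) { return gfp & __GFP_IO; }	/* step 4 */
static bool should_compact_retry(bool compacted) { return !compacted; }	/* step 5 */

int main(void)
{
	unsigned int gfp = __GFP_RETRY_MAYFAIL;	/* costly, GFP_NOIO-like request */
	int spins = 0;

	while (!get_page_from_freelist()) {			/* step 2 */
		long progress = direct_reclaim();		/* step 3 */
		bool compacted = direct_compact(gfp);		/* step 4 */

		if (progress > 0 && should_compact_retry(compacted)) {
			if (++spins > 5) {	/* cap the demo; the real loop never exits */
				puts("still retrying -> the reported indefinite stall");
				return 0;
			}
			continue;				/* step 6: goto 2 */
		}
		break;	/* what __GFP_RETRY_MAYFAIL should eventually allow */
	}
	return 0;
}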
Also use the new helper in:

- compaction_ready(), which will make reclaim not bail out in step 3, so
  there's at least one attempt to actually reclaim, even if chances are
  small for a costly order

- in_reclaim_compaction(), which will make should_continue_reclaim()
  return false so we don't over-reclaim unnecessarily

- __alloc_pages_slowpath(), to set a local variable can_compact, which is
  then used to avoid retrying reclaim/compaction for costly allocations
  (step 5) if we can't compact, and also to skip the early compaction
  attempt that we do in some cases

Reported-by: Sven van Ashbrook
Closes: https://lore.kernel.org/all/CAG-rBihs_xMKb3wrMO1%2B-%2Bp4fowP9oy1pa_OTkfxBzPUVOZF%2Bg@mail.gmail.com/
Fixes: 3250845d0526 ("Revert "mm, oom: prevent premature OOM killer invocation for high order request"")
Cc:
Signed-off-by: Vlastimil Babka
Tested-by: Karthikeyan Ramasubramanian
---
 include/linux/gfp.h |  9 +++++++++
 mm/compaction.c     |  7 +------
 mm/page_alloc.c     | 10 ++++++----
 mm/vmscan.c         |  5 ++++-
 4 files changed, 20 insertions(+), 11 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index de292a007138..e2a916cf29c4 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -353,6 +353,15 @@ static inline bool gfp_has_io_fs(gfp_t gfp)
 	return (gfp & (__GFP_IO | __GFP_FS)) == (__GFP_IO | __GFP_FS);
 }
 
+/*
+ * Check if the gfp flags allow compaction - GFP_NOIO is a really
+ * tricky context because the migration might require IO.
+ */
+static inline bool gfp_compaction_allowed(gfp_t gfp_mask)
+{
+	return IS_ENABLED(CONFIG_COMPACTION) && (gfp_mask & __GFP_IO);
+}
+
 extern gfp_t vma_thp_gfp_mask(struct vm_area_struct *vma);
 
 #ifdef CONFIG_CONTIG_ALLOC
diff --git a/mm/compaction.c b/mm/compaction.c
index 4add68d40e8d..b961db601df4 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -2723,16 +2723,11 @@ enum compact_result try_to_compact_pages(gfp_t gfp_mask, unsigned int order,
 		unsigned int alloc_flags, const struct alloc_context *ac,
 		enum compact_priority prio, struct page **capture)
 {
-	int may_perform_io = (__force int)(gfp_mask & __GFP_IO);
 	struct zoneref *z;
 	struct zone *zone;
 	enum compact_result rc = COMPACT_SKIPPED;
 
-	/*
-	 * Check if the GFP flags allow compaction - GFP_NOIO is really
-	 * tricky context because the migration might require IO
-	 */
-	if (!may_perform_io)
+	if (!gfp_compaction_allowed(gfp_mask))
 		return COMPACT_SKIPPED;
 
 	trace_mm_compaction_try_to_compact_pages(order, gfp_mask, prio);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 150d4f23b010..a663202045dc 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4041,6 +4041,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 						struct alloc_context *ac)
 {
 	bool can_direct_reclaim = gfp_mask & __GFP_DIRECT_RECLAIM;
+	bool can_compact = gfp_compaction_allowed(gfp_mask);
 	const bool costly_order = order > PAGE_ALLOC_COSTLY_ORDER;
 	struct page *page = NULL;
 	unsigned int alloc_flags;
@@ -4111,7 +4112,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	 * Don't try this for allocations that are allowed to ignore
 	 * watermarks, as the ALLOC_NO_WATERMARKS attempt didn't yet happen.
 	 */
-	if (can_direct_reclaim &&
+	if (can_direct_reclaim && can_compact &&
 			(costly_order ||
 			   (order > 0 && ac->migratetype != MIGRATE_MOVABLE))
 			&& !gfp_pfmemalloc_allowed(gfp_mask)) {
@@ -4209,9 +4210,10 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 
 	/*
 	 * Do not retry costly high order allocations unless they are
-	 * __GFP_RETRY_MAYFAIL
+	 * __GFP_RETRY_MAYFAIL and we can compact
 	 */
-	if (costly_order && !(gfp_mask & __GFP_RETRY_MAYFAIL))
+	if (costly_order && (!can_compact ||
+			     !(gfp_mask & __GFP_RETRY_MAYFAIL)))
 		goto nopage;
 
 	if (should_reclaim_retry(gfp_mask, order, ac, alloc_flags,
@@ -4224,7 +4226,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	 * implementation of the compaction depends on the sufficient amount
 	 * of free memory (see __compaction_suitable)
 	 */
-	if (did_some_progress > 0 &&
+	if (did_some_progress > 0 && can_compact &&
 			should_compact_retry(ac, order, alloc_flags,
 				compact_result, &compact_priority,
 				&compaction_retries))
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 4f9c854ce6cc..4255619a1a31 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -5753,7 +5753,7 @@ static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
 /* Use reclaim/compaction for costly allocs or under memory pressure */
 static bool in_reclaim_compaction(struct scan_control *sc)
 {
-	if (IS_ENABLED(CONFIG_COMPACTION) && sc->order &&
+	if (gfp_compaction_allowed(sc->gfp_mask) && sc->order &&
 			(sc->order > PAGE_ALLOC_COSTLY_ORDER ||
 			 sc->priority < DEF_PRIORITY - 2))
 		return true;
@@ -5998,6 +5998,9 @@ static inline bool compaction_ready(struct zone *zone, struct scan_control *sc)
 {
 	unsigned long watermark;
 
+	if (!gfp_compaction_allowed(sc->gfp_mask))
+		return false;
+
 	/* Allocation can already succeed, nothing to do */
 	if (zone_watermark_ok(zone, sc->order, min_wmark_pages(zone),
 			      sc->reclaim_idx, 0))
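As an aside for reviewers less familiar with the suspend angle: the way a
GFP_KERNEL allocation ends up without __GFP_IO can be modelled outside the
kernel roughly as below. This is only an illustrative sketch, not kernel
code; the flag values are made up and pm_restrict() merely mimics the PM
core restricting gfp_allowed_mask around suspend.

/* Illustrative only -- not kernel code; flag bits are arbitrary. */
#include <stdbool.h>
#include <stdio.h>

#define __GFP_IO		(1u << 0)
#define __GFP_FS		(1u << 1)
#define __GFP_DIRECT_RECLAIM	(1u << 2)
#define GFP_KERNEL		(__GFP_IO | __GFP_FS | __GFP_DIRECT_RECLAIM)

static unsigned int gfp_allowed_mask = ~0u;

/* Mimics the PM core dropping IO/FS from gfp_allowed_mask around suspend. */
static void pm_restrict(void)
{
	gfp_allowed_mask &= ~(__GFP_IO | __GFP_FS);
}

/* Same test as the new helper, minus the CONFIG_COMPACTION check. */
static bool gfp_compaction_allowed(unsigned int gfp_mask)
{
	return gfp_mask & __GFP_IO;
}

int main(void)
{
	unsigned int gfp = GFP_KERNEL;

	printf("normal:     compaction allowed = %d\n",
	       gfp_compaction_allowed(gfp & gfp_allowed_mask));
	pm_restrict();
	printf("suspending: compaction allowed = %d\n",
	       gfp_compaction_allowed(gfp & gfp_allowed_mask));
	return 0;
}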