From patchwork Thu Jan 16 01:33:34 2025
X-Patchwork-Submitter: Ge Yang <yangge1116@126.com>
X-Patchwork-Id: 13941137
From: yangge1116@126.com
To: akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, 21cnbao@gmail.com,
    david@redhat.com, baolin.wang@linux.alibaba.com, hannes@cmpxchg.org,
    vbabka@suse.cz,
    liuzixing@hygon.cn, yangge <yangge1116@126.com>
Subject: [PATCH V2] mm: compaction: use the actual allocation context to
 determine the watermarks for costly order during async memory compaction
Date: Thu, 16 Jan 2025 09:33:34 +0800
Message-Id: <1736991214-29069-1-git-send-email-yangge1116@126.com>
X-Mailer: git-send-email 2.7.4

From: yangge <yangge1116@126.com>

There are 4 NUMA nodes on my machine, and each NUMA node has 32GB of
memory. I have configured 16GB of CMA memory on each NUMA node, and
starting a 32GB virtual machine with device passthrough is extremely
slow, taking almost an hour.

Long-term GUP cannot allocate memory from the CMA area, so at most 16GB
of non-CMA memory on a NUMA node can be used as virtual machine memory.
However, a NUMA node still has 16GB of free CMA memory, which is enough
to pass the order-0 watermark check, so __compaction_suitable()
consistently returns true.

For costly allocations, if __compaction_suitable() always returns true,
__alloc_pages_slowpath() fails to bail out at the appropriate point.
This prevents a timely fallback to allocating memory on other nodes,
ultimately resulting in excessively long virtual machine startup times.
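To make the failure mode concrete, here is a minimal user-space sketch
(not kernel code; the watermark and free-page counts are made-up
illustrative values) of how counting free CMA pages lets the order-0
watermark check pass even though the non-CMA part of the zone is
exhausted:

	#include <stdbool.h>
	#include <stdio.h>

	/* Illustrative numbers, in 4KiB pages: the non-CMA part of the
	 * node is nearly exhausted while 16GB of CMA memory is free. */
	#define LOW_WMARK_PAGES     8192UL
	#define FREE_NON_CMA_PAGES  1024UL
	#define FREE_CMA_PAGES      4194304UL	/* 16GB */

	/* Simplified stand-in for the order-0 watermark check: pass when
	 * the usable free pages exceed the watermark. */
	static bool watermark_ok(unsigned long free_pages, unsigned long mark)
	{
		return free_pages > mark;
	}

	int main(void)
	{
		unsigned long total_free = FREE_NON_CMA_PAGES + FREE_CMA_PAGES;

		/* Movable context (ALLOC_CMA set): free CMA pages count, the
		 * check passes, and __compaction_suitable() keeps returning
		 * true even though compaction cannot help the allocation. */
		printf("counting CMA:  %s\n",
		       watermark_ok(total_free, LOW_WMARK_PAGES) ? "pass" : "fail");

		/* Unmovable context (no ALLOC_CMA): free CMA pages are
		 * subtracted, the check fails, compaction is skipped, and the
		 * allocation can fall back to another node promptly. */
		printf("excluding CMA: %s\n",
		       watermark_ok(FREE_NON_CMA_PAGES, LOW_WMARK_PAGES) ? "pass" : "fail");

		return 0;
	}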
Call trace:
__alloc_pages_slowpath
    if (compact_result == COMPACT_SKIPPED ||
        compact_result == COMPACT_DEFERRED)
        goto nopage; // should exit __alloc_pages_slowpath() from here

We could use the real unmovable allocation context to have
__zone_watermark_unusable_free() subtract CMA pages, and thus we won't
pass the order-0 check anymore once the non-CMA part is exhausted. There
is some risk that in some different scenario the compaction could in
fact migrate pages from the exhausted non-CMA part of the zone to the
CMA part and succeed, and we'll skip it instead. But only __GFP_NORETRY
allocations should be affected in the immediate "goto nopage" when
compaction is skipped; others will attempt with DEF_COMPACT_PRIORITY
anyway and won't fail without trying to compact-migrate the non-CMA
pageblocks into CMA pageblocks first, so it should be fine.

After this fix, it only takes a few tens of seconds to start a 32GB
virtual machine with device passthrough functionality.

Link: https://lore.kernel.org/lkml/1736335854-548-1-git-send-email-yangge1116@126.com/
Signed-off-by: yangge <yangge1116@126.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
---
V2:
- update code and commit message as suggested by Vlastimil

 mm/compaction.c | 29 +++++++++++++++++++++++++----
 1 file changed, 25 insertions(+), 4 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 07bd227..3de7b67 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -2490,7 +2490,8 @@ bool compaction_zonelist_suitable(struct alloc_context *ac, int order,
  */
 static enum compact_result
 compaction_suit_allocation_order(struct zone *zone, unsigned int order,
-				 int highest_zoneidx, unsigned int alloc_flags)
+				 int highest_zoneidx, unsigned int alloc_flags,
+				 bool async)
 {
 	unsigned long watermark;
 
@@ -2499,6 +2500,23 @@ compaction_suit_allocation_order(struct zone *zone, unsigned int order,
 				   alloc_flags))
 		return COMPACT_SUCCESS;
 
+	/*
+	 * For unmovable allocations (without ALLOC_CMA), check if there is enough
+	 * free memory in the non-CMA pageblocks. Otherwise compaction could form
+	 * the high-order page in CMA pageblocks, which would not help the
+	 * allocation to succeed. However, limit the check to costly order async
+	 * compaction (such as opportunistic THP attempts) because there is the
+	 * possibility that compaction would migrate pages from non-CMA to CMA
+	 * pageblock.
+	 */
+	if (order > PAGE_ALLOC_COSTLY_ORDER && async &&
+	    !(alloc_flags & ALLOC_CMA)) {
+		watermark = low_wmark_pages(zone) + compact_gap(order);
+		if (!__zone_watermark_ok(zone, 0, watermark, highest_zoneidx,
+					 0, zone_page_state(zone, NR_FREE_PAGES)))
+			return COMPACT_SKIPPED;
+	}
+
 	if (!compaction_suitable(zone, order, highest_zoneidx))
 		return COMPACT_SKIPPED;
 
@@ -2534,7 +2552,8 @@ compact_zone(struct compact_control *cc, struct capture_control *capc)
 	if (!is_via_compact_memory(cc->order)) {
 		ret = compaction_suit_allocation_order(cc->zone, cc->order,
 						       cc->highest_zoneidx,
-						       cc->alloc_flags);
+						       cc->alloc_flags,
+						       cc->mode == MIGRATE_ASYNC);
 		if (ret != COMPACT_CONTINUE)
 			return ret;
 	}
@@ -3037,7 +3056,8 @@ static bool kcompactd_node_suitable(pg_data_t *pgdat)
 
 		ret = compaction_suit_allocation_order(zone,
 				pgdat->kcompactd_max_order,
-				highest_zoneidx, ALLOC_WMARK_MIN);
+				highest_zoneidx, ALLOC_WMARK_MIN,
+				false);
 		if (ret == COMPACT_CONTINUE)
 			return true;
 	}
@@ -3078,7 +3098,8 @@ static void kcompactd_do_work(pg_data_t *pgdat)
 			continue;
 
 		ret = compaction_suit_allocation_order(zone,
-				cc.order, zoneid, ALLOC_WMARK_MIN);
+				cc.order, zoneid, ALLOC_WMARK_MIN,
+				false);
 		if (ret != COMPACT_CONTINUE)
 			continue;
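For reference on the numbers involved in the new check: compact_gap()
returns 2UL << order (roughly twice the requested allocation size as
headroom), so for an order-9 request the guard demands the low
watermark plus 1024 extra free non-CMA pages before async compaction
is allowed to continue. A small standalone sketch of that arithmetic,
with an assumed low-watermark value:

	#include <stdio.h>

	/* Mirrors compact_gap() from include/linux/compaction.h:
	 * compaction wants about twice the allocation size free, since
	 * migration sources and targets are isolated at the same time. */
	static unsigned long compact_gap(unsigned int order)
	{
		return 2UL << order;
	}

	int main(void)
	{
		unsigned long low_wmark = 8192;	/* assumed, in 4KiB pages */
		unsigned int order = 9;		/* costly order, e.g. a THP attempt */

		/* The patched check: an async costly-order compaction attempt
		 * without ALLOC_CMA must see this many free non-CMA pages,
		 * otherwise it returns COMPACT_SKIPPED. */
		unsigned long watermark = low_wmark + compact_gap(order);

		printf("order-%u needs %lu free non-CMA pages (~%lu MiB)\n",
		       order, watermark, watermark * 4 / 1024);
		return 0;
	}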