From patchwork Fri Feb 24 14:11:44 2023
X-Patchwork-Submitter: "Huang, Ying" <ying.huang@intel.com>
X-Patchwork-Id: 13151288
From: Huang Ying <ying.huang@intel.com>
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying,
    Hugh Dickins, "Xu, Pengfei", Christoph Hellwig, Stefan Roesch,
    Tejun Heo, Xin Hao, Zi Yan, Yang Shi, Baolin Wang, Matthew Wilcox,
    Mike Kravetz
Subject: [PATCH 2/3] migrate_pages: move split folios processing out of migrate_pages_batch()
Date: Fri, 24 Feb 2023 22:11:44 +0800
Message-Id: <20230224141145.96814-3-ying.huang@intel.com>
X-Mailer: git-send-email 2.39.1
In-Reply-To: <20230224141145.96814-1-ying.huang@intel.com>
References: <20230224141145.96814-1-ying.huang@intel.com>
MIME-Version: 1.0
To simplify the code logic and reduce the lines of code.

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Hugh Dickins
Cc: "Xu, Pengfei"
Cc: Christoph Hellwig
Cc: Stefan Roesch
Cc: Tejun Heo
Cc: Xin Hao
Cc: Zi Yan
Cc: Yang Shi
Cc: Baolin Wang
Cc: Matthew Wilcox
Cc: Mike Kravetz
Reviewed-by: Baolin Wang
---
 mm/migrate.c | 76 ++++++++++++++++++----------------------------------
 1 file changed, 26 insertions(+), 50 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 7ac37dbbf307..91198b487e49 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1605,9 +1605,10 @@ static int migrate_hugetlbs(struct list_head *from, new_page_t get_new_page,
 static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
 		free_page_t put_new_page, unsigned long private,
 		enum migrate_mode mode, int reason, struct list_head *ret_folios,
-		struct migrate_pages_stats *stats)
+		struct list_head *split_folios, struct migrate_pages_stats *stats,
+		int nr_pass)
 {
-	int retry;
+	int retry = 1;
 	int large_retry = 1;
 	int thp_retry = 1;
 	int nr_failed = 0;
@@ -1617,19 +1618,12 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
 	bool is_large = false;
 	bool is_thp = false;
 	struct folio *folio, *folio2, *dst = NULL, *dst2;
-	int rc, rc_saved, nr_pages;
-	LIST_HEAD(split_folios);
+	int rc, rc_saved = 0, nr_pages;
 	LIST_HEAD(unmap_folios);
 	LIST_HEAD(dst_folios);
 	bool nosplit = (reason == MR_NUMA_MISPLACED);
-	bool no_split_folio_counting = false;
 
-retry:
-	rc_saved = 0;
-	retry = 1;
-	for (pass = 0;
-	     pass < NR_MAX_MIGRATE_PAGES_RETRY && (retry || large_retry);
-	     pass++) {
+	for (pass = 0; pass < nr_pass && (retry || large_retry); pass++) {
 		retry = 0;
 		large_retry = 0;
 		thp_retry = 0;
@@ -1660,7 +1654,7 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
 			if (!thp_migration_supported() && is_thp) {
 				nr_large_failed++;
 				stats->nr_thp_failed++;
-				if (!try_split_folio(folio, &split_folios)) {
+				if (!try_split_folio(folio, split_folios)) {
 					stats->nr_thp_split++;
 					continue;
 				}
@@ -1692,7 +1686,7 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
 				stats->nr_thp_failed += is_thp;
 				/* Large folio NUMA faulting doesn't split to retry. */
 				if (!nosplit) {
-					int ret = try_split_folio(folio, &split_folios);
+					int ret = try_split_folio(folio, split_folios);
 
 					if (!ret) {
 						stats->nr_thp_split += is_thp;
@@ -1709,18 +1703,11 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
 						break;
 					}
 				}
-			} else if (!no_split_folio_counting) {
+			} else {
 				nr_failed++;
 			}
 
 			stats->nr_failed_pages += nr_pages + nr_retry_pages;
-			/*
-			 * There might be some split folios of fail-to-migrate large
-			 * folios left in split_folios list. Move them to ret_folios
-			 * list so that they could be put back to the right list by
-			 * the caller otherwise the folio refcnt will be leaked.
-			 */
-			list_splice_init(&split_folios, ret_folios);
 			/* nr_failed isn't updated for not used */
 			nr_large_failed += large_retry;
 			stats->nr_thp_failed += thp_retry;
@@ -1733,7 +1720,7 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
 				if (is_large) {
 					large_retry++;
 					thp_retry += is_thp;
-				} else if (!no_split_folio_counting) {
+				} else {
 					retry++;
 				}
 				nr_retry_pages += nr_pages;
@@ -1756,7 +1743,7 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
 			if (is_large) {
 				nr_large_failed++;
 				stats->nr_thp_failed += is_thp;
-			} else if (!no_split_folio_counting) {
+			} else {
 				nr_failed++;
 			}
 
@@ -1774,9 +1761,7 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
 	try_to_unmap_flush();
 
 	retry = 1;
-	for (pass = 0;
-	     pass < NR_MAX_MIGRATE_PAGES_RETRY && (retry || large_retry);
-	     pass++) {
+	for (pass = 0; pass < nr_pass && (retry || large_retry); pass++) {
 		retry = 0;
 		large_retry = 0;
 		thp_retry = 0;
@@ -1805,7 +1790,7 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
 				if (is_large) {
 					large_retry++;
 					thp_retry += is_thp;
-				} else if (!no_split_folio_counting) {
+				} else {
 					retry++;
 				}
 				nr_retry_pages += nr_pages;
@@ -1818,7 +1803,7 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
 			if (is_large) {
 				nr_large_failed++;
 				stats->nr_thp_failed += is_thp;
-			} else if (!no_split_folio_counting) {
+			} else {
 				nr_failed++;
 			}
 
@@ -1855,27 +1840,6 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
 		dst2 = list_next_entry(dst, lru);
 	}
 
-	/*
-	 * Try to migrate split folios of fail-to-migrate large folios, no
-	 * nr_failed counting in this round, since all split folios of a
-	 * large folio is counted as 1 failure in the first round.
-	 */
-	if (rc >= 0 && !list_empty(&split_folios)) {
-		/*
-		 * Move non-migrated folios (after NR_MAX_MIGRATE_PAGES_RETRY
-		 * retries) to ret_folios to avoid migrating them again.
-		 */
-		list_splice_init(from, ret_folios);
-		list_splice_init(&split_folios, from);
-		/*
-		 * Force async mode to avoid to wait lock or bit when we have
-		 * locked more than one folios.
-		 */
-		mode = MIGRATE_ASYNC;
-		no_split_folio_counting = true;
-		goto retry;
-	}
-
 	return rc;
 }
 
@@ -1914,6 +1878,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 	struct folio *folio, *folio2;
 	LIST_HEAD(folios);
 	LIST_HEAD(ret_folios);
+	LIST_HEAD(split_folios);
 	struct migrate_pages_stats stats;
 
 	trace_mm_migrate_pages_start(mode, reason);
@@ -1947,12 +1912,23 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 	else
 		list_splice_init(from, &folios);
 	rc = migrate_pages_batch(&folios, get_new_page, put_new_page, private,
-				 mode, reason, &ret_folios, &stats);
+				 mode, reason, &ret_folios, &split_folios, &stats,
+				 NR_MAX_MIGRATE_PAGES_RETRY);
 	list_splice_tail_init(&folios, &ret_folios);
 	if (rc < 0) {
 		rc_gather = rc;
+		list_splice_tail(&split_folios, &ret_folios);
 		goto out;
 	}
+	if (!list_empty(&split_folios)) {
+		/*
+		 * Failure isn't counted since all split folios of a large folio
+		 * is counted as 1 failure already.
+		 */
+		migrate_pages_batch(&split_folios, get_new_page, put_new_page, private,
+				    MIGRATE_ASYNC, reason, &ret_folios, NULL, &stats, 1);
+		list_splice_tail_init(&split_folios, &ret_folios);
+	}
 	rc_gather += rc;
 	if (!list_empty(from))
 		goto again;