From patchwork Fri Feb 24 14:11:45 2023
X-Patchwork-Submitter: "Huang, Ying"
X-Patchwork-Id: 13151289
From: Huang Ying
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying, Hugh Dickins, "Xu, Pengfei", Christoph Hellwig, Stefan Roesch, Tejun Heo, Xin Hao, Zi Yan, Yang Shi, Baolin Wang, Matthew Wilcox, Mike Kravetz
Subject: [PATCH 3/3] migrate_pages: try migrate in batch asynchronously firstly
Date: Fri, 24 Feb 2023 22:11:45 +0800
Message-Id: <20230224141145.96814-4-ying.huang@intel.com>
X-Mailer: git-send-email 2.39.1
In-Reply-To: <20230224141145.96814-1-ying.huang@intel.com>
References: <20230224141145.96814-1-ying.huang@intel.com>
When we have locked more than one folio, we cannot wait for a lock or bit (e.g., page lock, buffer head lock, writeback bit) synchronously; otherwise, deadlock may be triggered. This makes it hard to batch synchronous migration directly.

This patch re-enables batching for synchronous migration by first trying to migrate the folios in batch asynchronously. Any folios that fail to be migrated asynchronously are then migrated synchronously, one by one. Tests show that this effectively restores the TLB-flush batching performance for synchronous migration.
Signed-off-by: "Huang, Ying"
Cc: Hugh Dickins
Cc: "Xu, Pengfei"
Cc: Christoph Hellwig
Cc: Stefan Roesch
Cc: Tejun Heo
Cc: Xin Hao
Cc: Zi Yan
Cc: Yang Shi
Cc: Baolin Wang
Cc: Matthew Wilcox
Cc: Mike Kravetz
Tested-by: Hugh Dickins
Reviewed-by: Baolin Wang
---
 mm/migrate.c | 65 ++++++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 55 insertions(+), 10 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 91198b487e49..c17ce5ee8d92 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1843,6 +1843,51 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
 	return rc;
 }
 
+static int migrate_pages_sync(struct list_head *from, new_page_t get_new_page,
+		free_page_t put_new_page, unsigned long private,
+		enum migrate_mode mode, int reason, struct list_head *ret_folios,
+		struct list_head *split_folios, struct migrate_pages_stats *stats)
+{
+	int rc, nr_failed = 0;
+	LIST_HEAD(folios);
+	struct migrate_pages_stats astats;
+
+	memset(&astats, 0, sizeof(astats));
+	/* Try to migrate in batch with MIGRATE_ASYNC mode firstly */
+	rc = migrate_pages_batch(from, get_new_page, put_new_page, private, MIGRATE_ASYNC,
+				 reason, &folios, split_folios, &astats,
+				 NR_MAX_MIGRATE_PAGES_RETRY);
+	stats->nr_succeeded += astats.nr_succeeded;
+	stats->nr_thp_succeeded += astats.nr_thp_succeeded;
+	stats->nr_thp_split += astats.nr_thp_split;
+	if (rc < 0) {
+		stats->nr_failed_pages += astats.nr_failed_pages;
+		stats->nr_thp_failed += astats.nr_thp_failed;
+		list_splice_tail(&folios, ret_folios);
+		return rc;
+	}
+	stats->nr_thp_failed += astats.nr_thp_split;
+	nr_failed += astats.nr_thp_split;
+	/*
+	 * Fall back to migrate all failed folios one by one synchronously. All
+	 * failed folios except split THPs will be retried, so their failure
+	 * isn't counted
+	 */
+	list_splice_tail_init(&folios, from);
+	while (!list_empty(from)) {
+		list_move(from->next, &folios);
+		rc = migrate_pages_batch(&folios, get_new_page, put_new_page,
+					 private, mode, reason, ret_folios,
+					 split_folios, stats, NR_MAX_MIGRATE_PAGES_RETRY);
+		list_splice_tail_init(&folios, ret_folios);
+		if (rc < 0)
+			return rc;
+		nr_failed += rc;
+	}
+
+	return nr_failed;
+}
+
 /*
  * migrate_pages - migrate the folios specified in a list, to the free folios
  * supplied as the target for the page migration
@@ -1874,7 +1919,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 		enum migrate_mode mode, int reason, unsigned int *ret_succeeded)
 {
 	int rc, rc_gather;
-	int nr_pages, batch;
+	int nr_pages;
 	struct folio *folio, *folio2;
 	LIST_HEAD(folios);
 	LIST_HEAD(ret_folios);
@@ -1890,10 +1935,6 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 	if (rc_gather < 0)
 		goto out;
 
-	if (mode == MIGRATE_ASYNC)
-		batch = NR_MAX_BATCHED_MIGRATION;
-	else
-		batch = 1;
 again:
 	nr_pages = 0;
 	list_for_each_entry_safe(folio, folio2, from, lru) {
@@ -1904,16 +1945,20 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 		}
 
 		nr_pages += folio_nr_pages(folio);
-		if (nr_pages >= batch)
+		if (nr_pages >= NR_MAX_BATCHED_MIGRATION)
 			break;
 	}
-	if (nr_pages >= batch)
+	if (nr_pages >= NR_MAX_BATCHED_MIGRATION)
 		list_cut_before(&folios, from, &folio2->lru);
 	else
 		list_splice_init(from, &folios);
-	rc = migrate_pages_batch(&folios, get_new_page, put_new_page, private,
-				 mode, reason, &ret_folios, &split_folios, &stats,
-				 NR_MAX_MIGRATE_PAGES_RETRY);
+	if (mode == MIGRATE_ASYNC)
+		rc = migrate_pages_batch(&folios, get_new_page, put_new_page, private,
+					 mode, reason, &ret_folios, &split_folios, &stats,
+					 NR_MAX_MIGRATE_PAGES_RETRY);
+	else
+		rc = migrate_pages_sync(&folios, get_new_page, put_new_page, private,
+					mode, reason, &ret_folios, &split_folios, &stats);
 	list_splice_tail_init(&folios, &ret_folios);
 	if (rc < 0) {
 		rc_gather = rc;