From patchwork Tue Dec 27 00:28:55 2022
X-Patchwork-Submitter: "Huang, Ying"
From: Huang Ying
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying, Zi Yan, Yang Shi, Baolin Wang, Oscar Salvador, Matthew Wilcox, Bharata B Rao, Alistair Popple, haoxin
Subject: [PATCH 4/8] migrate_pages: split unmap_and_move() to _unmap() and _move()
Date: Tue, 27 Dec 2022 08:28:55 +0800
Message-Id: <20221227002859.27740-5-ying.huang@intel.com>
In-Reply-To: <20221227002859.27740-1-ying.huang@intel.com>
References: <20221227002859.27740-1-ying.huang@intel.com>
This is a preparation patch to batch the folio unmapping and moving.

In this patch, unmap_and_move() is split into migrate_folio_unmap() and migrate_folio_move(), so that _unmap() and _move() can be batched in different loops later. To pass some information between unmap and move, the originally unused dst->mapping and dst->private fields are used.

Signed-off-by: "Huang, Ying"
Cc: Zi Yan
Cc: Yang Shi
Cc: Baolin Wang
Cc: Oscar Salvador
Cc: Matthew Wilcox
Cc: Bharata B Rao
Cc: Alistair Popple
Cc: haoxin
---
 include/linux/migrate.h |   1 +
 mm/migrate.c            | 162 +++++++++++++++++++++++++++++-----------
 2 files changed, 121 insertions(+), 42 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 3ef77f52a4f0..7376074f2e1e 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -18,6 +18,7 @@ struct migration_target_control;
  * - zero on page migration success;
  */
 #define MIGRATEPAGE_SUCCESS		0
+#define MIGRATEPAGE_UNMAP		1
 
 /**
  * struct movable_operations - Driver page migration
diff --git a/mm/migrate.c b/mm/migrate.c
index 97ea0737ab2b..e2383b430932 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1009,11 +1009,29 @@ static int move_to_new_folio(struct folio *dst, struct folio *src,
 	return rc;
 }
 
-static int __unmap_and_move(struct folio *src, struct folio *dst,
+static void __migrate_folio_record(struct folio *dst,
+				   unsigned long page_was_mapped,
+				   struct anon_vma *anon_vma)
+{
+	dst->mapping = (struct address_space *)anon_vma;
+	dst->private = (void *)page_was_mapped;
+}
+
+static void __migrate_folio_extract(struct folio *dst,
+				   int *page_was_mappedp,
+				   struct anon_vma **anon_vmap)
+{
+	*anon_vmap = (struct anon_vma *)dst->mapping;
+	*page_was_mappedp = (unsigned long)dst->private;
+	dst->mapping = NULL;
+	dst->private = NULL;
+}
+
+static int __migrate_folio_unmap(struct folio *src, struct folio *dst,
 				int force, enum migrate_mode mode)
 {
 	int rc = -EAGAIN;
-	bool page_was_mapped = false;
+	int page_was_mapped = 0;
 	struct anon_vma *anon_vma = NULL;
 	bool is_lru = !__PageMovable(&src->page);
 
@@ -1089,8 +1107,8 @@ static int __unmap_and_move(struct folio *src, struct folio *dst,
 		goto out_unlock;
 
 	if (unlikely(!is_lru)) {
-		rc = move_to_new_folio(dst, src, mode);
-		goto out_unlock_both;
+		__migrate_folio_record(dst, page_was_mapped, anon_vma);
+		return MIGRATEPAGE_UNMAP;
 	}
 
 	/*
@@ -1115,11 +1133,40 @@ static int __unmap_and_move(struct folio *src, struct folio *dst,
 		VM_BUG_ON_FOLIO(folio_test_anon(src) &&
 			       !folio_test_ksm(src) && !anon_vma, src);
 		try_to_migrate(src, 0);
-		page_was_mapped = true;
+		page_was_mapped = 1;
 	}
 
-	if (!folio_mapped(src))
-		rc = move_to_new_folio(dst, src, mode);
+	if (!folio_mapped(src)) {
+		__migrate_folio_record(dst, page_was_mapped, anon_vma);
+		return MIGRATEPAGE_UNMAP;
+	}
+
+
+	if (page_was_mapped)
+		remove_migration_ptes(src, src, false);
+
+out_unlock_both:
+	folio_unlock(dst);
+out_unlock:
+	/* Drop an anon_vma reference if we took one */
+	if (anon_vma)
+		put_anon_vma(anon_vma);
+	folio_unlock(src);
+out:
+
+	return rc;
+}
+
+static int __migrate_folio_move(struct folio *src, struct folio *dst,
+				enum migrate_mode mode)
+{
+	int rc;
+	int page_was_mapped = 0;
+	struct anon_vma *anon_vma = NULL;
+
+	__migrate_folio_extract(dst, &page_was_mapped, &anon_vma);
+
+	rc = move_to_new_folio(dst, src, mode);
 
 	/*
 	 * When successful, push dst to LRU immediately: so that if it
@@ -1140,14 +1187,11 @@ static int __unmap_and_move(struct folio *src, struct folio *dst,
 		remove_migration_ptes(src,
 			rc == MIGRATEPAGE_SUCCESS ? dst : src, false);
 
-out_unlock_both:
 	folio_unlock(dst);
-out_unlock:
 	/* Drop an anon_vma reference if we took one */
 	if (anon_vma)
 		put_anon_vma(anon_vma);
 	folio_unlock(src);
-out:
 	/*
 	 * If migration is successful, decrease refcount of dst,
 	 * which will not free the page because new page owner increased
@@ -1159,19 +1203,32 @@ static int __unmap_and_move(struct folio *src, struct folio *dst,
 	return rc;
 }
 
-/*
- * Obtain the lock on folio, remove all ptes and migrate the folio
- * to the newly allocated folio in dst.
- */
-static int unmap_and_move(new_page_t get_new_page,
-			  free_page_t put_new_page,
-			  unsigned long private, struct folio *src,
-			  int force, enum migrate_mode mode,
-			  enum migrate_reason reason,
-			  struct list_head *ret)
+static void migrate_folio_done(struct folio *src,
+			       enum migrate_reason reason)
+{
+	/*
+	 * Compaction can migrate also non-LRU pages which are
+	 * not accounted to NR_ISOLATED_*. They can be recognized
+	 * as __PageMovable
+	 */
+	if (likely(!__folio_test_movable(src)))
+		mod_node_page_state(folio_pgdat(src), NR_ISOLATED_ANON +
+				    folio_is_file_lru(src), -folio_nr_pages(src));
+
+	if (reason != MR_MEMORY_FAILURE)
+		/* We release the page in page_handle_poison. */
+		folio_put(src);
+}
+
+/* Obtain the lock on page, remove all ptes. */
+static int migrate_folio_unmap(new_page_t get_new_page, free_page_t put_new_page,
+			       unsigned long private, struct folio *src,
+			       struct folio **dstp, int force,
+			       enum migrate_mode mode, enum migrate_reason reason,
+			       struct list_head *ret)
 {
 	struct folio *dst;
-	int rc = MIGRATEPAGE_SUCCESS;
+	int rc = MIGRATEPAGE_UNMAP;
 	struct page *newpage = NULL;
 
 	if (!thp_migration_supported() && folio_test_transhuge(src))
@@ -1182,20 +1239,50 @@ static int unmap_and_move(new_page_t get_new_page,
 		folio_clear_active(src);
 		folio_clear_unevictable(src);
 		/* free_pages_prepare() will clear PG_isolated. */
-		goto out;
+		list_del(&src->lru);
+		migrate_folio_done(src, reason);
+		return MIGRATEPAGE_SUCCESS;
 	}
 
 	newpage = get_new_page(&src->page, private);
 	if (!newpage)
 		return -ENOMEM;
 	dst = page_folio(newpage);
+	*dstp = dst;
 
 	dst->private = NULL;
-	rc = __unmap_and_move(src, dst, force, mode);
+	rc = __migrate_folio_unmap(src, dst, force, mode);
+	if (rc == MIGRATEPAGE_UNMAP)
+		return rc;
+
+	/*
+	 * A page that has not been migrated will have kept its
+	 * references and be restored.
+	 */
+	/* restore the folio to right list. */
+	if (rc != -EAGAIN)
+		list_move_tail(&src->lru, ret);
+
+	if (put_new_page)
+		put_new_page(&dst->page, private);
+	else
+		folio_put(dst);
+
+	return rc;
+}
+
+/* Migrate the folio to the newly allocated folio in dst. */
+static int migrate_folio_move(free_page_t put_new_page, unsigned long private,
+			      struct folio *src, struct folio *dst,
+			      enum migrate_mode mode, enum migrate_reason reason,
+			      struct list_head *ret)
+{
+	int rc;
+
+	rc = __migrate_folio_move(src, dst, mode);
 	if (rc == MIGRATEPAGE_SUCCESS)
 		set_page_owner_migrate_reason(&dst->page, reason);
 
-out:
 	if (rc != -EAGAIN) {
 		/*
 		 * A folio that has been migrated has all references
@@ -1211,20 +1298,7 @@ static int unmap_and_move(new_page_t get_new_page,
 	 * we want to retry.
 	 */
 	if (rc == MIGRATEPAGE_SUCCESS) {
-		/*
-		 * Compaction can migrate also non-LRU folios which are
-		 * not accounted to NR_ISOLATED_*. They can be recognized
-		 * as __folio_test_movable
-		 */
-		if (likely(!__folio_test_movable(src)))
-			mod_node_page_state(folio_pgdat(src), NR_ISOLATED_ANON +
-					folio_is_file_lru(src), -folio_nr_pages(src));
-
-		if (reason != MR_MEMORY_FAILURE)
-			/*
-			 * We release the folio in page_handle_poison.
-			 */
-			folio_put(src);
+		migrate_folio_done(src, reason);
 	} else {
 		if (rc != -EAGAIN)
 			list_add_tail(&src->lru, ret);
@@ -1499,7 +1573,7 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
 	int pass = 0;
 	bool is_large = false;
 	bool is_thp = false;
-	struct folio *folio, *folio2;
+	struct folio *folio, *folio2, *dst = NULL;
 	int rc, nr_pages;
 	LIST_HEAD(split_folios);
 	bool nosplit = (reason == MR_NUMA_MISPLACED);
@@ -1524,9 +1598,13 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
 
 			cond_resched();
 
-			rc = unmap_and_move(get_new_page, put_new_page,
-					    private, folio, pass > 2, mode,
-					    reason, ret_folios);
+			rc = migrate_folio_unmap(get_new_page, put_new_page, private,
+						 folio, &dst, pass > 2, mode,
+						 reason, ret_folios);
+			if (rc == MIGRATEPAGE_UNMAP)
+				rc = migrate_folio_move(put_new_page, private,
+							folio, dst, mode,
+							reason, ret_folios);
 			/*
 			 * The rules are:
 			 *	Success: folio will be freed