From patchwork Tue Aug 22 00:53:49 2023
X-Patchwork-Id: 13360020
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org
Cc: mgorman@techsingularity.net, shy828301@gmail.com, david@redhat.com,
 ying.huang@intel.com, baolin.wang@linux.alibaba.com,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 1/4] mm: migrate: factor out migration validation into numa_page_can_migrate()
Date: Tue, 22 Aug 2023 08:53:49 +0800
Message-Id: <6e1c5a86b8d960294582a1221a1a20eb66e53b37.1692665449.git.baolin.wang@linux.alibaba.com>

There are now several places that validate whether a page can be migrated
or not, so factor this validation out into a new numa_page_can_migrate()
helper to make it easier to maintain.
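For reviewers, the resulting call flow in both NUMA fault paths looks roughly
like the following (a simplified sketch of the code after this patch, not the
literal diff):

        /* do_numa_page() / do_huge_pmd_numa_page(), condensed */
        if (!numa_page_can_migrate(vma, page)) {
                /* not a migration candidate: drop our reference, record the failure */
                put_page(page);
                goto migrate_fail;
        }

        migrated = migrate_misplaced_page(page, vma, target_nid);

The helper itself only bundles the three existing checks (shared executable
file pages, dirty file pages, and THPs mapped by multiple processes) and
returns true when migration should be attempted.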
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 mm/huge_memory.c |  6 ++++++
 mm/internal.h    |  1 +
 mm/memory.c      | 30 ++++++++++++++++++++++++++++++
 mm/migrate.c     | 19 -------------------
 4 files changed, 37 insertions(+), 19 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 4465915711c3..4a9b34a89854 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1540,11 +1540,17 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
         spin_unlock(vmf->ptl);
         writable = false;
 
+        if (!numa_page_can_migrate(vma, page)) {
+                put_page(page);
+                goto migrate_fail;
+        }
+
         migrated = migrate_misplaced_page(page, vma, target_nid);
         if (migrated) {
                 flags |= TNF_MIGRATED;
                 page_nid = target_nid;
         } else {
+migrate_fail:
                 flags |= TNF_MIGRATE_FAIL;
                 vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
                 if (unlikely(!pmd_same(oldpmd, *vmf->pmd))) {
diff --git a/mm/internal.h b/mm/internal.h
index f59a53111817..1e00b8a30910 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -933,6 +933,7 @@ void __vunmap_range_noflush(unsigned long start, unsigned long end);
 
 int numa_migrate_prep(struct page *page, struct vm_area_struct *vma,
                       unsigned long addr, int page_nid, int *flags);
+bool numa_page_can_migrate(struct vm_area_struct *vma, struct page *page);
 void free_zone_device_page(struct page *page);
 int migrate_device_coherent_page(struct page *page);
diff --git a/mm/memory.c b/mm/memory.c
index 12647d139a13..fc6f6b7a70e1 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4735,6 +4735,30 @@ int numa_migrate_prep(struct page *page, struct vm_area_struct *vma,
         return mpol_misplaced(page, vma, addr);
 }
 
+bool numa_page_can_migrate(struct vm_area_struct *vma, struct page *page)
+{
+        /*
+         * Don't migrate file pages that are mapped in multiple processes
+         * with execute permissions as they are probably shared libraries.
+         */
+        if (page_mapcount(page) != 1 && page_is_file_lru(page) &&
+            (vma->vm_flags & VM_EXEC))
+                return false;
+
+        /*
+         * Also do not migrate dirty pages as not all filesystems can move
+         * dirty pages in MIGRATE_ASYNC mode which is a waste of cycles.
+         */
+        if (page_is_file_lru(page) && PageDirty(page))
+                return false;
+
+        /* Do not migrate THP mapped by multiple processes */
+        if (PageTransHuge(page) && total_mapcount(page) > 1)
+                return false;
+
+        return true;
+}
+
 static vm_fault_t do_numa_page(struct vm_fault *vmf)
 {
         struct vm_area_struct *vma = vmf->vma;
@@ -4815,11 +4839,17 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
         pte_unmap_unlock(vmf->pte, vmf->ptl);
         writable = false;
 
+        if (!numa_page_can_migrate(vma, page)) {
+                put_page(page);
+                goto migrate_fail;
+        }
+
         /* Migrate to the requested node */
         if (migrate_misplaced_page(page, vma, target_nid)) {
                 page_nid = target_nid;
                 flags |= TNF_MIGRATED;
         } else {
+migrate_fail:
                 flags |= TNF_MIGRATE_FAIL;
                 vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
                                                vmf->address, &vmf->ptl);
diff --git a/mm/migrate.c b/mm/migrate.c
index e21d5a7e7447..9cc98fb1d6ec 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2485,10 +2485,6 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
 
         VM_BUG_ON_PAGE(order && !PageTransHuge(page), page);
 
-        /* Do not migrate THP mapped by multiple processes */
-        if (PageTransHuge(page) && total_mapcount(page) > 1)
-                return 0;
-
         /* Avoid migrating to a node that is nearly full */
         if (!migrate_balanced_pgdat(pgdat, nr_pages)) {
                 int z;
@@ -2533,21 +2529,6 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
         LIST_HEAD(migratepages);
         int nr_pages = thp_nr_pages(page);
 
-        /*
-         * Don't migrate file pages that are mapped in multiple processes
-         * with execute permissions as they are probably shared libraries.
-         */
-        if (page_mapcount(page) != 1 && page_is_file_lru(page) &&
-            (vma->vm_flags & VM_EXEC))
-                goto out;
-
-        /*
-         * Also do not migrate dirty pages as not all filesystems can move
-         * dirty pages in MIGRATE_ASYNC mode which is a waste of cycles.
-         */
-        if (page_is_file_lru(page) && PageDirty(page))
-                goto out;
-
         isolated = numamigrate_isolate_page(pgdat, page);
         if (!isolated)
                 goto out;

From patchwork Tue Aug 22 00:53:50 2023
X-Patchwork-Id: 13360022
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org
Cc: mgorman@techsingularity.net, shy828301@gmail.com, david@redhat.com,
 ying.huang@intel.com, baolin.wang@linux.alibaba.com,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 2/4] mm: migrate: move the numamigrate_isolate_page() into do_numa_page()
Date: Tue, 22 Aug 2023 08:53:50 +0800
Message-Id: <9ff2a9e3e644103a08b9b84b76b39bbd4c60020b.1692665449.git.baolin.wang@linux.alibaba.com>

Move the numamigrate_isolate_page() call into do_numa_page() (and
do_huge_pmd_numa_page()) to simplify migrate_misplaced_page(), which now
focuses only on page migration. This also serves as preparation for
supporting batch migration through migrate_misplaced_page(). While at it,
change numamigrate_isolate_page() to return a boolean to make its return
value clearer.
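After this change the fault handlers are expected to isolate the page
themselves before requesting the migration, roughly like this (a condensed
sketch of the new calling convention, mirroring the hunks below):

        pgdat = NODE_DATA(target_nid);
        if (!numamigrate_isolate_page(pgdat, page)) {
                /* isolation failed (e.g. target node nearly full): give up */
                put_page(page);
                goto migrate_fail;
        }

        migrated = migrate_misplaced_page(page, vma, target_nid);

With isolation handled by the callers, migrate_misplaced_page() no longer
needs its "out:" error path.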
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 include/linux/migrate.h |  6 ++++++
 mm/huge_memory.c        |  7 +++++++
 mm/memory.c             |  7 +++++++
 mm/migrate.c            | 22 +++++++---------------
 4 files changed, 27 insertions(+), 15 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 711dd9412561..ddcd62ec2c12 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -144,12 +144,18 @@ const struct movable_operations *page_movable_ops(struct page *page)
 #ifdef CONFIG_NUMA_BALANCING
 int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
                            int node);
+bool numamigrate_isolate_page(pg_data_t *pgdat, struct page *page);
 #else
 static inline int migrate_misplaced_page(struct page *page,
                                          struct vm_area_struct *vma, int node)
 {
         return -EAGAIN; /* can't migrate now */
 }
+
+static inline bool numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
+{
+        return false;
+}
 #endif /* CONFIG_NUMA_BALANCING */
 
 #ifdef CONFIG_MIGRATION
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 4a9b34a89854..07149ead11e4 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1496,6 +1496,7 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
         int target_nid, last_cpupid = (-1 & LAST_CPUPID_MASK);
         bool migrated = false, writable = false;
         int flags = 0;
+        pg_data_t *pgdat;
 
         vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
         if (unlikely(!pmd_same(oldpmd, *vmf->pmd))) {
@@ -1545,6 +1546,12 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
                 goto migrate_fail;
         }
 
+        pgdat = NODE_DATA(target_nid);
+        if (!numamigrate_isolate_page(pgdat, page)) {
+                put_page(page);
+                goto migrate_fail;
+        }
+
         migrated = migrate_misplaced_page(page, vma, target_nid);
         if (migrated) {
                 flags |= TNF_MIGRATED;
diff --git a/mm/memory.c b/mm/memory.c
index fc6f6b7a70e1..4e451b041488 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4769,6 +4769,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
         int target_nid;
         pte_t pte, old_pte;
         int flags = 0;
+        pg_data_t *pgdat;
 
         /*
          * The "pte" at this point cannot be used safely without
@@ -4844,6 +4845,12 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
                 goto migrate_fail;
         }
 
+        pgdat = NODE_DATA(target_nid);
+        if (!numamigrate_isolate_page(pgdat, page)) {
+                put_page(page);
+                goto migrate_fail;
+        }
+
         /* Migrate to the requested node */
         if (migrate_misplaced_page(page, vma, target_nid)) {
                 page_nid = target_nid;
diff --git a/mm/migrate.c b/mm/migrate.c
index 9cc98fb1d6ec..0b2b69a2a7ab 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2478,7 +2478,7 @@ static struct folio *alloc_misplaced_dst_folio(struct folio *src,
         return __folio_alloc_node(gfp, order, nid);
 }
 
-static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
+bool numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
 {
         int nr_pages = thp_nr_pages(page);
         int order = compound_order(page);
@@ -2496,11 +2496,11 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
                         break;
                 }
                 wakeup_kswapd(pgdat->node_zones + z, 0, order, ZONE_MOVABLE);
-                return 0;
+                return false;
         }
 
         if (!isolate_lru_page(page))
-                return 0;
+                return false;
 
         mod_node_page_state(page_pgdat(page), NR_ISOLATED_ANON + page_is_file_lru(page),
                             nr_pages);
@@ -2511,7 +2511,7 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
          * disappearing underneath us during migration.
          */
         put_page(page);
-        return 1;
+        return true;
 }
 
 /*
@@ -2523,16 +2523,12 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
                            int node)
 {
         pg_data_t *pgdat = NODE_DATA(node);
-        int isolated;
+        int migrated = 1;
         int nr_remaining;
         unsigned int nr_succeeded;
         LIST_HEAD(migratepages);
         int nr_pages = thp_nr_pages(page);
 
-        isolated = numamigrate_isolate_page(pgdat, page);
-        if (!isolated)
-                goto out;
-
         list_add(&page->lru, &migratepages);
         nr_remaining = migrate_pages(&migratepages, alloc_misplaced_dst_folio,
                                      NULL, node, MIGRATE_ASYNC,
@@ -2544,7 +2540,7 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
                                             page_is_file_lru(page), -nr_pages);
                         putback_lru_page(page);
                 }
-                isolated = 0;
+                migrated = 0;
         }
         if (nr_succeeded) {
                 count_vm_numa_events(NUMA_PAGE_MIGRATE, nr_succeeded);
@@ -2553,11 +2549,7 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
                                     nr_succeeded);
         }
         BUG_ON(!list_empty(&migratepages));
-        return isolated;
-
-out:
-        put_page(page);
-        return 0;
+        return migrated;
 }
 #endif /* CONFIG_NUMA_BALANCING */
 #endif /* CONFIG_NUMA */

From patchwork Tue Aug 22 00:53:51 2023
X-Patchwork-Id: 13360021
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org
Cc: mgorman@techsingularity.net, shy828301@gmail.com, david@redhat.com,
 ying.huang@intel.com, baolin.wang@linux.alibaba.com,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 3/4] mm: migrate: change migrate_misplaced_page() to support multiple pages migration
Date: Tue, 22 Aug 2023 08:53:51 +0800
Message-Id: <02c3d36270705f0dfec1ea583e252464cb48d802.1692665449.git.baolin.wang@linux.alibaba.com>

Expand migrate_misplaced_page() to take a list of pages so that it can
migrate multiple pages at once. This is a preparation for supporting batch
migration for NUMA balancing, as well as compound page NUMA balancing in
the future.
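With the new interface, a caller builds a local list and passes it in
together with the source and target node IDs, along the lines of the
following sketch of the single-page case used by the fault handlers after
this patch:

        LIST_HEAD(migratepages);

        list_add(&page->lru, &migratepages);
        migrated = migrate_misplaced_page(&migratepages, vma,
                                          page_nid, target_nid);

Pages still on the list after a failed migration are handed back to the LRU
via putback_movable_pages() inside migrate_misplaced_page() itself.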
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 include/linux/migrate.h |  9 +++++----
 mm/huge_memory.c        |  5 ++++-
 mm/memory.c             |  4 +++-
 mm/migrate.c            | 26 ++++++++++----------------
 4 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index ddcd62ec2c12..87edce8e939d 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -142,12 +142,13 @@ const struct movable_operations *page_movable_ops(struct page *page)
 }
 
 #ifdef CONFIG_NUMA_BALANCING
-int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
-                           int node);
+int migrate_misplaced_page(struct list_head *migratepages, struct vm_area_struct *vma,
+                           int source_nid, int target_nid);
 bool numamigrate_isolate_page(pg_data_t *pgdat, struct page *page);
 #else
-static inline int migrate_misplaced_page(struct page *page,
-                                         struct vm_area_struct *vma, int node)
+static inline int migrate_misplaced_page(struct list_head *migratepages,
+                                         struct vm_area_struct *vma,
+                                         int source_nid, int target_nid)
 {
         return -EAGAIN; /* can't migrate now */
 }
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 07149ead11e4..4401a3493544 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1497,6 +1497,7 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
         bool migrated = false, writable = false;
         int flags = 0;
         pg_data_t *pgdat;
+        LIST_HEAD(migratepages);
 
         vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
         if (unlikely(!pmd_same(oldpmd, *vmf->pmd))) {
@@ -1552,7 +1553,9 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
                 goto migrate_fail;
         }
 
-        migrated = migrate_misplaced_page(page, vma, target_nid);
+        list_add(&page->lru, &migratepages);
+        migrated = migrate_misplaced_page(&migratepages, vma,
+                                          page_nid, target_nid);
         if (migrated) {
                 flags |= TNF_MIGRATED;
                 page_nid = target_nid;
diff --git a/mm/memory.c b/mm/memory.c
index 4e451b041488..9e417e8dd5d5 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4770,6 +4770,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
         pte_t pte, old_pte;
         int flags = 0;
         pg_data_t *pgdat;
+        LIST_HEAD(migratepages);
 
         /*
          * The "pte" at this point cannot be used safely without
@@ -4851,8 +4852,9 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
                 goto migrate_fail;
         }
 
+        list_add(&page->lru, &migratepages);
         /* Migrate to the requested node */
-        if (migrate_misplaced_page(page, vma, target_nid)) {
+        if (migrate_misplaced_page(&migratepages, vma, page_nid, target_nid)) {
                 page_nid = target_nid;
                 flags |= TNF_MIGRATED;
         } else {
diff --git a/mm/migrate.c b/mm/migrate.c
index 0b2b69a2a7ab..fae7224b8e64 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2519,36 +2519,30 @@ bool numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
  * node. Caller is expected to have an elevated reference count on
  * the page that will be dropped by this function before returning.
  */
-int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
-                           int node)
+int migrate_misplaced_page(struct list_head *migratepages, struct vm_area_struct *vma,
+                           int source_nid, int target_nid)
 {
-        pg_data_t *pgdat = NODE_DATA(node);
+        pg_data_t *pgdat = NODE_DATA(target_nid);
         int migrated = 1;
         int nr_remaining;
         unsigned int nr_succeeded;
-        LIST_HEAD(migratepages);
-        int nr_pages = thp_nr_pages(page);
 
-        list_add(&page->lru, &migratepages);
-        nr_remaining = migrate_pages(&migratepages, alloc_misplaced_dst_folio,
-                                     NULL, node, MIGRATE_ASYNC,
+        nr_remaining = migrate_pages(migratepages, alloc_misplaced_dst_folio,
+                                     NULL, target_nid, MIGRATE_ASYNC,
                                      MR_NUMA_MISPLACED, &nr_succeeded);
         if (nr_remaining) {
-                if (!list_empty(&migratepages)) {
-                        list_del(&page->lru);
-                        mod_node_page_state(page_pgdat(page), NR_ISOLATED_ANON +
-                                        page_is_file_lru(page), -nr_pages);
-                        putback_lru_page(page);
-                }
+                if (!list_empty(migratepages))
+                        putback_movable_pages(migratepages);
+
                 migrated = 0;
         }
         if (nr_succeeded) {
                 count_vm_numa_events(NUMA_PAGE_MIGRATE, nr_succeeded);
-                if (!node_is_toptier(page_to_nid(page)) && node_is_toptier(node))
+                if (!node_is_toptier(source_nid) && node_is_toptier(target_nid))
                         mod_node_page_state(pgdat, PGPROMOTE_SUCCESS,
                                             nr_succeeded);
         }
-        BUG_ON(!list_empty(&migratepages));
+        BUG_ON(!list_empty(migratepages));
         return migrated;
 }
 #endif /* CONFIG_NUMA_BALANCING */

From patchwork Tue Aug 22 00:53:52 2023
X-Patchwork-Id: 13360023
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org
Cc: mgorman@techsingularity.net, shy828301@gmail.com, david@redhat.com,
 ying.huang@intel.com, baolin.wang@linux.alibaba.com,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 4/4] mm: migrate: change to return the number of pages migrated successfully
Date: Tue, 22 Aug 2023 08:53:52 +0800
Message-Id: <9688ba40be86d7d0af0961e74d2a182ce65f5f8c.1692665449.git.baolin.wang@linux.alibaba.com>

Change migrate_misplaced_page() to return the number of pages migrated
successfully, which can then be used to calculate how many pages failed to
migrate when doing batch migration. For compound page NUMA balancing
support, it is possible that only some of the pages are migrated
successfully, so the callers need migrate_misplaced_page() to report the
number of pages that actually migrated.
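The callers then key their TNF_MIGRATED / TNF_MIGRATE_FAIL accounting off
the returned count rather than a boolean, roughly as in this sketch of the
do_numa_page() hunk below:

        nr_succeeded = migrate_misplaced_page(&migratepages, vma,
                                              page_nid, target_nid);
        if (nr_succeeded) {
                page_nid = target_nid;
                flags |= TNF_MIGRATED;
        } else {
                flags |= TNF_MIGRATE_FAIL;
        }

For a single base page the count is still effectively 0 or 1, so the
existing callers behave as before.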
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 mm/huge_memory.c | 9 +++++----
 mm/memory.c      | 4 +++-
 mm/migrate.c     | 5 +----
 3 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 4401a3493544..951f73d6b5bf 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1494,10 +1494,11 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
         unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
         int page_nid = NUMA_NO_NODE;
         int target_nid, last_cpupid = (-1 & LAST_CPUPID_MASK);
-        bool migrated = false, writable = false;
+        bool writable = false;
         int flags = 0;
         pg_data_t *pgdat;
         LIST_HEAD(migratepages);
+        int nr_successed;
 
         vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
         if (unlikely(!pmd_same(oldpmd, *vmf->pmd))) {
@@ -1554,9 +1555,9 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
         }
 
         list_add(&page->lru, &migratepages);
-        migrated = migrate_misplaced_page(&migratepages, vma,
-                                          page_nid, target_nid);
-        if (migrated) {
+        nr_successed = migrate_misplaced_page(&migratepages, vma,
+                                              page_nid, target_nid);
+        if (nr_successed) {
                 flags |= TNF_MIGRATED;
                 page_nid = target_nid;
         } else {
diff --git a/mm/memory.c b/mm/memory.c
index 9e417e8dd5d5..2773cd804ee9 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4771,6 +4771,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
         int flags = 0;
         pg_data_t *pgdat;
         LIST_HEAD(migratepages);
+        int nr_succeeded;
 
         /*
          * The "pte" at this point cannot be used safely without
@@ -4854,7 +4855,8 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 
         list_add(&page->lru, &migratepages);
         /* Migrate to the requested node */
-        if (migrate_misplaced_page(&migratepages, vma, page_nid, target_nid)) {
+        nr_succeeded = migrate_misplaced_page(&migratepages, vma, page_nid, target_nid);
+        if (nr_succeeded) {
                 page_nid = target_nid;
                 flags |= TNF_MIGRATED;
         } else {
diff --git a/mm/migrate.c b/mm/migrate.c
index fae7224b8e64..5435cfb225ab 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2523,7 +2523,6 @@ int migrate_misplaced_page(struct list_head *migratepages, struct vm_area_struct
                            int source_nid, int target_nid)
 {
         pg_data_t *pgdat = NODE_DATA(target_nid);
-        int migrated = 1;
         int nr_remaining;
         unsigned int nr_succeeded;
 
@@ -2533,8 +2532,6 @@ int migrate_misplaced_page(struct list_head *migratepages, struct vm_area_struct
         if (nr_remaining) {
                 if (!list_empty(migratepages))
                         putback_movable_pages(migratepages);
-
-                migrated = 0;
         }
         if (nr_succeeded) {
                 count_vm_numa_events(NUMA_PAGE_MIGRATE, nr_succeeded);
@@ -2543,7 +2540,7 @@ int migrate_misplaced_page(struct list_head *migratepages, struct vm_area_struct
                             nr_succeeded);
         }
         BUG_ON(!list_empty(migratepages));
-        return migrated;
+        return nr_succeeded;
 }
 #endif /* CONFIG_NUMA_BALANCING */
 #endif /* CONFIG_NUMA */