[v5] mm/mempolicy: Checking hugepage migration is supported by arch in vma_migratable

Message ID 1579786179-30633-1-git-send-email-lixinhai.lxh@gmail.com
State New

Commit Message

Li Xinhai Jan. 23, 2020, 1:29 p.m. UTC
vma_migratable() is called to check whether pages in a vma can be migrated
before proceeding to further actions. Currently it is used in the following
code paths:
- task_numa_work
- mbind
- move_pages

For hugetlb mapping, whether vma is migratable or not is determined by:
- CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION
- arch_hugetlb_migration_supported

Issue: the current code checks CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION alone,
even though that option should not be used directly;
arch_hugetlb_migration_supported() must also be consulted. (Note that the
current code in vma_migratable() does not cause a failure or bug, because
unmap_and_move_huge_page() will catch an unsupported hugepage and handle it
properly.)

This patch checks both factors via hugepage_migration_supported(), improving
the code's logic and robustness. It enables an early bail-out from the
hugepage migration procedure; however, because every architecture that
currently supports hugepage migration supports it for all page sizes, no
performance gain is expected from this patch.

vma_migratable() is moved to mm/mempolicy.c, because the circular reference
between mempolicy.h and hugetlb.h makes defining it as an inline function
infeasible.

Signed-off-by: Li Xinhai <lixinhai.lxh@gmail.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
---
V2, V3 and V4:
All tried different ways to fix the circular reference between hugetlb.h
and mempolicy.h. The existing relationship between these two files allows
inline functions in hugetlb.h to refer to symbols defined in mempolicy.h,
but there is no feasible way for inline functions in mempolicy.h to use
functions from hugetlb.h.
After evaluating the different fixes, the current patch looks better:
it no longer defines vma_migratable() as inline.

v4->v5:
The new wrapper vm_hugepage_migration_supported() is not necessary; remove
it and use hugepage_migration_supported() directly.


 include/linux/mempolicy.h | 29 +----------------------------
 mm/mempolicy.c            | 28 ++++++++++++++++++++++++++++
 2 files changed, 29 insertions(+), 28 deletions(-)

Comments

Michal Hocko Jan. 23, 2020, 4:56 p.m. UTC | #1
On Thu 23-01-20 13:29:39, Li Xinhai wrote:
[...]

Acked-by: Michal Hocko <mhocko@suse.com>

Thanks!
Mike Kravetz Jan. 23, 2020, 11:24 p.m. UTC | #2
On 1/23/20 5:29 AM, Li Xinhai wrote:
[...]

Thanks for continuing to refine this! The commit message looks much better.

Un-inlining vma_migratable() should not be an issue, as no hot paths make
use of the routine.

Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Anshuman Khandual Jan. 24, 2020, 2:12 a.m. UTC | #3
On 01/23/2020 06:59 PM, Li Xinhai wrote:
[...]

Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
HORIGUCHI NAOYA(堀口 直也) Jan. 24, 2020, 2:36 a.m. UTC | #4
On Thu, Jan 23, 2020 at 01:29:39PM +0000, Li Xinhai wrote:
[...]

Reviewed-by: Naoya Horiguchi <naoya.horiguchi@nec.com>

Patch

diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index 5228c62..8165278 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -173,34 +173,7 @@  int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from,
 extern void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol);
 
 /* Check if a vma is migratable */
-static inline bool vma_migratable(struct vm_area_struct *vma)
-{
-	if (vma->vm_flags & (VM_IO | VM_PFNMAP))
-		return false;
-
-	/*
-	 * DAX device mappings require predictable access latency, so avoid
-	 * incurring periodic faults.
-	 */
-	if (vma_is_dax(vma))
-		return false;
-
-#ifndef CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION
-	if (vma->vm_flags & VM_HUGETLB)
-		return false;
-#endif
-
-	/*
-	 * Migration allocates pages in the highest zone. If we cannot
-	 * do so then migration (at least from node to node) is not
-	 * possible.
-	 */
-	if (vma->vm_file &&
-		gfp_zone(mapping_gfp_mask(vma->vm_file->f_mapping))
-								< policy_zone)
-			return false;
-	return true;
-}
+extern bool vma_migratable(struct vm_area_struct *vma);
 
 extern int mpol_misplaced(struct page *, struct vm_area_struct *, unsigned long);
 extern void mpol_put_task_policy(struct task_struct *);
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 067cf7d..9319dcb 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1714,6 +1714,34 @@  static int kernel_get_mempolicy(int __user *policy,
 
 #endif /* CONFIG_COMPAT */
 
+bool vma_migratable(struct vm_area_struct *vma)
+{
+	if (vma->vm_flags & (VM_IO | VM_PFNMAP))
+		return false;
+
+	/*
+	 * DAX device mappings require predictable access latency, so avoid
+	 * incurring periodic faults.
+	 */
+	if (vma_is_dax(vma))
+		return false;
+
+	if (is_vm_hugetlb_page(vma) &&
+		!hugepage_migration_supported(hstate_vma(vma)))
+		return false;
+
+	/*
+	 * Migration allocates pages in the highest zone. If we cannot
+	 * do so then migration (at least from node to node) is not
+	 * possible.
+	 */
+	if (vma->vm_file &&
+		gfp_zone(mapping_gfp_mask(vma->vm_file->f_mapping))
+			< policy_zone)
+		return false;
+	return true;
+}
+
 struct mempolicy *__get_vma_policy(struct vm_area_struct *vma,
 						unsigned long addr)
 {