
[v3,0/3] do_numa_page(),do_huge_pmd_numa_page() fix and cleanup

Message ID 20240809145906.1513458-1-ziy@nvidia.com (mailing list archive)

Zi Yan Aug. 9, 2024, 2:59 p.m. UTC
Changes from v1[1] and v2 (Patch 2 only)[2]
===
1. Patch 1: Separated do_numa_page() and do_huge_pmd_numa_page() fixes,
since the issues are introduced by two separate commits.

2. Patch 1: Moved the migration failure branch code; call task_numa_fault()
and return immediately when migration succeeds. (per Huang, Ying)

3. Patch 2: Changed do_huge_pmd_numa_page() to match do_numa_page() in
terms of page table entry manipulation (per Huang, Ying)

4. Patch 1: Restructured the code (per Kefeng Wang)

5. Patch 1: Returned immediately when page table entries do not match instead
of using goto (per David Hildenbrand)
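To illustrate the restructuring described in items 2 and 5, here is a minimal standalone sketch of the resulting control flow: return immediately (without accounting a NUMA fault) when the page table entry no longer matches, and account the fault then return right away when migration succeeds. All types and helpers below (struct fault_ctx, fake_task_numa_fault(), handle_numa_fault()) are illustrative stand-ins, not the real kernel API or the actual patch code.

```c
#include <stdbool.h>

/* Stand-in for the fault context; the real code inspects the PTE/PMD
 * under the page table lock. */
struct fault_ctx {
	bool pte_changed;      /* entry changed under us while handling the fault */
	bool migrate_succeeds; /* outcome of the misplaced-folio migration */
};

static int task_numa_fault_calls;

/* Stand-in for task_numa_fault(): just count invocations. */
static void fake_task_numa_fault(void)
{
	task_numa_fault_calls++;
}

static int handle_numa_fault(struct fault_ctx *ctx)
{
	/*
	 * Item 5: if the page table entry no longer matches, return
	 * immediately instead of jumping to a shared label -- and, per
	 * Patch 1, without calling task_numa_fault().
	 */
	if (ctx->pte_changed)
		return 0;

	/*
	 * Item 2: on successful migration, account the NUMA fault and
	 * return right away.
	 */
	if (ctx->migrate_succeeds) {
		fake_task_numa_fault();
		return 0;
	}

	/*
	 * Migration failure branch: the real code restores the original
	 * entry here before accounting the fault.
	 */
	fake_task_numa_fault();
	return 0;
}
```

The key behavioral fix is the first branch: before the patch, the shared exit path could account a NUMA fault even though the entry had changed and no fault handling was actually done.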

[1] https://lore.kernel.org/lkml/20240807184730.1266736-1-ziy@nvidia.com/
[2] https://lore.kernel.org/linux-mm/20240808233728.1477034-1-ziy@nvidia.com/

Zi Yan (3):
  mm/numa: no task_numa_fault() call if PTE is changed
  mm/numa: no task_numa_fault() call if PMD is changed
  mm/migrate: move common code to numa_migrate_check (was
    numa_migrate_prep)

 mm/huge_memory.c | 56 ++++++++++++----------------
 mm/internal.h    |  5 ++-
 mm/memory.c      | 96 ++++++++++++++++++++++++------------------------
 3 files changed, 75 insertions(+), 82 deletions(-)