
[v5,1/8] mm/huge_memory: only split PMD mapping when necessary in unmap_folio()

Message ID 20240226205534.1603748-2-zi.yan@sent.com (mailing list archive)
State Accepted
Commit 319a624ec2b79db7a0b0a2a2a61e3aa5c96eabfc
Series Split a folio to any lower order folios

Commit Message

Zi Yan Feb. 26, 2024, 8:55 p.m. UTC
From: Zi Yan <ziy@nvidia.com>

With multi-size THP support added, not all THPs are PMD-mapped, so
during a huge page split there is no need to always split the PMD
mapping in unmap_folio(). Make it conditional.

Signed-off-by: Zi Yan <ziy@nvidia.com>
---
 mm/huge_memory.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)
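
For context (not part of this patch): folio_test_pmd_mappable() is true only
for folios large enough to be mapped by a PMD, so the new conditional keeps
the old behaviour for PMD-sized THPs while letting smaller mTHP folios skip
the PMD split. Paraphrased from the kernel's include/linux/huge_mm.h
(location assumed, may vary by kernel version), the helper is roughly:

static inline bool folio_test_pmd_mappable(struct folio *folio)
{
	/* True only for folios of at least PMD order (e.g. 2 MiB on x86-64). */
	return folio_order(folio) >= HPAGE_PMD_ORDER;
}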

Comments

David Hildenbrand Feb. 28, 2024, 10:30 a.m. UTC | #1
On 26.02.24 21:55, Zi Yan wrote:
> From: Zi Yan <ziy@nvidia.com>
> 
> With multi-size THP support added, not all THPs are PMD-mapped, so
> during a huge page split there is no need to always split the PMD
> mapping in unmap_folio(). Make it conditional.
> 
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> ---
>   mm/huge_memory.c | 7 +++++--
>   1 file changed, 5 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 28341a5067fb..b20e535e874c 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2727,11 +2727,14 @@ void vma_adjust_trans_huge(struct vm_area_struct *vma,
>   
>   static void unmap_folio(struct folio *folio)
>   {
> -	enum ttu_flags ttu_flags = TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD |
> -		TTU_SYNC | TTU_BATCH_FLUSH;
> +	enum ttu_flags ttu_flags = TTU_RMAP_LOCKED | TTU_SYNC |
> +		TTU_BATCH_FLUSH;
>   
>   	VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
>   
> +	if (folio_test_pmd_mappable(folio))
> +		ttu_flags |= TTU_SPLIT_HUGE_PMD;
> +
>   	/*
>   	 * Anon pages need migration entries to preserve them, but file
>   	 * pages can simply be left unmapped, then faulted back on demand.

Reviewed-by: David Hildenbrand <david@redhat.com>

Patch

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 28341a5067fb..b20e535e874c 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2727,11 +2727,14 @@ void vma_adjust_trans_huge(struct vm_area_struct *vma,
 
 static void unmap_folio(struct folio *folio)
 {
-	enum ttu_flags ttu_flags = TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD |
-		TTU_SYNC | TTU_BATCH_FLUSH;
+	enum ttu_flags ttu_flags = TTU_RMAP_LOCKED | TTU_SYNC |
+		TTU_BATCH_FLUSH;
 
 	VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
 
+	if (folio_test_pmd_mappable(folio))
+		ttu_flags |= TTU_SPLIT_HUGE_PMD;
+
 	/*
 	 * Anon pages need migration entries to preserve them, but file
 	 * pages can simply be left unmapped, then faulted back on demand.
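
For illustration only, here is a self-contained, userspace-style C sketch of
the flag-selection logic the patch introduces. The enum values, PMD_ORDER,
folio_is_pmd_mappable() and pick_unmap_flags() below are simplified
stand-ins, not the kernel definitions:

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-ins for the kernel's TTU_* flag bits. */
enum ttu_flags {
	TTU_RMAP_LOCKED    = 1 << 0,
	TTU_SPLIT_HUGE_PMD = 1 << 1,
	TTU_SYNC           = 1 << 2,
	TTU_BATCH_FLUSH    = 1 << 3,
};

#define PMD_ORDER 9	/* assumed: 2 MiB PMD with 4 KiB base pages */

/* Stand-in for folio_test_pmd_mappable(): true only for PMD-sized folios. */
static bool folio_is_pmd_mappable(unsigned int folio_order)
{
	return folio_order >= PMD_ORDER;
}

static unsigned int pick_unmap_flags(unsigned int folio_order)
{
	unsigned int flags = TTU_RMAP_LOCKED | TTU_SYNC | TTU_BATCH_FLUSH;

	/* Only PMD-mappable folios need their PMD mapping split on unmap. */
	if (folio_is_pmd_mappable(folio_order))
		flags |= TTU_SPLIT_HUGE_PMD;

	return flags;
}

int main(void)
{
	/* An order-4 mTHP folio skips the split; an order-9 THP requests it. */
	printf("order 4 -> flags %#x\n", pick_unmap_flags(4));
	printf("order 9 -> flags %#x\n", pick_unmap_flags(9));
	return 0;
}

With the stand-in values above, the order-4 folio yields 0xd (no
TTU_SPLIT_HUGE_PMD) and the order-9 folio yields 0xf, mirroring what
unmap_folio() now does for small mTHP folios versus PMD-mapped THPs.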