
[v2,4/5] mm/huge_memory.c: remove unnecessary tlb_remove_page_size() for huge zero pmd

Message ID 20210429132648.305447-5-linmiaohe@huawei.com
State New, archived
Series Cleanup and fixup for huge_memory

Commit Message

Miaohe Lin April 29, 2021, 1:26 p.m. UTC
Commit aa88b68c3b1d ("thp: keep huge zero page pinned until tlb flush")
introduced tlb_remove_page() for the huge zero page to keep it pinned until
the flush is complete and to prevent the page from being split under us. But
since commit 6fcb52a56ff6 ("thp: reduce usage of huge zero page's atomic
counter"), the huge zero page is kept pinned until all relevant mm_users
reach zero. So tlb_remove_page_size() for the huge zero pmd is unnecessary
now.

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/huge_memory.c | 3 ---
 1 file changed, 3 deletions(-)
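
As a side note for readers: below is a minimal userspace sketch of the
per-mm pinning scheme that 6fcb52a56ff6 introduced and that the commit
message relies on. The struct, the function names and the flag handling are
simplified illustrations (loosely modelled on the MMF_HUGE_ZERO_PAGE per-mm
flag), not the kernel's actual implementation; the point is only that each
mm holds one pin from its first huge-zero fault until its last user goes
away, so zap_huge_pmd() never has to keep an extra pin alive across the TLB
flush.

	/*
	 * Minimal userspace sketch of the per-mm pinning scheme described
	 * above (commit 6fcb52a56ff6).  All names and types here are
	 * simplified illustrations, not the kernel's actual implementation.
	 */
	#include <assert.h>
	#include <stdatomic.h>
	#include <stdbool.h>

	struct mm {
		bool huge_zero_pinned;	/* plays the role of an MMF_HUGE_ZERO_PAGE-style per-mm flag */
	};

	static atomic_int huge_zero_refcount;	/* pins on the shared huge zero page */

	/* First huge-zero fault in this mm takes one pin for the mm's whole lifetime. */
	static void mm_pin_huge_zero_page(struct mm *mm)
	{
		if (!mm->huge_zero_pinned) {
			atomic_fetch_add(&huge_zero_refcount, 1);
			mm->huge_zero_pinned = true;
		}
	}

	/* The pin is dropped only at mm teardown, i.e. once mm_users reaches zero. */
	static void mm_unpin_huge_zero_page(struct mm *mm)
	{
		if (mm->huge_zero_pinned) {
			atomic_fetch_sub(&huge_zero_refcount, 1);
			mm->huge_zero_pinned = false;
		}
	}

	int main(void)
	{
		struct mm mm = { 0 };

		mm_pin_huge_zero_page(&mm);	/* fault maps the huge zero page */

		/*
		 * zap_huge_pmd() would run here.  The mm's pin is still held,
		 * so the huge zero page cannot be freed or split under the
		 * zap; there is nothing left to defer until the TLB flush
		 * with tlb_remove_page_size().
		 */
		assert(atomic_load(&huge_zero_refcount) > 0);

		mm_unpin_huge_zero_page(&mm);	/* last mm_user goes away */
		return 0;
	}

In other words, the refcount is touched once per mm rather than on every
huge-zero fault and zap, which is also why the deferred release removed by
this patch has nothing left to do.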

Comments

David Hildenbrand April 29, 2021, 3:02 p.m. UTC | #1
On 29.04.21 15:26, Miaohe Lin wrote:
> Commit aa88b68c3b1d ("thp: keep huge zero page pinned until tlb flush")
> introduced tlb_remove_page() for the huge zero page to keep it pinned until
> the flush is complete and to prevent the page from being split under us. But
> since commit 6fcb52a56ff6 ("thp: reduce usage of huge zero page's atomic
> counter"), the huge zero page is kept pinned until all relevant mm_users
> reach zero. So tlb_remove_page_size() for the huge zero pmd is unnecessary
> now.
> 
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>   mm/huge_memory.c | 3 ---
>   1 file changed, 3 deletions(-)
> 
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index e24a96de2e37..af30338ac49c 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1680,12 +1680,9 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
>   		if (arch_needs_pgtable_deposit())
>   			zap_deposited_table(tlb->mm, pmd);
>   		spin_unlock(ptl);
> -		if (is_huge_zero_pmd(orig_pmd))
> -			tlb_remove_page_size(tlb, pmd_page(orig_pmd), HPAGE_PMD_SIZE);
>   	} else if (is_huge_zero_pmd(orig_pmd)) {
>   		zap_deposited_table(tlb->mm, pmd);
>   		spin_unlock(ptl);
> -		tlb_remove_page_size(tlb, pmd_page(orig_pmd), HPAGE_PMD_SIZE);
>   	} else {
>   		struct page *page = NULL;
>   		int flush_needed = 1;
> 

This sounds sane to me.

Acked-by: David Hildenbrand <david@redhat.com>

Yang Shi April 29, 2021, 5:55 p.m. UTC | #2
On Thu, Apr 29, 2021 at 6:27 AM Miaohe Lin <linmiaohe@huawei.com> wrote:
>
> Commit aa88b68c3b1d ("thp: keep huge zero page pinned until tlb flush")
> introduced tlb_remove_page() for the huge zero page to keep it pinned until
> the flush is complete and to prevent the page from being split under us. But
> since commit 6fcb52a56ff6 ("thp: reduce usage of huge zero page's atomic
> counter"), the huge zero page is kept pinned until all relevant mm_users
> reach zero. So tlb_remove_page_size() for the huge zero pmd is unnecessary
> now.

Reading the git history, it seems the lifecycle of the huge zero page is
bound to the process rather than the page table since the latter commit.
The patch looks correct to me.

Reviewed-by: Yang Shi <shy828301@gmail.com>

>
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  mm/huge_memory.c | 3 ---
>  1 file changed, 3 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index e24a96de2e37..af30338ac49c 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1680,12 +1680,9 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
>                 if (arch_needs_pgtable_deposit())
>                         zap_deposited_table(tlb->mm, pmd);
>                 spin_unlock(ptl);
> -               if (is_huge_zero_pmd(orig_pmd))
> -                       tlb_remove_page_size(tlb, pmd_page(orig_pmd), HPAGE_PMD_SIZE);
>         } else if (is_huge_zero_pmd(orig_pmd)) {
>                 zap_deposited_table(tlb->mm, pmd);
>                 spin_unlock(ptl);
> -               tlb_remove_page_size(tlb, pmd_page(orig_pmd), HPAGE_PMD_SIZE);
>         } else {
>                 struct page *page = NULL;
>                 int flush_needed = 1;
> --
> 2.23.0
>
>

Patch

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e24a96de2e37..af30338ac49c 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1680,12 +1680,9 @@  int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		if (arch_needs_pgtable_deposit())
 			zap_deposited_table(tlb->mm, pmd);
 		spin_unlock(ptl);
-		if (is_huge_zero_pmd(orig_pmd))
-			tlb_remove_page_size(tlb, pmd_page(orig_pmd), HPAGE_PMD_SIZE);
 	} else if (is_huge_zero_pmd(orig_pmd)) {
 		zap_deposited_table(tlb->mm, pmd);
 		spin_unlock(ptl);
-		tlb_remove_page_size(tlb, pmd_page(orig_pmd), HPAGE_PMD_SIZE);
 	} else {
 		struct page *page = NULL;
 		int flush_needed = 1;