
mm: move the easily assessable conditions forward

Message ID 20240815083102.653820-1-link@vivo.com (mailing list archive)
State New
Series: mm: move the easily assessable conditions forward

Commit Message

Huan Yang Aug. 15, 2024, 8:31 a.m. UTC
Currently, try_to_map_unused_to_zeropage() tries to map unused subpages to
the shared zero page in order to save memory.

If the mm forbids use of the zeropage, there is nothing to do; return early
instead of scanning the page first and only then assessing whether the
zeropage may be used.

Signed-off-by: Huan Yang <link@vivo.com>
---
 mm/migrate.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)


base-commit: edd1ec2e3a9f5de7fb267a3af73e4f00e7e052b7

Comments

Andrew Morton Aug. 15, 2024, 10:35 p.m. UTC | #1
On Thu, 15 Aug 2024 16:31:01 +0800 Huan Yang <link@vivo.com> wrote:

> Currently, try_to_map_unused_to_zeropage() tries to map unused subpages to
> the shared zero page in order to save memory.
> 
> If the mm forbids use of the zeropage, there is nothing to do; return early
> instead of scanning the page first and only then assessing whether the
> zeropage may be used.
> 
> ...
>
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -192,6 +192,9 @@ static bool try_to_map_unused_to_zeropage(struct page_vma_mapped_walk *pvmw,
>  	VM_BUG_ON_PAGE(!PageLocked(page), page);
>  	VM_BUG_ON_PAGE(pte_present(*pvmw->pte), page);
>  
> +	if (mm_forbids_zeropage(pvmw->vma->vm_mm))
> +		return false;
> +
>  	if (PageMlocked(page) || (pvmw->vma->vm_flags & VM_LOCKED))
>  		return false;
>  
> @@ -204,7 +207,7 @@ static bool try_to_map_unused_to_zeropage(struct page_vma_mapped_walk *pvmw,
>  	contains_data = memchr_inv(addr, 0, PAGE_SIZE);
>  	kunmap_local(addr);
>  
> -	if (contains_data || mm_forbids_zeropage(pvmw->vma->vm_mm))
> +	if (contains_data)
>  		return false;
>  

Looks sensible.  I'll add it as a fixup to "mm: remap unused subpages to shared zeropage when splitting isolated thp".

Patch

diff --git a/mm/migrate.c b/mm/migrate.c
index 6e32098ac2dc..d71cc4ff190f 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -192,6 +192,9 @@ static bool try_to_map_unused_to_zeropage(struct page_vma_mapped_walk *pvmw,
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 	VM_BUG_ON_PAGE(pte_present(*pvmw->pte), page);
 
+	if (mm_forbids_zeropage(pvmw->vma->vm_mm))
+		return false;
+
 	if (PageMlocked(page) || (pvmw->vma->vm_flags & VM_LOCKED))
 		return false;
 
@@ -204,7 +207,7 @@ static bool try_to_map_unused_to_zeropage(struct page_vma_mapped_walk *pvmw,
 	contains_data = memchr_inv(addr, 0, PAGE_SIZE);
 	kunmap_local(addr);
 
-	if (contains_data || mm_forbids_zeropage(pvmw->vma->vm_mm))
+	if (contains_data)
 		return false;
 
 	newpte = pte_mkspecial(pfn_pte(my_zero_pfn(pvmw->address),