
[v1,02/11] mm: thp: consolidate mapcount logic on THP split

Message ID 20211217113049.23850-3-david@redhat.com (mailing list archive)
State New
Series mm: COW fixes part 1: fix the COW security issue for THP and hugetlb

Commit Message

David Hildenbrand Dec. 17, 2021, 11:30 a.m. UTC
Let's consolidate the mapcount logic to make it easier to understand and
to prepare for further changes.

Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/huge_memory.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

Comments

Yang Shi Dec. 17, 2021, 7:06 p.m. UTC | #1
On Fri, Dec 17, 2021 at 3:33 AM David Hildenbrand <david@redhat.com> wrote:
>
> Let's consolidate the mapcount logic to make it easier to understand and
> to prepare for further changes.
>
> Reviewed-by: Peter Xu <peterx@redhat.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>

Reviewed-by: Yang Shi <shy828301@gmail.com>

Kirill A. Shutemov Dec. 18, 2021, 2:24 p.m. UTC | #2
On Fri, Dec 17, 2021 at 12:30:40PM +0100, David Hildenbrand wrote:
> Let's consolidate the mapcount logic to make it easier to understand and
> to prepare for further changes.
> 
> Reviewed-by: Peter Xu <peterx@redhat.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>

Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>

Patch

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e5483347291c..4751d03947da 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2101,21 +2101,25 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 		pte = pte_offset_map(&_pmd, addr);
 		BUG_ON(!pte_none(*pte));
 		set_pte_at(mm, addr, pte, entry);
-		if (!pmd_migration)
-			atomic_inc(&page[i]._mapcount);
 		pte_unmap(pte);
 	}
 
 	if (!pmd_migration) {
+		/* Sub-page mapcount accounting for above small mappings. */
+		int val = 1;
+
 		/*
 		 * Set PG_double_map before dropping compound_mapcount to avoid
 		 * false-negative page_mapped().
+		 *
+		 * The first to set PageDoubleMap() has to increment all
+		 * sub-page mapcounts by one.
 		 */
-		if (compound_mapcount(page) > 1 &&
-		    !TestSetPageDoubleMap(page)) {
-			for (i = 0; i < HPAGE_PMD_NR; i++)
-				atomic_inc(&page[i]._mapcount);
-		}
+		if (compound_mapcount(page) > 1 && !TestSetPageDoubleMap(page))
+			val++;
+
+		for (i = 0; i < HPAGE_PMD_NR; i++)
+			atomic_add(val, &page[i]._mapcount);
 
 		lock_page_memcg(page);
 		if (atomic_add_negative(-1, compound_mapcount_ptr(page))) {
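A note for readers less familiar with the THP mapcount scheme: the sketch below is a minimal userspace model of the accounting this patch consolidates, not kernel code. All names (model_page, split_pmd_mapping, test_set_double_map, NR_SUBPAGES) are hypothetical, C11 atomics stand in for the kernel's atomic_t helpers, and the -1 bias plus atomic_add_negative() handling of the real compound mapcount is simplified to a plain counter. It only illustrates the consolidated rule: every split adds 1 to each sub-page mapcount, and the first splitter of a page that is still PMD-mapped elsewhere adds 2 by also setting the double-map flag.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_SUBPAGES 8

struct model_page {
	atomic_int subpage_mapcount[NR_SUBPAGES];
	atomic_int compound_mapcount;
	atomic_bool double_map;
};

/* Returns the previous flag value, like the kernel's TestSetPageDoubleMap(). */
static bool test_set_double_map(struct model_page *p)
{
	return atomic_exchange(&p->double_map, true);
}

/* Split one PMD mapping of @p into per-sub-page (PTE) mappings. */
static void split_pmd_mapping(struct model_page *p)
{
	int val = 1;	/* every sub-page gains one PTE mapping from this split */

	/*
	 * If the compound page is PMD-mapped more than once, the first
	 * splitter sets the double-map flag and bumps every sub-page
	 * mapcount by one extra, exactly once.
	 */
	if (atomic_load(&p->compound_mapcount) > 1 && !test_set_double_map(p))
		val++;

	for (int i = 0; i < NR_SUBPAGES; i++)
		atomic_fetch_add(&p->subpage_mapcount[i], val);

	/* The real code uses atomic_add_negative() on a -1-biased counter. */
	atomic_fetch_sub(&p->compound_mapcount, 1);
}

int main(void)
{
	struct model_page p = { 0 };

	atomic_init(&p.compound_mapcount, 2);	/* PMD-mapped twice */

	split_pmd_mapping(&p);	/* first split: val == 2 */
	split_pmd_mapping(&p);	/* second split: val == 1, flag already set */

	printf("subpage[0] mapcount = %d, compound mapcount = %d\n",
	       atomic_load(&p.subpage_mapcount[0]),
	       atomic_load(&p.compound_mapcount));
	return 0;
}

With the increment moved out of the PTE-installation loop, all sub-page mapcount updates happen in a single place with a single atomic_add() per sub-page, which is what the commit message means by making the logic easier to understand and preparing for further changes.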