
[v2,3/3] mm: Protect kernel pgtables in apply_to_pte_range()

Message ID ef8f6538b83b7fc3372602f90375348f9b4f3596.1744128123.git.agordeev@linux.ibm.com
State New
Series mm: Fix apply_to_pte_range() vs lazy MMU mode

Commit Message

Alexander Gordeev April 8, 2025, 4:07 p.m. UTC
The lazy MMU mode can only be entered and left under the protection
of the page table locks for all page tables which may be modified.
Yet, when it comes to kernel mappings, apply_to_pte_range() does not
take any locks. That does not conform to the arch_enter|leave_lazy_mmu_mode()
semantics and could potentially lead to re-scheduling a process while
in lazy MMU mode or to racing on kernel page table updates.

Cc: stable@vger.kernel.org
Fixes: 38e0edb15bd0 ("mm/apply_to_range: call pte function with lazy updates")
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
---
 mm/kasan/shadow.c | 7 ++-----
 mm/memory.c       | 5 ++++-
 2 files changed, 6 insertions(+), 6 deletions(-)
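
To make the problem described in the commit message above easier to see, here
is a condensed, illustrative sketch of the apply_to_pte_range() structure. It
is paraphrased from the diff context further down, not the exact mm/memory.c
code; the create/error-handling paths and the pgtbl_mod_mask update are
omitted, and the function name is changed to mark it as a sketch:

	static int apply_to_pte_range_sketch(struct mm_struct *mm, pmd_t *pmd,
					     unsigned long addr, unsigned long end,
					     pte_fn_t fn, void *data)
	{
		pte_t *pte, *mapped_pte;
		spinlock_t *ptl;
		int err = 0;

		if (mm == &init_mm) {
			/* kernel mappings: no page table lock is taken here */
			pte = pte_offset_kernel(pmd, addr);
		} else {
			/* user mappings: the PTE lock is taken and held below */
			mapped_pte = pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
		}

		/* for init_mm this is entered without any page table lock held */
		arch_enter_lazy_mmu_mode();

		do {
			err = fn(pte++, addr, data);	/* callback may modify the PTE */
			if (err)
				break;
		} while (addr += PAGE_SIZE, addr != end);

		arch_leave_lazy_mmu_mode();

		if (mm != &init_mm)
			pte_unmap_unlock(mapped_pte, ptl);

		return err;
	}

The patch below changes the init_mm branch to take and release
init_mm.page_table_lock around this window, which is also why the kasan
callbacks in mm/kasan/shadow.c can drop their own locking.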

Comments

Alexander Gordeev April 10, 2025, 2:50 p.m. UTC | #1
On Tue, Apr 08, 2025 at 06:07:32PM +0200, Alexander Gordeev wrote:

Hi Andrew,

> The lazy MMU mode can only be entered and left under the protection
> of the page table locks for all page tables which may be modified.

Heiko Carstens noticed that the above claim is not valid, since
v6.15-rc1 commit 691ee97e1a9d ("mm: fix lazy mmu docs and usage"),
which restates it to:

"In the general case, no lock is guaranteed to be held between entry and exit
of the lazy mode. So the implementation must assume preemption may be enabled"

That effectively invalidates this patch, so it needs to be dropped.
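
As an illustration of what the restated contract implies (purely hypothetical
code, not taken from any particular architecture): since no lock, and hence no
implicit preemption disabling, is guaranteed by the caller, an implementation
that batches PTE updates per CPU has to manage preemption itself, e.g.:

	/* Hypothetical per-CPU batching implementation, for illustration only */
	struct lazy_mmu_batch {
		unsigned int nr;
		/* ... queued PTE updates ... */
	};

	static DEFINE_PER_CPU(struct lazy_mmu_batch, lazy_mmu_batch);

	static inline void example_enter_lazy_mmu_mode(void)
	{
		/* keep the per-CPU batch stable even if the caller holds no lock */
		preempt_disable();
		this_cpu_ptr(&lazy_mmu_batch)->nr = 0;
	}

	static inline void example_leave_lazy_mmu_mode(void)
	{
		/* flush the queued updates here (arch specific), then allow preemption */
		preempt_enable();
	}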

Patch 2 could still be fine, except for the -stable and Fixes tags, and it
does not need to target 6.15-rcX. Do you want me to repost it?

Thanks!
Andrew Morton April 10, 2025, 10:47 p.m. UTC | #2
On Thu, 10 Apr 2025 16:50:33 +0200 Alexander Gordeev <agordeev@linux.ibm.com> wrote:

> On Tue, Apr 08, 2025 at 06:07:32PM +0200, Alexander Gordeev wrote:
> 
> Hi Andrew,
> 
> > The lazy MMU mode can only be entered and left under the protection
> > of the page table locks for all page tables which may be modified.
> 
> Heiko Carstens noticed that the above claim is not valid, since
> v6.15-rc1 commit 691ee97e1a9d ("mm: fix lazy mmu docs and usage"),
> which restates it to:
> 
> "In the general case, no lock is guaranteed to be held between entry and exit
> of the lazy mode. So the implementation must assume preemption may be enabled"
> 
> That effectively invalidates this patch, so it needs to be dropped.
> 
> Patch 2 could still be fine, except for the -stable and Fixes tags, and it
> does not need to target 6.15-rcX. Do you want me to repost it?

I dropped the whole series - let's start again.

Patch

diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index edfa77959474..6531a7aa8562 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -308,14 +308,14 @@  static int kasan_populate_vmalloc_pte(pte_t *ptep, unsigned long addr,
 	__memset((void *)page, KASAN_VMALLOC_INVALID, PAGE_SIZE);
 	pte = pfn_pte(PFN_DOWN(__pa(page)), PAGE_KERNEL);
 
-	spin_lock(&init_mm.page_table_lock);
 	if (likely(pte_none(ptep_get(ptep)))) {
 		set_pte_at(&init_mm, addr, ptep, pte);
 		page = 0;
 	}
-	spin_unlock(&init_mm.page_table_lock);
+
 	if (page)
 		free_page(page);
+
 	return 0;
 }
 
@@ -401,13 +401,10 @@  static int kasan_depopulate_vmalloc_pte(pte_t *ptep, unsigned long addr,
 
 	page = (unsigned long)__va(pte_pfn(ptep_get(ptep)) << PAGE_SHIFT);
 
-	spin_lock(&init_mm.page_table_lock);
-
 	if (likely(!pte_none(ptep_get(ptep)))) {
 		pte_clear(&init_mm, addr, ptep);
 		free_page(page);
 	}
-	spin_unlock(&init_mm.page_table_lock);
 
 	return 0;
 }
diff --git a/mm/memory.c b/mm/memory.c
index f0201c8ec1ce..1f3727104e99 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2926,6 +2926,7 @@  static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
 			pte = pte_offset_kernel(pmd, addr);
 		if (!pte)
 			return err;
+		spin_lock(&init_mm.page_table_lock);
 	} else {
 		if (create)
 			pte = pte_alloc_map_lock(mm, pmd, addr, &ptl);
@@ -2951,7 +2952,9 @@  static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
 
 	arch_leave_lazy_mmu_mode();
 
-	if (mm != &init_mm)
+	if (mm == &init_mm)
+		spin_unlock(&init_mm.page_table_lock);
+	else
 		pte_unmap_unlock(mapped_pte, ptl);
 
 	*mask |= PGTBL_PTE_MODIFIED;