[5/6] tlb: mmu_gather: Introduce tlb_gather_mmu_fullmm()

Message ID: 20201120143557.6715-6-will@kernel.org
State: New, archived
Series: tlb: Fix access and (soft-)dirty bit management

Commit Message

Will Deacon Nov. 20, 2020, 2:35 p.m. UTC
Passing the range '0, -1' to tlb_gather_mmu() sets the 'fullmm' flag,
which indicates that the mm_struct being operated on is going away. In
this case, some architectures (such as arm64) can elide TLB invalidation
by ensuring that the TLB tag (ASID) associated with this mm is not
immediately reclaimed. Although this behaviour is documented in
asm-generic/tlb.h, it's subtle and easily missed. Consequently, the
/proc walker for manipulating the young and soft-dirty bits passes this
range regardless.
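
In rough terms, an architecture's tlb_flush() can elide the work when
'fullmm' is set, along these lines (a simplified, hypothetical sketch,
not the actual arm64 implementation):

	static inline void tlb_flush(struct mmu_gather *tlb)
	{
		/*
		 * The mm is going away and its ASID will not be reused
		 * before a full TLB invalidation, so skip the flush.
		 */
		if (tlb->fullmm)
			return;

		flush_tlb_mm(tlb->mm);
	}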

Introduce tlb_gather_mmu_fullmm() to make it clearer that this is for the
entire mm and WARN() if tlb_gather_mmu() is called with an 'end' address
greated than TASK_SIZE.
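
At the two existing fullmm call sites, exit_mmap() and clear_refs_write(),
the conversion is then simply (see the hunks below):

	-	tlb_gather_mmu(&tlb, mm, 0, -1);
	+	tlb_gather_mmu_fullmm(&tlb, mm);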

Signed-off-by: Will Deacon <will@kernel.org>
---
 fs/proc/task_mmu.c        |  2 +-
 include/asm-generic/tlb.h |  6 ++++--
 include/linux/mm_types.h  |  1 +
 mm/mmap.c                 |  2 +-
 mm/mmu_gather.c           | 16 ++++++++++++++--
 5 files changed, 21 insertions(+), 6 deletions(-)

Comments

Linus Torvalds Nov. 20, 2020, 5:22 p.m. UTC | #1
On Fri, Nov 20, 2020 at 6:36 AM Will Deacon <will@kernel.org> wrote:
>
> Introduce tlb_gather_mmu_fullmm() to make it clearer that this is for the
> entire mm and WARN() if tlb_gather_mmu() is called with an 'end' address
> greated than TASK_SIZE.

Ack (but with a spello note - "greated").

          Linus
Linus Torvalds Nov. 20, 2020, 5:31 p.m. UTC | #2
Oh - wait.

Not ack.

Not because this is wrong, but because I think you should remove the
start/end arguments here too.

The _only_ thing they were used for was that "fullmm" flag, afaik. So
now they no longer make sense.

Hmm?

               Linus

On Fri, Nov 20, 2020 at 9:22 AM Linus Torvalds
<torvalds@linux-foundation.org> wrote:
>
> On Fri, Nov 20, 2020 at 6:36 AM Will Deacon <will@kernel.org> wrote:
> >
> > Introduce tlb_gather_mmu_fullmm() to make it clearer that this is for the
> > entire mm and WARN() if tlb_gather_mmu() is called with an 'end' address
> > greated than TASK_SIZE.
>
> Ack (but with a spello note - "greated").
>
>           Linus
Will Deacon Nov. 23, 2020, 4:48 p.m. UTC | #3
On Fri, Nov 20, 2020 at 09:31:09AM -0800, Linus Torvalds wrote:
> Oh - wait.
> 
> Not ack.
> 
> Not because this is wrong, but because I think you should remove the
> start/end arguments here too.
> 
> The _only_ thing they were used for was that "fullmm" flag, afaik. So
> now they no longer make sense.
> 
> Hmm?

Oh nice, well spotted. I'll drop them for v2.
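
That is, the prototype would collapse to something like (a sketch only;
I'll confirm the exact form in v2):

	void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm);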

Cheers,

Will
Will Deacon Nov. 23, 2020, 5:51 p.m. UTC | #4
Hmm, this is interesting but my x86-fu is a bit lacking:

On Sun, Nov 22, 2020 at 11:11:58PM +0800, kernel test robot wrote:
> commit: e242a269fa4b7aee0b157ce5b1d7d12179fc3c44 ("[PATCH 5/6] tlb: mmu_gather: Introduce tlb_gather_mmu_fullmm()")
> url: https://github.com/0day-ci/linux/commits/Will-Deacon/tlb-Fix-access-and-soft-dirty-bit-management/20201120-223809
> base: https://git.kernel.org/cgit/linux/kernel/git/arm64/linux.git for-next/core

[...]

> [   14.182822] WARNING: CPU: 0 PID: 1 at mm/mmu_gather.c:293 tlb_gather_mmu+0x40/0x99

This fires because free_ldt_pgtables() initialises an mmu_gather with
an end address > TASK_SIZE. In other words, this code:

	/* From free_ldt_pgtables() in arch/x86/kernel/ldt.c */
	struct mmu_gather tlb;
	unsigned long start = LDT_BASE_ADDR;
	unsigned long end = LDT_END_ADDR;

	if (!boot_cpu_has(X86_FEATURE_PTI))
		return;

	/* start/end are kernel addresses, i.e. above TASK_SIZE */
	tlb_gather_mmu(&tlb, mm, start, end);

seems to be passing kernel addresses to tlb_gather_mmu(), which will cause
the range adjustment logic in __tlb_adjust_range() to round the base down
to TASK_SIZE afaict. At which point, I suspect the low-level invalidation
routine replaces the enormous range with a fullmm flush (see the check in
flush_tlb_mm_range()).

If that's the case (and I would appreciate some input from somebody who
knows what an LDT is), then I think the right answer is to replace this with
a call to tlb_gather_mmu_fullmm(), although I had never anticipated these
things working on kernel addresses, and I'm not sure whether that would do
the right kind of invalidation for x86 w/ PTI. A quick read of the code
suggests it should work out...
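
Concretely, something like this (a sketch only, assuming the rest of
free_ldt_pgtables() stays as-is and that a fullmm flush is the right
kind of invalidation for the PTI LDT mappings):

	/* Tear down the LDT page-table range as a full-mm flush. */
	tlb_gather_mmu_fullmm(&tlb, mm);
	free_pgd_range(&tlb, start, end, start, end);
	tlb_finish_mmu(&tlb);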

Will

Patch

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 3308292ee5c5..a76d339b5754 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1238,7 +1238,7 @@  static ssize_t clear_refs_write(struct file *file, const char __user *buf,
 			count = -EINTR;
 			goto out_mm;
 		}
-		tlb_gather_mmu(&tlb, mm, 0, -1);
+		tlb_gather_mmu_fullmm(&tlb, mm);
 		if (type == CLEAR_REFS_SOFT_DIRTY) {
 			for (vma = mm->mmap; vma; vma = vma->vm_next) {
 				if (!(vma->vm_flags & VM_SOFTDIRTY))
diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 6661ee1cff47..2c68a545ffa7 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -46,7 +46,9 @@ 
  *
  * The mmu_gather API consists of:
  *
- *  - tlb_gather_mmu() / tlb_finish_mmu(); start and finish a mmu_gather
+ *  - tlb_gather_mmu() / tlb_gather_mmu_fullmm() / tlb_finish_mmu()
+ *
+ *    start and finish a mmu_gather
  *
  *    Finish in particular will issue a (final) TLB invalidate and free
  *    all (remaining) queued pages.
@@ -91,7 +93,7 @@ 
  *
  *  - mmu_gather::fullmm
  *
- *    A flag set by tlb_gather_mmu() to indicate we're going to free
+ *    A flag set by tlb_gather_mmu_fullmm() to indicate we're going to free
  *    the entire mm; this allows a number of optimizations.
  *
  *    - We can ignore tlb_{start,end}_vma(); because we don't
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 7b90058a62be..42231729affe 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -585,6 +585,7 @@  static inline cpumask_t *mm_cpumask(struct mm_struct *mm)
 struct mmu_gather;
 extern void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
 				unsigned long start, unsigned long end);
+extern void tlb_gather_mmu_fullmm(struct mmu_gather *tlb, struct mm_struct *mm);
 extern void tlb_finish_mmu(struct mmu_gather *tlb);
 
 static inline void init_tlb_flush_pending(struct mm_struct *mm)
diff --git a/mm/mmap.c b/mm/mmap.c
index 6d94b2ee9c45..4b2809fbbd4a 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -3216,7 +3216,7 @@  void exit_mmap(struct mm_struct *mm)
 
 	lru_add_drain();
 	flush_cache_mm(mm);
-	tlb_gather_mmu(&tlb, mm, 0, -1);
+	tlb_gather_mmu_fullmm(&tlb, mm);
 	/* update_hiwater_rss(mm) here? but nobody should be looking */
 	/* Use -1 here to ensure all VMAs in the mm are unmapped */
 	unmap_vmas(&tlb, vma, 0, -1);
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index b0be5a7aa08f..87b48444e7e5 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -261,8 +261,8 @@  void tlb_flush_mmu(struct mmu_gather *tlb)
  * respectively when @mm is without users and we're going to destroy
  * the full address space (exit/execve).
  */
-void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
-			unsigned long start, unsigned long end)
+static void __tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
+			     unsigned long start, unsigned long end)
 {
 	tlb->mm = mm;
 
@@ -287,6 +287,18 @@  void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
 	inc_tlb_flush_pending(tlb->mm);
 }
 
+void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
+		    unsigned long start, unsigned long end)
+{
+	WARN_ON(end > TASK_SIZE);
+	__tlb_gather_mmu(tlb, mm, start, end);
+}
+
+void tlb_gather_mmu_fullmm(struct mmu_gather *tlb, struct mm_struct *mm)
+{
+	__tlb_gather_mmu(tlb, mm, 0, -1);
+}
+
 /**
  * tlb_finish_mmu - finish an mmu_gather structure
  * @tlb: the mmu_gather structure to finish