diff mbox series

[mm-stable,1/2] x86/mm: further clarify switch_mm_irqs_off() documentation

Message ID 20240222190911.1903054-1-yosryahmed@google.com (mailing list archive)
State New
Series [mm-stable,1/2] x86/mm: further clarify switch_mm_irqs_off() documentation

Commit Message

Yosry Ahmed Feb. 22, 2024, 7:09 p.m. UTC
Commit accf6b23d1e5a ("x86/mm: clarify "prev" usage in
switch_mm_irqs_off()") attempted to clarify x86's usage of the arguments
passed by generic code, specifically the "prev" argument that is unused
by x86. However, it could have done a better job with the comment above
switch_mm_irqs_off(). Rewrite this comment according to Dave Hansen's
suggestion.

Fixes: accf6b23d1e5 ("x86/mm: clarify "prev" usage in switch_mm_irqs_off()")
Suggested-by: Dave Hansen <dave.hansen@intel.com>
Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
---
 arch/x86/mm/tlb.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

Comments

Dave Hansen Feb. 22, 2024, 7:11 p.m. UTC | #1
On 2/22/24 11:09, Yosry Ahmed wrote:
> Commit accf6b23d1e5a ("x86/mm: clarify "prev" usage in
> switch_mm_irqs_off()") attempted to clarify x86's usage of the arguments
> passed by generic code, specifically the "prev" argument that is unused
> by x86. However, it could have done a better job with the comment above
> switch_mm_irqs_off(). Rewrite this comment according to Dave Hansen's
> suggestion.

Looks good, thanks for sending this!

Acked-by: Dave Hansen <dave.hansen@intel.com>

Patch

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index bf9605caf24f7..b67545baf6973 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -493,10 +493,10 @@  static inline void cr4_update_pce_mm(struct mm_struct *mm) { }
 #endif
 
 /*
- * The "prev" argument passed by the caller does not always match CR3. For
- * example, the scheduler passes in active_mm when switching from lazy TLB mode
- * to normal mode, but switch_mm_irqs_off() can be called from x86 code without
- * updating active_mm. Use cpu_tlbstate.loaded_mm instead.
+ * This optimizes when not actually switching mm's.  Some architectures use the
+ * 'unused' argument for this optimization, but x86 must use
+ * 'cpu_tlbstate.loaded_mm' instead because it does not always keep
+ * 'current->active_mm' up to date.
  */
 void switch_mm_irqs_off(struct mm_struct *unused, struct mm_struct *next,
 			struct task_struct *tsk)
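
The distinction the new comment draws can be illustrated with a toy userspace model. This is not kernel code: the names (loaded_mm, switch_mm_model, cr3_writes) are hypothetical stand-ins that only mirror the kernel's, and the logic is deliberately simplified. The point it sketches is why x86 compares "next" against its per-CPU notion of the loaded mm rather than trusting the caller's "prev", which can be stale when switch_mm_irqs_off() is reached without active_mm having been updated (e.g. from lazy TLB mode):

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-in for struct mm_struct; only identity matters here. */
struct mm_struct { int id; };

/*
 * Per-CPU source of truth for which page tables are loaded; models
 * cpu_tlbstate.loaded_mm. A real kernel tracks this per CPU.
 */
static struct mm_struct *loaded_mm;
static int cr3_writes; /* counts simulated page-table (CR3) switches */

/*
 * Models the shape of switch_mm_irqs_off(): the 'unused' (prev)
 * argument is ignored because the caller's idea of the previous mm
 * may not match what is actually loaded; loaded_mm is authoritative.
 */
static void switch_mm_model(struct mm_struct *unused,
                            struct mm_struct *next)
{
    (void)unused;
    if (next == loaded_mm)
        return;        /* already loaded: skip the expensive switch */
    loaded_mm = next;
    cr3_writes++;      /* models writing CR3 */
}
```

Had the model compared "next" against the caller-supplied "prev" instead, a caller passing a stale previous mm (while "next" already matches what is loaded) would trigger a spurious page-table switch; comparing against loaded_mm makes that case a no-op.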