[v10,00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90%

Message ID: 20240510065206.76078-1-byungchul@sk.com

Byungchul Park May 10, 2024, 6:51 a.m. UTC
Hi everyone,

While working with a tiered memory system, e.g. CXL memory, I have
been facing migration overhead, especially tlb shootdown on promotion
or demotion between different tiers.  Most tlb shootdowns on migration
through hinting faults can already be avoided thanks to Huang Ying's
work, commit 4d4b6d66db ("mm,unmap: avoid flushing tlb in batch if PTE
is inaccessible").  See the following link for more information:

https://lore.kernel.org/lkml/20231115025755.GA29979@system.software.com/

However, that work covers only migration through hinting faults.  It
would be much better to have a general mechanism that reduces the
number of tlb flushes, applicable to any unmap code that we normally
assume must be followed by a tlb flush.

I'm suggesting a new mechanism, LUF(Lazy Unmap Flush), that defers the
tlb flush for folios that have been unmapped and freed until they
eventually get allocated again.  It's safe for folios that had been
mapped read-only and were then unmapped, since the contents of the
folios don't change while they stay in pcp or buddy, so we can still
read the data correctly through the stale tlb entries.

A tlb flush can be deferred when folios get unmapped, as long as the
required flush is guaranteed to be performed before the folios
actually become used again, and only if none of the corresponding ptes
have write permission.  Otherwise, the system would get corrupted.

To achieve that:

   1. For folios that map only to non-writable tlb entries, prevent
      the tlb flush during unmapping and perform it just before the
      folios actually become used, out of buddy or pcp.

   2. When any non-writable pte changes to writable, e.g. through the
      fault handler, give up the luf mechanism and perform the
      required tlb flush right away.

   3. When a writable mapping is created, e.g. through mmap(), give up
      the luf mechanism and perform the required tlb flush right away.
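
To make the flow above concrete, here is a minimal user-space model of
the generation-number idea in C.  Everything here is hypothetical and
a sketch only; the series itself works on struct folio, pcp/buddy and
the arch tlbbatch data:

   #include <stdio.h>

   /* next_gen tags each deferred unmap; done_gen is the newest
    * generation already covered by an actual tlb flush.
    */
   static unsigned long next_gen = 1;
   static unsigned long done_gen;

   static void tlb_flush(void)
   {
           done_gen = next_gen - 1; /* covers all pending generations */
           printf("tlb flush, gens <= %lu covered\n", done_gen);
   }

   /* 1. Unmapping a folio mapped only read-only: no flush, just
    *    tag the folio with the current generation.
    */
   static unsigned long luf_unmap_readonly(void)
   {
           return next_gen++;
   }

   /* 2./3. A writable pte or mapping shows up: give up, flush now. */
   static void luf_give_up(void)
   {
           tlb_flush();
   }

   /* Before a freed folio leaves buddy or pcp and becomes used
    * again: flush only if its generation is still uncovered.
    */
   static void luf_prep_reuse(unsigned long ugen)
   {
           if (ugen > done_gen)
                   tlb_flush();
   }

   int main(void)
   {
           unsigned long a = luf_unmap_readonly(); /* deferred */
           unsigned long b = luf_unmap_readonly(); /* deferred */

           luf_give_up();     /* e.g. mmap(): one flush covers both */
           luf_prep_reuse(a); /* already covered: no extra flush */
           luf_prep_reuse(b); /* already covered: no extra flush */
           return 0;
   }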

No matter what type of workload is used for the evaluation, the result
should be positive thanks to the unconditional reduction of tlb
flushes, tlb misses and interrupts.  For the test, I picked one of the
most popular and heavy workloads, llama.cpp, an LLM(Large Language
Model) inference engine.

The result depends on memory latency and on how often reclaim runs,
which determine the tlb miss overhead and how many times unmapping
happens.  On my system, the result shows:

   1. tlb flushes are reduced by about 95%.
   2. tlb misses (itlb) are reduced by about 80%.
   3. tlb misses (dtlb store) are reduced by about 57%.
   4. tlb misses (dtlb load) are reduced by about 24%.
   5. tlb shootdown interrupts are reduced by about 95%.
   6. The test program runtime is reduced by about 5%.

The test environment and the results are as follows:

   Machine: bare metal, x86_64, Intel(R) Xeon(R) Gold 6430
   CPU: 1 socket, 64 cores, with hyperthreading on
   Numa: 2 nodes (64 CPUs with 42GB DRAM; no CPUs with 98GB CXL expander)
   Config: swap off, numa balancing tiering on, demotion enabled

   The test set:

      llama.cpp/main -m $(70G_model1) -p "who are you?" -s 1 -t 15 -n 20 &
      llama.cpp/main -m $(70G_model2) -p "who are you?" -s 1 -t 15 -n 20 &
      llama.cpp/main -m $(70G_model3) -p "who are you?" -s 1 -t 15 -n 20 &
      wait

      where -t: nr of threads, -s: seed used to make the runtime stable,
      -n: nr of tokens that determines the runtime, -p: prompt to ask,
      -m: LLM model to use.

   Run the test set 10 times in succession, with caches dropped before
   every run via 'echo 3 > /proc/sys/vm/drop_caches'.  Each inference
   prints its runtime at the end.

   1. Runtime from the output of llama.cpp:

   BEFORE
   ------
   llama_print_timings:       total time = 1002461.95 ms /    24 tokens
   llama_print_timings:       total time = 1044978.38 ms /    24 tokens
   llama_print_timings:       total time = 1000653.09 ms /    24 tokens
   llama_print_timings:       total time = 1047104.80 ms /    24 tokens
   llama_print_timings:       total time = 1069430.36 ms /    24 tokens
   llama_print_timings:       total time = 1068201.16 ms /    24 tokens
   llama_print_timings:       total time = 1078092.59 ms /    24 tokens
   llama_print_timings:       total time = 1073200.45 ms /    24 tokens
   llama_print_timings:       total time = 1067136.00 ms /    24 tokens
   llama_print_timings:       total time = 1076442.56 ms /    24 tokens
   llama_print_timings:       total time = 1004142.64 ms /    24 tokens
   llama_print_timings:       total time = 1042942.65 ms /    24 tokens
   llama_print_timings:       total time =  999933.76 ms /    24 tokens
   llama_print_timings:       total time = 1046548.83 ms /    24 tokens
   llama_print_timings:       total time = 1068671.48 ms /    24 tokens
   llama_print_timings:       total time = 1068285.76 ms /    24 tokens
   llama_print_timings:       total time = 1077789.63 ms /    24 tokens
   llama_print_timings:       total time = 1071558.93 ms /    24 tokens
   llama_print_timings:       total time = 1066181.55 ms /    24 tokens
   llama_print_timings:       total time = 1076767.53 ms /    24 tokens
   llama_print_timings:       total time = 1004065.63 ms /    24 tokens
   llama_print_timings:       total time = 1044522.13 ms /    24 tokens
   llama_print_timings:       total time =  999725.33 ms /    24 tokens
   llama_print_timings:       total time = 1047510.77 ms /    24 tokens
   llama_print_timings:       total time = 1068010.27 ms /    24 tokens
   llama_print_timings:       total time = 1068999.31 ms /    24 tokens
   llama_print_timings:       total time = 1077648.05 ms /    24 tokens
   llama_print_timings:       total time = 1071378.96 ms /    24 tokens
   llama_print_timings:       total time = 1066326.32 ms /    24 tokens
   llama_print_timings:       total time = 1077088.92 ms /    24 tokens

   AFTER
   -----
   llama_print_timings:       total time =  988522.03 ms /    24 tokens
   llama_print_timings:       total time =  997204.52 ms /    24 tokens
   llama_print_timings:       total time =  996605.86 ms /    24 tokens
   llama_print_timings:       total time =  991985.50 ms /    24 tokens
   llama_print_timings:       total time = 1035143.31 ms /    24 tokens
   llama_print_timings:       total time =  993660.18 ms /    24 tokens
   llama_print_timings:       total time =  983082.14 ms /    24 tokens
   llama_print_timings:       total time =  990431.36 ms /    24 tokens
   llama_print_timings:       total time =  992707.09 ms /    24 tokens
   llama_print_timings:       total time =  992673.27 ms /    24 tokens
   llama_print_timings:       total time =  989285.43 ms /    24 tokens
   llama_print_timings:       total time =  996710.06 ms /    24 tokens
   llama_print_timings:       total time =  996534.64 ms /    24 tokens
   llama_print_timings:       total time =  991344.17 ms /    24 tokens
   llama_print_timings:       total time = 1035210.84 ms /    24 tokens
   llama_print_timings:       total time =  994714.13 ms /    24 tokens
   llama_print_timings:       total time =  984184.15 ms /    24 tokens
   llama_print_timings:       total time =  990909.45 ms /    24 tokens
   llama_print_timings:       total time =  991881.48 ms /    24 tokens
   llama_print_timings:       total time =  993918.03 ms /    24 tokens
   llama_print_timings:       total time =  990061.34 ms /    24 tokens
   llama_print_timings:       total time =  998076.69 ms /    24 tokens
   llama_print_timings:       total time =  997082.59 ms /    24 tokens
   llama_print_timings:       total time =  990677.58 ms /    24 tokens
   llama_print_timings:       total time = 1036054.94 ms /    24 tokens
   llama_print_timings:       total time =  994125.93 ms /    24 tokens
   llama_print_timings:       total time =  982467.01 ms /    24 tokens
   llama_print_timings:       total time =  990191.60 ms /    24 tokens
   llama_print_timings:       total time =  993319.24 ms /    24 tokens
   llama_print_timings:       total time =  992540.57 ms /    24 tokens

   2. tlb shootdowns from 'cat /proc/interrupts':

   BEFORE
   ------
   TLB:
   125553646  141418810  161932620  176853972  186655697  190399283
   192143823  196414038  192872439  193313658  193395617  192521416
   190788161  195067598  198016061  193607347  194293972  190786732
   191545637  194856822  191801931  189634535  190399803  196365922
   195268398  190115840  188050050  193194908  195317617  190820190
   190164820  185556071  226797214  229592631  216112464  209909495
   205575979  205950252  204948111  197999795  198892232  205287952
   199344631  195015158  195869844  198858745  195692876  200961904
   203463252  205921722  199850838  206145986  199613202  199961345
   200129577  203020521  207873649  203697671  197093386  204243803
   205993323  200934664  204193128  194435376  TLB shootdowns

   AFTER
   -----
   TLB:
     5648092    6610142    7032849    7882308    8088518    8352310
     8656536    8705136    8647426    8905583    8985408    8704522
     8884344    9026261    8929974    8869066    8877575    8810096
     8770984    8754503    8801694    8865925    8787524    8656432
     8755912    8682034    8773935    8832925    8797997    8515777
     8481240    8891258   10595243   10285973    9756935    9573681
     9398968    9069244    9242984    8899009    9310690    9029095
     9069758    9105825    9092703    9270202    9460287    9258546
     9180415    9232723    9270611    9175020    9490420    9360316
     9420818    9057663    9525631    9310152    9152242    8654483
     9181804    9050847    8919916    8883856  TLB shootdowns

   3. tlb numbers from 'perf stat' per test set:

   BEFORE
   ------
   3163679332	dTLB-load-misses
   2017751856	dTLB-store-misses
   327092903	iTLB-load-misses
   1357543886	tlb:tlb_flush

   AFTER
   -----
   2394694609	dTLB-load-misses
   861144167	dTLB-store-misses
   64055579	iTLB-load-misses
   69175002	tlb:tlb_flush

---

Changes from v9:

	1. Expand the candidates the mechanism applies to:

	   BEFORE - The source folios of any type of migration.
	   AFTER  - Any folios that have been unmapped and freed.

	2. Change the workload used for the test:

	   BEFORE - XSBench
	   AFTER  - llama.cpp (one of the most popular real workloads)

	3. Change the test environment:

	   BEFORE - qemu machine, too small DRAM(1GB), large remote mem
	   AFTER  - bare metal, real CXL memory, practical memory size

	4. Rename the mechanism from MIGRC(Migration Read Copy) to
	   LUF(Lazy Unmap Flush) to reflect that the current version of
	   the mechanism can be applied not only to unmapping during
	   migration but to any unmap code, e.g. the unmap in
	   shrink_folio_list().

	5. Fix a build error for riscv. (feedback from the kernel test
	   bot)

	6. Supplement the commit messages to describe what this
	   mechanism is for, especially in the patches for arch code.
	   (feedback from Thomas Gleixner)

	7. Clean up some trivial things.

Changes from v8:

	1. Rebase on akpm/mm.git mm-unstable as of April 18, 2024.
	2. Supplement comments and commit messages.
	3. Change the candidates the migrc mechanism applies to:

	   BEFORE - The source folios at demotion and promotion.
	   AFTER  - The source folios of any type of migration.

	4. Change how the migrc mechanism works:

	   BEFORE - Reduce tlb flushes by deferring folio_free() for
	            source folios during demotion and promotion.
	   AFTER  - Reduce tlb flushes by deferring the tlb flush until
	            the folios actually become used, out of pcp or
	            buddy.  The current version of migrc does *not*
	            defer calling folio_free() but lets it proceed
	            exactly as in the vanilla kernel, with the folios
	            marked as 'need a tlb flush'.  The flush is then
	            handled when the page leaves pcp or buddy, so vm
	            stats, e.g. free pages, are not affected.
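
	   A side note on the generation-number approach: the 'ugen'
	   the series delivers down to pcp or buddy is a counter of
	   limited width, so it eventually wraps, and comparisons
	   between generations have to be wraparound-safe.  Below is a
	   sketch of the usual trick; the helper name, type width and
	   exact form in the series may differ:

	   #include <stdbool.h>

	   /* Wraparound-safe "is a older than b?", the same
	    * signed-distance trick the kernel's time_before() uses
	    * for jiffies.
	    */
	   static inline bool ugen_before(unsigned short a, unsigned short b)
	   {
	           return (short)(a - b) < 0;
	   }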

Changes from v7:

	1. Rewrite the cover letter to explain what the 'migrc'
	   mechanism is. (feedback from Andrew Morton)
	2. Supplement the commit message of the patch 'mm: Add APIs to
	   free a folio directly to the buddy bypassing pcp'.
	   (feedback from Andrew Morton)

Changes from v6:

	1. Fix build errors in case CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
	   is disabled, by moving the migrc_flush_{start,end}() calls
	   from arch code to try_to_unmap_flush() in mm/rmap.c.

Changes from v5:

	1. Fix build errors in case CONFIG_MIGRATION is disabled or
	   CONFIG_HWPOISON_INJECT is built as a module. (feedback from
	   the kernel test bot and Raymond Jay Golo)
	2. Organize the migrc code with two kconfigs, CONFIG_MIGRATION
	   and CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH.

Changes from v4:

	1. Rebase on v6.7.
	2. Fix build errors on arm64, which does nothing for batched
	   tlb flush but has CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH.
	   (reported by the kernel test robot)
	3. Don't use any page flag.  The system gives up the migrc
	   mechanism more often as a result, but that's okay; the
	   final improvement is still good enough.
	4. Instead, optimize the full tlb flush (arch_tlbbatch_flush())
	   by excluding redundant CPUs from the flush.

Changes from v3:

	1. Don't use the kconfig CONFIG_MIGRC, and remove the sysctl
	   knob migrc_enable. (feedback from Nadav)
	2. Remove the optimization that skips CPUs which have already
	   performed the needed tlb flushes for whatever reason,
	   because I can't tell the performance difference between
	   with and without it. (feedback from Nadav)
	3. Minimize arch-specific code.  While at it, move all the
	   migrc declarations and inline functions from
	   include/linux/mm.h to mm/internal.h. (feedback from Dave
	   Hansen, Nadav)
	4. Separate the part that pauses migrc when the system is
	   under high memory pressure into another patch. (feedback
	   from Nadav)
	5. Rename:
	      a. arch_tlbbatch_clean() to arch_tlbbatch_clear(),
	      b. tlb_ubc_nowr to tlb_ubc_ro,
	      c. migrc_try_flush_free_folios() to migrc_flush_free_folios(),
	      d. migrc_stop to migrc_pause.
	   (feedback from Nadav)
	6. Use the ->lru list_head instead of introducing a new
	   llist_head. (feedback from Nadav)
	7. Use non-atomic page-flag operations where it's safe.
	   (feedback from Nadav)
	8. Use the stack instead of keeping a pointer to a 'struct
	   migrc_req' in struct task, since it is only manipulated
	   locally. (feedback from Nadav)
	9. Replace a lot of simple functions with inline functions
	   placed in a header, mm/internal.h. (feedback from Nadav)
	10. Add sufficient additional comments. (feedback from Nadav)
	11. Remove a lot of wrapper functions. (feedback from Nadav)

Changes from RFC v2:

	1. Remove the additional field in struct page.  To do that,
	   union migrc's list with the lru field and add a page flag.
	   I know a page flag is something we don't like to add, but
	   there was no choice because migrc has to distinguish folios
	   under its control from others.  Instead, I restrict migrc
	   to 64-bit systems to mitigate the concern.
	2. Remove the meaningless internal object allocator that I had
	   introduced to minimize impact on the system; a ton of tests
	   showed it made no difference.
	3. Stop migrc from working when the system is under high
	   memory pressure, e.g. about to perform direct reclaim.
	   Under conditions where the swap mechanism is heavily used,
	   I found the system suffered regressions without this
	   control.
	4. Exclude folios with pte_dirty() == true from migrc's
	   interest so that migrc can work more simply.
	5. Combine several tightly coupled patches into one.
	6. Add sufficient comments for better review.
	7. Manage migrc's requests per node (instead of globally).
	8. Add the tlb miss improvement to the commit message.
	9. Test with more CPUs (4 -> 16) to see a bigger improvement.

Changes from RFC:

	1. Fix a bug triggered when a destination folio of a previous
	   migration becomes a source folio of the next migration
	   before it has been handled properly, so that the folio can
	   take part in another migration.  There was an inconsistency
	   in the folio's state; fixed it.
	2. Split the patch set into more pieces so that folks can
	   review it better. (Feedback from Nadav Amit)
	3. Fix a wrong usage of a barrier, e.g. smp_mb__after_atomic().
	   (Feedback from Nadav Amit)
	4. Tried to add sufficient comments to explain the patch set
	   better. (Feedback from Nadav Amit)

Byungchul Park (12):
  x86/tlb: add APIs manipulating tlb batch's arch data
  arm64: tlbflush: add APIs manipulating tlb batch's arch data
  riscv, tlb: add APIs manipulating tlb batch's arch data
  x86/tlb, riscv/tlb, mm/rmap: separate arch_tlbbatch_clear() out of
    arch_tlbbatch_flush()
  mm: buddy: make room for a new variable, ugen, in struct page
  mm: add folio_put_ugen() to deliver unmap generation number to pcp or
    buddy
  mm: add a parameter, unmap generation number, to free_unref_folios()
  mm/rmap: recognize read-only tlb entries during batched tlb flush
  mm: implement LUF(Lazy Unmap Flush) defering tlb flush when folios get
    unmapped
  mm: separate move/undo parts from migrate_pages_batch()
  mm, migrate: apply luf mechanism to unmapping during migration
  mm, vmscan: apply luf mechanism to unmapping during folio reclaim

 arch/arm64/include/asm/tlbflush.h |  18 ++
 arch/riscv/include/asm/tlbflush.h |  21 ++
 arch/riscv/mm/tlbflush.c          |   1 -
 arch/x86/include/asm/tlbflush.h   |  18 ++
 arch/x86/mm/tlb.c                 |   2 -
 include/linux/mm.h                |  22 ++
 include/linux/mm_types.h          |  40 +++-
 include/linux/rmap.h              |   7 +-
 include/linux/sched.h             |  11 +
 mm/compaction.c                   |  10 +
 mm/internal.h                     | 115 +++++++++-
 mm/memory.c                       |   8 +
 mm/migrate.c                      | 184 ++++++++++------
 mm/mmap.c                         |   8 +
 mm/page_alloc.c                   | 157 +++++++++++---
 mm/page_isolation.c               |   6 +
 mm/page_reporting.c               |  10 +
 mm/rmap.c                         | 345 +++++++++++++++++++++++++++++-
 mm/swap.c                         |  18 +-
 mm/vmscan.c                       |  29 ++-
 20 files changed, 904 insertions(+), 126 deletions(-)


base-commit: f52bcd4a9f6058704a6f6b6b50418f579defd4fe

Comments

Huang, Ying May 11, 2024, 6:54 a.m. UTC | #1
Byungchul Park <byungchul@sk.com> writes:

> [snip]
>
> No matter what type of workload is used for the evaluation, the
> result should be positive thanks to the unconditional reduction of
> tlb flushes, tlb misses and interrupts.

Are there any downsides of the optimization?  Will it cause regression
for workloads with almost no read-only mappings?  Will it cause
regression for page allocation?

> For the test, I picked one of the most popular and heavy workloads,
> llama.cpp, an LLM(Large Language Model) inference engine.

IIUC, llama.cpp is a workload with huge read-only mapping.

> [snip]

--
Best Regards,
Huang, Ying
Huang, Ying May 11, 2024, 7:15 a.m. UTC | #2
Byungchul Park <byungchul@sk.com> writes:

> Hi everyone,
>
> While working with a tiered memory system, e.g. CXL memory, I have
> been facing migration overhead, especially tlb shootdown on promotion
> or demotion between different tiers.  Most tlb shootdowns on migration
> through hinting faults can already be avoided thanks to Huang Ying's
> work, commit 4d4b6d66db ("mm,unmap: avoid flushing tlb in batch if PTE
> is inaccessible").  See the following link for more information:
>
> https://lore.kernel.org/lkml/20231115025755.GA29979@system.software.com/

And I'm still interested in the performance impact of commit
7e12beb8ca2a ("migrate_pages: batch flushing TLB").  In the email above,
you said that v6.5-rc5 with 7e12beb8ca2a reverted performs better than
v6.5-rc5.  Can you provide more details?  For example, the number of TLB
flush IPIs for the two kernels?

I should have followed up on the email above.  Sorry about that.
Anyway, we should try to fix the issue with that commit too.

--
Best Regards,
Huang, Ying

[snip]
Byungchul Park May 13, 2024, 1:41 a.m. UTC | #3
On Sat, May 11, 2024 at 02:54:51PM +0800, Huang, Ying wrote:
> Byungchul Park <byungchul@sk.com> writes:
> 
> > [snip]
> >
> > No matter what type of workload is used for the evaluation, the
> > result should be positive thanks to the unconditional reduction of
> > tlb flushes, tlb misses and interrupts.
> 
> Are there any downsides of the optimization?  Will it cause regression
> for workloads with almost no read-only mappings?  Will it cause

IMHO, no.  LUF does almost nothing for writably mapped folios.

> regression for page allocation?

A TLB flush might be added in prep_new_page() if one is pending;
however, that flush would have had to be performed anyway, so it's not
additional overhead.

The TLB flush can even be skipped thanks to the batched work.
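
For illustration, here is a toy user-space model (hypothetical, not
kernel code) of why the per-allocation flush is usually skipped: one
batched flush covers every generation pending at that moment.

   #include <stdio.h>

   int main(void)
   {
           unsigned long next_gen = 1, done_gen = 0, flushes = 0;
           unsigned long ugen[1000];

           for (int i = 0; i < 1000; i++)
                   ugen[i] = next_gen++;    /* lazy unmaps, no flush */

           done_gen = next_gen - 1;         /* one batched flush */
           flushes++;

           for (int i = 0; i < 1000; i++)   /* folios leave buddy/pcp */
                   if (ugen[i] > done_gen)
                           flushes++;       /* would flush here */

           printf("%lu flush(es) for 1000 lazy unmaps\n", flushes);
           return 0;
   }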

> > For the test, I picked one of the most popular and heavy workloads,
> > llama.cpp, an LLM(Large Language Model) inference engine.
> 
> IIUC, llama.cpp is a workload with huge read-only mapping.

Right.  LUF works on read-only mappings, so the more read-only mappings
are used, the better LUF works.  Fortunately, workloads with huge
read-only mappings, which are typical for lightweight inference
engines, are quite popular these days in the era of LLMs.

	Byungchul

> > The result would depend on memory latency and how often reclaim runs,
> > which implies tlb miss overhead and how many times unmapping happens.
> > In my system, the result shows:
> >
> >    1. tlb flushes are reduced about 95%.
> >    2. tlb misses(itlb) are reduced about 80%.
> >    3. tlb misses(dtlb store) are reduced about 57%.
> >    4. tlb misses(dtlb load) are reduced about 24%.
> >    5. tlb shootdown interrupts are reduced about 95%.
> >    6. The test program runtime is reduced about 5%.
> >
> > The test environment and the result is like:
> >
> >    Machine: bare metal, x86_64, Intel(R) Xeon(R) Gold 6430
> >    CPU: 1 socket 64 core with hyper thread on
> >    Numa: 2 nodes (64 CPUs DRAM 42GB, no CPUs CXL expander 98GB)
> >    Config: swap off, numa balancing tiering on, demotion enabled
> >
> >    The test set:
> >
> >       llama.cpp/main -m $(70G_model1) -p "who are you?" -s 1 -t 15 -n 20 &
> >       llama.cpp/main -m $(70G_model2) -p "who are you?" -s 1 -t 15 -n 20 &
> >       llama.cpp/main -m $(70G_model3) -p "who are you?" -s 1 -t 15 -n 20 &
> >       wait
> >
> >       where -t: nr of threads, -s: seed used to make the runtime stable,
> >       -n: nr of tokens that determines the runtime, -p: prompt to ask,
> >       -m: LLM model to use.
> >
> >    Run the test set 10 times successively with caches dropped every run
> >    via 'echo 3 > /proc/sys/vm/drop_caches'.  Each inference prints its
> >    runtime at the end of each.
> >
> >    1. Runtime from the output of llama.cpp:
> >
> >    BEFORE
> >    ------
> >    llama_print_timings:       total time = 1002461.95 ms /    24 tokens
> >    llama_print_timings:       total time = 1044978.38 ms /    24 tokens
> >    llama_print_timings:       total time = 1000653.09 ms /    24 tokens
> >    llama_print_timings:       total time = 1047104.80 ms /    24 tokens
> >    llama_print_timings:       total time = 1069430.36 ms /    24 tokens
> >    llama_print_timings:       total time = 1068201.16 ms /    24 tokens
> >    llama_print_timings:       total time = 1078092.59 ms /    24 tokens
> >    llama_print_timings:       total time = 1073200.45 ms /    24 tokens
> >    llama_print_timings:       total time = 1067136.00 ms /    24 tokens
> >    llama_print_timings:       total time = 1076442.56 ms /    24 tokens
> >    llama_print_timings:       total time = 1004142.64 ms /    24 tokens
> >    llama_print_timings:       total time = 1042942.65 ms /    24 tokens
> >    llama_print_timings:       total time =  999933.76 ms /    24 tokens
> >    llama_print_timings:       total time = 1046548.83 ms /    24 tokens
> >    llama_print_timings:       total time = 1068671.48 ms /    24 tokens
> >    llama_print_timings:       total time = 1068285.76 ms /    24 tokens
> >    llama_print_timings:       total time = 1077789.63 ms /    24 tokens
> >    llama_print_timings:       total time = 1071558.93 ms /    24 tokens
> >    llama_print_timings:       total time = 1066181.55 ms /    24 tokens
> >    llama_print_timings:       total time = 1076767.53 ms /    24 tokens
> >    llama_print_timings:       total time = 1004065.63 ms /    24 tokens
> >    llama_print_timings:       total time = 1044522.13 ms /    24 tokens
> >    llama_print_timings:       total time =  999725.33 ms /    24 tokens
> >    llama_print_timings:       total time = 1047510.77 ms /    24 tokens
> >    llama_print_timings:       total time = 1068010.27 ms /    24 tokens
> >    llama_print_timings:       total time = 1068999.31 ms /    24 tokens
> >    llama_print_timings:       total time = 1077648.05 ms /    24 tokens
> >    llama_print_timings:       total time = 1071378.96 ms /    24 tokens
> >    llama_print_timings:       total time = 1066326.32 ms /    24 tokens
> >    llama_print_timings:       total time = 1077088.92 ms /    24 tokens
> >
> >    AFTER
> >    -----
> >    llama_print_timings:       total time =  988522.03 ms /    24 tokens
> >    llama_print_timings:       total time =  997204.52 ms /    24 tokens
> >    llama_print_timings:       total time =  996605.86 ms /    24 tokens
> >    llama_print_timings:       total time =  991985.50 ms /    24 tokens
> >    llama_print_timings:       total time = 1035143.31 ms /    24 tokens
> >    llama_print_timings:       total time =  993660.18 ms /    24 tokens
> >    llama_print_timings:       total time =  983082.14 ms /    24 tokens
> >    llama_print_timings:       total time =  990431.36 ms /    24 tokens
> >    llama_print_timings:       total time =  992707.09 ms /    24 tokens
> >    llama_print_timings:       total time =  992673.27 ms /    24 tokens
> >    llama_print_timings:       total time =  989285.43 ms /    24 tokens
> >    llama_print_timings:       total time =  996710.06 ms /    24 tokens
> >    llama_print_timings:       total time =  996534.64 ms /    24 tokens
> >    llama_print_timings:       total time =  991344.17 ms /    24 tokens
> >    llama_print_timings:       total time = 1035210.84 ms /    24 tokens
> >    llama_print_timings:       total time =  994714.13 ms /    24 tokens
> >    llama_print_timings:       total time =  984184.15 ms /    24 tokens
> >    llama_print_timings:       total time =  990909.45 ms /    24 tokens
> >    llama_print_timings:       total time =  991881.48 ms /    24 tokens
> >    llama_print_timings:       total time =  993918.03 ms /    24 tokens
> >    llama_print_timings:       total time =  990061.34 ms /    24 tokens
> >    llama_print_timings:       total time =  998076.69 ms /    24 tokens
> >    llama_print_timings:       total time =  997082.59 ms /    24 tokens
> >    llama_print_timings:       total time =  990677.58 ms /    24 tokens
> >    llama_print_timings:       total time = 1036054.94 ms /    24 tokens
> >    llama_print_timings:       total time =  994125.93 ms /    24 tokens
> >    llama_print_timings:       total time =  982467.01 ms /    24 tokens
> >    llama_print_timings:       total time =  990191.60 ms /    24 tokens
> >    llama_print_timings:       total time =  993319.24 ms /    24 tokens
> >    llama_print_timings:       total time =  992540.57 ms /    24 tokens
> >
> >    2. tlb shootdowns from 'cat /proc/interrupts':
> >
> >    BEFORE
> >    ------
> >    TLB:
> >    125553646  141418810  161932620  176853972  186655697  190399283
> >    192143823  196414038  192872439  193313658  193395617  192521416
> >    190788161  195067598  198016061  193607347  194293972  190786732
> >    191545637  194856822  191801931  189634535  190399803  196365922
> >    195268398  190115840  188050050  193194908  195317617  190820190
> >    190164820  185556071  226797214  229592631  216112464  209909495
> >    205575979  205950252  204948111  197999795  198892232  205287952
> >    199344631  195015158  195869844  198858745  195692876  200961904
> >    203463252  205921722  199850838  206145986  199613202  199961345
> >    200129577  203020521  207873649  203697671  197093386  204243803
> >    205993323  200934664  204193128  194435376  TLB shootdowns
> >
> >    AFTER
> >    -----
> >    TLB:
> >      5648092    6610142    7032849    7882308    8088518    8352310
> >      8656536    8705136    8647426    8905583    8985408    8704522
> >      8884344    9026261    8929974    8869066    8877575    8810096
> >      8770984    8754503    8801694    8865925    8787524    8656432
> >      8755912    8682034    8773935    8832925    8797997    8515777
> >      8481240    8891258   10595243   10285973    9756935    9573681
> >      9398968    9069244    9242984    8899009    9310690    9029095
> >      9069758    9105825    9092703    9270202    9460287    9258546
> >      9180415    9232723    9270611    9175020    9490420    9360316
> >      9420818    9057663    9525631    9310152    9152242    8654483
> >      9181804    9050847    8919916    8883856  TLB shootdowns
> >
> >    3. tlb numbers from 'perf stat' per test set:
> >
> >    BEFORE
> >    ------
> >    3163679332	dTLB-load-misses
> >    2017751856	dTLB-store-misses
> >    327092903	iTLB-load-misses
> >    1357543886	tlb:tlb_flush
> >
> >    AFTER
> >    -----
> >    2394694609	dTLB-load-misses
> >    861144167	dTLB-store-misses
> >    64055579	iTLB-load-misses
> >    69175002	tlb:tlb_flush
> >
> > ---
> >
> > Changes from v9:
> >
> > 	1. Expand the candidate to apply this mechanism:
> >
> > 	   BEFORE - The souce folios at any type of migration.
> > 	   AFTER  - Any folios that have been unmapped and freed.
> >
> > 	2. Change the workload for test:
> >
> > 	   BEFORE - XSBench
> > 	   AFTER  - llama.cpp (one of the most popluar real workload)
> >
> > 	3. Change the test environment:
> >
> > 	   BEFORE - qemu machine, too small DRAM(1GB), large remote mem
> > 	   AFTER  - bare metal, real CXL memory, practical memory size
> >
> > 	4. Rename the mechanism from MIGRC(Migration Read Copy) to
> > 	   LUF(Lazy Unmap Flush) to reflect the current version of the
> > 	   mechanism can be applied not only to unmap during migration
> > 	   but any unmap code e.g. unmap in shrink_folio_list().
> >
> > 	5. Fix build error for riscv. (feedbacked by kernel test bot)
> >
> > 	6. Supplement commit messages to describe what this mechanism is
> > 	   for, especially in the patches for arch code. (feedbacked by
> > 	   Thomas Gleixner)
> >
> > 	7. Clean up some trivial things.
> >
> > Changes from v8:
> >
> > 	1. Rebase on akpm/mm.git mm-unstable as of April 18, 2024.
> > 	2. Supplement comments and commit message.
> > 	3. Change the candidate to apply migrc mechanism:
> >
> > 	   BEFORE - The source folios at demotion and promotion.
> > 	   AFTER  - The souce folios at any type of migration.
> >
> > 	4. Change how migrc mechanism works:
> >
> > 	   BEFORE - Reduce tlb flushes by deferring folio_free() for
> > 	            source folios during demotion and promotion.
> > 	   AFTER  - Reduce tlb flushes by deferring tlb flush until they
> > 	            actually become used, out of pcp or buddy. The
> > 		    current version of migrc does *not* defer calling
> > 	            folio_free() but let it go as it is as the same as
> > 		    vanilla kernel, with the folios marked kind of 'need
> > 		    to tlb flush'. And then handle the flush when the
> > 		    page exits from pcp or buddy so as to prevent
> > 		    changing vm stats e.g. free pages.
> >
> > Changes from v7:
> >
> > 	1. Rewrite cover letter to explain what 'migrc' mechasism is.
> > 	   (feedbacked by Andrew Morton)
> > 	2. Supplement the commit message of a patch 'mm: Add APIs to
> > 	   free a folio directly to the buddy bypassing pcp'.
> > 	   (feedbacked by Andrew Morton)
> >
> > Changes from v6:
> >
> > 	1. Fix build errors in case of
> > 	   CONFIG_ARCH_WANT_BATCHED_UNMAP_tlb_FLUSH disabled by moving
> > 	   migrc_flush_{start,end}() calls from arch code to
> > 	   try_to_unmap_flush() in mm/rmap.c.
> >
> > Changes from v5:
> >
> > 	1. Fix build errors in case of CONFIG_MIGRATION disabled or
> > 	   CONFIG_HWPOISON_INJECT moduled. (feedbacked by kernel test
> > 	   bot and Raymond Jay Golo)
> > 	2. Organize migrc code with two kconfigs, CONFIG_MIGRATION and
> > 	   CONFIG_ARCH_WANT_BATCHED_UNMAP_tlb_FLUSH.
> >
> > Changes from v4:
> >
> > 	1. Rebase on v6.7.
> > 	2. Fix build errors in arm64 that is doing nothing for tlb flush
> > 	   but has CONFIG_ARCH_WANT_BATCHED_UNMAP_tlb_FLUSH. (reported
> > 	   by kernel test robot)
> > 	3. Don't use any page flag. So the system would give up migrc
> > 	   mechanism more often but it's okay. The final improvement is
> > 	   good enough.
> > 	4. Instead, optimize full tlb flush(arch_tlbbatch_flush()) by
> > 	   avoiding redundant CPUs from tlb flush.
> >
> > Changes from v3:
> >
> > 	1. Don't use the kconfig, CONFIG_MIGRC, and remove sysctl knob,
> > 	   migrc_enable. (feedbacked by Nadav)
> > 	2. Remove the optimization skipping CPUs that have already
> > 	   performed tlb flushes needed by any reason when performing
> > 	   tlb flushes by migrc because I can't tell the performance
> > 	   difference between w/ the optimization and w/o that.
> > 	   (feedbacked by Nadav)
> > 	3. Minimize arch-specific code. While at it, move all the migrc
> >            declarations and inline functions from include/linux/mm.h to
> >            mm/internal.h (feedbacked by Dave Hansen, Nadav)
> > 	4. Separate a part making migrc paused when the system is in
> > 	   high memory pressure to another patch. (feedbacked by Nadav)
> > 	5. Rename:
> > 	      a. arch_tlbbatch_clean() to arch_tlbbatch_clear(),
> > 	      b. tlb_ubc_nowr to tlb_ubc_ro,
> > 	      c. migrc_try_flush_free_folios() to migrc_flush_free_folios(),
> > 	      d. migrc_stop to migrc_pause.
> > 	   (feedbacked by Nadav)
> > 	6. Use ->lru list_head instead of introducing a new llist_head.
> > 	   (feedbacked by Nadav)
> > 	7. Use non-atomic operations of page-flag when it's safe.
> > 	   (feedbacked by Nadav)
> > 	8. Use stack instead of keeping a pointer of 'struct migrc_req'
> > 	   in struct task, which is for manipulating it locally.
> > 	   (feedbacked by Nadav)
> > 	9. Replace a lot of simple functions to inline functions placed
> > 	   in a header, mm/internal.h. (feedbacked by Nadav)
> > 	10. Add additional sufficient comments. (feedbacked by Nadav)
> > 	11. Remove a lot of wrapper functions. (feedbacked by Nadav)
> >
> > Changes from RFC v2:
> >
> > 	1. Remove additional occupation in struct page. To do that,
> > 	   unioned with lru field for migrc's list and added a page
> > 	   flag. I know page flag is a thing that we don't like to add
> > 	   but no choice because migrc should distinguish folios under
> > 	   migrc's control from others. Instead, I force migrc to be
> > 	   used only on 64 bit system to mitigate you guys from getting
> > 	   angry.
> > 	2. Remove meaningless internal object allocator that I
> > 	   introduced to minimize impact onto the system. However, a ton
> > 	   of tests showed there was no difference.
> > 	3. Stop migrc from working when the system is in high memory
> > 	   pressure like about to perform direct reclaim. At the
> > 	   condition where the swap mechanism is heavily used, I found
> > 	   the system suffered from regression without this control.
> > 	4. Exclude folios that pte_dirty() == true from migrc's interest
> > 	   so that migrc can work simpler.
> > 	5. Combine several patches that work tightly coupled to one.
> > 	6. Add sufficient comments for better review.
> > 	7. Manage migrc's request in per-node manner (from globally).
> > 	8. Add tlb miss improvement in commit message.
> > 	9. Test with more CPUs(4 -> 16) to see bigger improvement.
> >
> > Changes from RFC:
> >
> > 	1. Fix a bug triggered when a destination folio of a previous
> > 	   migration becomes a source folio of the next migration
> > 	   before the folio has been handled properly, so that the
> > 	   folio can take part in another migration. There was an
> > 	   inconsistency in the folio's state. Fixed it.
> > 	2. Split the patch set into more pieces so that folks can
> > 	   review it better. (feedback from Nadav Amit)
> > 	3. Fix a wrong usage of a barrier, e.g.
> > 	   smp_mb__after_atomic(). (feedback from Nadav Amit)
> > 	4. Try to add sufficient comments to explain the patch set
> > 	   better. (feedback from Nadav Amit)
> >
> > Byungchul Park (12):
> >   x86/tlb: add APIs manipulating tlb batch's arch data
> >   arm64: tlbflush: add APIs manipulating tlb batch's arch data
> >   riscv, tlb: add APIs manipulating tlb batch's arch data
> >   x86/tlb, riscv/tlb, mm/rmap: separate arch_tlbbatch_clear() out of
> >     arch_tlbbatch_flush()
> >   mm: buddy: make room for a new variable, ugen, in struct page
> >   mm: add folio_put_ugen() to deliver unmap generation number to pcp or
> >     buddy
> >   mm: add a parameter, unmap generation number, to free_unref_folios()
> >   mm/rmap: recognize read-only tlb entries during batched tlb flush
> >   mm: implement LUF(Lazy Unmap Flush) deferring tlb flush when folios
> >     get unmapped
> >   mm: separate move/undo parts from migrate_pages_batch()
> >   mm, migrate: apply luf mechanism to unmapping during migration
> >   mm, vmscan: apply luf mechanism to unmapping during folio reclaim
> >
> >  arch/arm64/include/asm/tlbflush.h |  18 ++
> >  arch/riscv/include/asm/tlbflush.h |  21 ++
> >  arch/riscv/mm/tlbflush.c          |   1 -
> >  arch/x86/include/asm/tlbflush.h   |  18 ++
> >  arch/x86/mm/tlb.c                 |   2 -
> >  include/linux/mm.h                |  22 ++
> >  include/linux/mm_types.h          |  40 +++-
> >  include/linux/rmap.h              |   7 +-
> >  include/linux/sched.h             |  11 +
> >  mm/compaction.c                   |  10 +
> >  mm/internal.h                     | 115 +++++++++-
> >  mm/memory.c                       |   8 +
> >  mm/migrate.c                      | 184 ++++++++++------
> >  mm/mmap.c                         |   8 +
> >  mm/page_alloc.c                   | 157 +++++++++++---
> >  mm/page_isolation.c               |   6 +
> >  mm/page_reporting.c               |  10 +
> >  mm/rmap.c                         | 345 +++++++++++++++++++++++++++++-
> >  mm/swap.c                         |  18 +-
> >  mm/vmscan.c                       |  29 ++-
> >  20 files changed, 904 insertions(+), 126 deletions(-)
> >
> >
> > base-commit: f52bcd4a9f6058704a6f6b6b50418f579defd4fe
> 
> --
> Best Regards,
> Huang, Ying
Byungchul Park May 13, 2024, 1:44 a.m. UTC | #4
On Sat, May 11, 2024 at 03:15:01PM +0800, Huang, Ying wrote:
> Byungchul Park <byungchul@sk.com> writes:
> 
> > Hi everyone,
> >
> > While I'm working with a tiered memory system e.g. CXL memory, I have
> > been facing migration overhead esp. tlb shootdown on promotion or
> > demotion between different tiers.  Yeah..  most tlb shootdowns on
> > migration through hinting fault can be avoided thanks to Huang Ying's
> > work, commit 4d4b6d66db ("mm,unmap: avoid flushing tlb in batch if PTE
> > is inaccessible").  See the following link for more information:
> >
> > https://lore.kernel.org/lkml/20231115025755.GA29979@system.software.com/
> 
> And I am still interested in the performance impact of commit
> 7e12beb8ca2a ("migrate_pages: batch flushing TLB").  In the email above,
> you said that v6.5-rc5 with 7e12beb8ca2a reverted performs better than
> v6.5-rc5.  Can you provide more details?  For example, the number of
> TLB flush IPIs for the two kernels?

Okay.  I will test and share the results you asked for once I'm
available to run the test.

	Byungchul

> I should have followed up on the email above.  Sorry about that.
> Anyway, we should try to fix the issue with that commit too.
> 
> --
> Best Regards,
> Huang, Ying
> 
> [snip]
Byungchul Park May 22, 2024, 2:16 a.m. UTC | #5
On Mon, May 13, 2024 at 10:44:29AM +0900, Byungchul Park wrote:
> On Sat, May 11, 2024 at 03:15:01PM +0800, Huang, Ying wrote:
> > Byungchul Park <byungchul@sk.com> writes:
> > 
> > > Hi everyone,
> > >
> > > While I'm working with a tiered memory system e.g. CXL memory, I have
> > > been facing migration overhead esp. tlb shootdown on promotion or
> > > demotion between different tiers.  Yeah..  most tlb shootdowns on
> > > migration through hinting fault can be avoided thanks to Huang Ying's
> > > work, commit 4d4b6d66db ("mm,unmap: avoid flushing tlb in batch if PTE
> > > is inaccessible").  See the following link for more information:
> > >
> > > https://lore.kernel.org/lkml/20231115025755.GA29979@system.software.com/
> > 
> > And I am still interested in the performance impact of commit
> > 7e12beb8ca2a ("migrate_pages: batch flushing TLB").  In the email above,
> > you said that v6.5-rc5 with 7e12beb8ca2a reverted performs better than
> > v6.5-rc5.  Can you provide more details?  For example, the number of
> > TLB flush IPIs for the two kernels?
> 
> Okay.  I will test and share the results you asked for once I'm
> available to run the test.

I should admit that the test using qemu is quite unstable.  While using
qemu for the test, the kernel with 7e12beb8ca2a applied sometimes gave
better results and sometimes worse ones.  I should've used bare metal
from the beginning.  Sorry for the confusion caused by the unstable
results.

Since I thought you asked for a test in the same environment as in the
link above, I used qemu to reproduce a similar result, but changed the
number of threads for the test from 16 to 14 to get rid of noise that
might be introduced by anything other than the intended test, just in
case.

As expected, the stats are better with your work:

   ------------------------------------------
   v6.6-rc5 with 7e12beb8ca2a commit reverted
   ------------------------------------------

   1) from output of XSBench

   Threads:     14              
   Runtime:     1127.043 seconds
   Lookups:     1,700,000,000   
   Lookups/s:   1,508,371       

   2) from /proc/vmstat

   numa_hit 15580171                      
   numa_miss 1034233                      
   numa_foreign 1034233                   
   numa_interleave 773                    
   numa_local 7927442                     
   numa_other 8686962                     
   numa_pte_updates 24068923              
   numa_hint_faults 24061125              
   numa_hint_faults_local 0               
   numa_pages_migrated 7426480            
   pgmigrate_success 15407375             
   pgmigrate_fail 1849                    
   compact_migrate_scanned 4445414        
   compact_daemon_migrate_scanned 4445414 
   pgdemote_kswapd 7651061                
   pgdemote_direct 0                      
   nr_tlb_remote_flush 8080092            
   nr_tlb_remote_flush_received 109915713 
   nr_tlb_local_flush_all 53800           
   nr_tlb_local_flush_one 770466                                                   
   
   3) from /proc/interrupts

   TLB: 8022927    7840769     123588    7837008    7835967    7839837
   	7838332    7839886    7837610    7837221    7834524     407260
   	7430090    7835696    7839081    7712568    TLB shootdowns  
   
   4) from 'perf stat -a'

   222371217		itlb.itlb_flush      
   919832520		tlb_flush.dtlb_thread
   372223809		tlb_flush.stlb_any   
   120210808042		dTLB-load-misses     
   979352769		dTLB-store-misses    
   3650767665		iTLB-load-misses     

   -----------------------------------------
   v6.6-rc5 with 7e12beb8ca2a commit applied
   -----------------------------------------

   1) from output of XSBench

   Threads:     14
   Runtime:     1105.521 seconds
   Lookups:     1,700,000,000
   Lookups/s:   1,537,737

   2) from /proc/vmstat

   numa_hit 24148399
   numa_miss 797483
   numa_foreign 797483
   numa_interleave 772
   numa_local 12214575
   numa_other 12731307
   numa_pte_updates 24250278
   numa_hint_faults 24199756
   numa_hint_faults_local 0
   numa_pages_migrated 11476195
   pgmigrate_success 23634639
   pgmigrate_fail 1391
   compact_migrate_scanned 3760803
   compact_daemon_migrate_scanned 3760803
   pgdemote_kswapd 11932217
   pgdemote_direct 0
   nr_tlb_remote_flush 2151945
   nr_tlb_remote_flush_received 29672808
   nr_tlb_local_flush_all 124006
   nr_tlb_local_flush_one 741165
   
   3) from /proc/interrupts

   TLB: 2130784    2120142    2117571     844962    2071766     114675
   	2117258    2119596    2116816    1205446    2119176    2119209
   	2116792    2118763    2118773    2117762    TLB shootdowns

   4) from 'perf stat -a'

   60851902		itlb.itlb_flush
   334068491		tlb_flush.dtlb_thread
   223732916		tlb_flush.stlb_any
   120207083382		dTLB-load-misses
   446823059		dTLB-store-misses
   1926669373		iTLB-load-misses
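
In short, with 7e12beb8ca2a applied, nr_tlb_remote_flush drops from
8,080,092 to 2,151,945 (roughly 73% fewer), itlb.itlb_flush drops from
222,371,217 to 60,851,902 (also roughly 73% fewer), and the runtime
improves from 1127.043 to 1105.521 seconds (about 1.9%).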

---

	Byungchul
Huang, Ying May 22, 2024, 7:38 a.m. UTC | #6
Hi, Byungchul,

Byungchul Park <byungchul@sk.com> writes:

> [snip]

Thanks a lot for the test results!

From your test results, TLB shootdown IPIs are reduced effectively with
commit 7e12beb8ca2a, so the benchmark score improved a little.

And your changes will reduce the TLB shootdown IPIs further, right?  Do
you have the numbers?

--
Best Regards,
Huang, Ying
Byungchul Park May 22, 2024, 10:27 a.m. UTC | #7
On Wed, May 22, 2024 at 03:38:04PM +0800, Huang, Ying wrote:
> Hi, Byungchul,
> 
> Byungchul Park <byungchul@sk.com> writes:
> 
> > [snip]
> 
> Thanks a lot for the test results!
> 
> From your test results, TLB shootdown IPIs are reduced effectively with
> commit 7e12beb8ca2a, so the benchmark score improved a little.
> 
> And your changes will reduce the TLB shootdown IPIs further, right?  Do

Yes, right.  LUF(Lazy Unmap Flush) reduces TLB shootdown IPIs further.

> you have the numbers?

You can find the numbers obtained from llama.cpp in this cover letter:

   https://lore.kernel.org/lkml/20240520021734.21527-1-byungchul@sk.com/

If you meant the numbers from the same test as above, XSBench + qemu, I
will re-test with the mm-unstable branch of the mm tree and share the
result shortly.

	Byungchul
Byungchul Park May 22, 2024, 2:15 p.m. UTC | #8
On Wed, May 22, 2024 at 07:27:44PM +0900, Byungchul Park wrote:
> On Wed, May 22, 2024 at 03:38:04PM +0800, Huang, Ying wrote:
> > Hi, Byungchul,
> > 
> > Byungchul Park <byungchul@sk.com> writes:
> > 
> > [snip]
> 
> You can find the numbers obtained from llama.cpp in this cover letter:
> 
>    https://lore.kernel.org/lkml/20240520021734.21527-1-byungchul@sk.com/
> 
> If you meant the numbers from the same test as above, XSBench + qemu, I
> will re-test with the mm-unstable branch of the mm tree and share the
> result shortly.

I ran the same test again, but based on a recent mm-unstable branch of
the mm tree instead of v6.6-rc5.  The results changed because of the
different base.

   ----------------------------------------------------
   mm-unstable branch with 7e12beb8ca2a commit reverted
   ----------------------------------------------------

   1) from output of XSBench

   Threads:     14
   Runtime:     1067.771 seconds
   Lookups:     1,700,000,000
   Lookups/s:   1,592,101

   2) from /proc/vmstat

   numa_hit 11502876
   numa_miss 1130877
   numa_foreign 1130877
   numa_interleave 115
   numa_local 5879006
   numa_other 6754747
   numa_pte_updates 19390661
   numa_hint_faults 19319467
   numa_hint_faults_local 0
   numa_pages_migrated 5472749
   pgmigrate_success 11593079
   pgmigrate_fail 549666
   compact_migrate_scanned 5408404
   compact_daemon_migrate_scanned 5408404
   pgdemote_kswapd 5610705
   pgdemote_direct 0
   nr_tlb_remote_flush 6200106
   nr_tlb_remote_flush_received 84362539
   nr_tlb_local_flush_all 39202
   nr_tlb_local_flush_one 760046

   3) from /proc/interrupts

   TLB: 3812782    3840646    4806989    5235846    5127512    5048603
	6012100    6022642    5088907    5212207    4076329    6014857
	6017060    6014964    6009362    6018368    TLB shootdowns

   4) from 'perf stat -a'

   180449546		itlb.itlb_flush
   768913454		tlb_flush.dtlb_thread
   304745973		tlb_flush.stlb_any
   119589742349		dTLB-load-misses
   826525376		dTLB-store-misses
   2950724801		iTLB-load-misses

   ---------------------------------------------------
   mm-unstable branch with 7e12beb8ca2a commit applied
   ---------------------------------------------------

   1) from output of XSBench

   Threads:     14
   Runtime:     1043.972 seconds
   Lookups:     1,700,000,000
   Lookups/s:   1,628,395

   2) from /proc/vmstat

   numa_hit 16865880
   numa_miss 1129958
   numa_foreign 1129958
   numa_interleave 115
   numa_local 8565072
   numa_other 9430766
   numa_pte_updates 19240583
   numa_hint_faults 19239948
   numa_hint_faults_local 0
   numa_pages_migrated 8159078
   pgmigrate_success 17000781
   pgmigrate_fail 1410437
   compact_migrate_scanned 5075605
   compact_daemon_migrate_scanned 5075605
   pgdemote_kswapd 8297460
   pgdemote_direct 0
   nr_tlb_remote_flush 1516807
   nr_tlb_remote_flush_received 20938785
   nr_tlb_local_flush_all 95801
   nr_tlb_local_flush_one 740597

   3) from /proc/interrupts

   TLB:  927080     567584     840684    1484285    1495859    1408641
	1496227    1493909    1359465    1227623    1265431    1496361
	1392337    1489451    1495799    1494700    TLB shootdowns

   4) from 'perf stat -a'

   43564429		itlb.itlb_flush
   272921880		tlb_flush.dtlb_thread
   175495467		tlb_flush.stlb_any
   119602211976		dTLB-load-misses
   355190881		dTLB-store-misses
   1539926469		iTLB-load-misses

   ---------------------------------------------------------
   mm-unstable branch with 7e12beb8ca2a commit applied + LUF
   ---------------------------------------------------------

   1) from output of XSBench

   Threads:     14
   Runtime:     1033.973 seconds
   Lookups:     1,700,000,000
   Lookups/s:   1,644,144

   2) from /proc/vmstat

   numa_hit 18617127
   numa_miss 1075467
   numa_foreign 1075467
   numa_interleave 115
   numa_local 9440134
   numa_other 10252460
   numa_pte_updates 19473883
   numa_hint_faults 19470143
   numa_hint_faults_local 0
   numa_pages_migrated 8978959
   pgmigrate_success 18675500
   pgmigrate_fail 1577460
   compact_migrate_scanned 5465414
   compact_daemon_migrate_scanned 5465414
   pgdemote_kswapd 9172431
   pgdemote_direct 0
   nr_tlb_remote_flush 85818
   nr_tlb_remote_flush_received 1036316
   nr_tlb_local_flush_all 34674
   nr_tlb_local_flush_one 740870

   3) from /proc/interrupts

   TLB: 55328      31254      44449      72887      73407      73775
	73353      73658      35802      68184      70998      73504
	74072      64700      73718      73862      TLB shootdowns

   4) from 'perf stat -a'

   2054390		itlb.itlb_flush
   150073902		tlb_flush.dtlb_thread
   135630767		tlb_flush.stlb_any
   117880065362		dTLB-load-misses
   217521760		dTLB-store-misses
   908338035		iTLB-load-misses
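
In short, on top of the kernel with 7e12beb8ca2a applied, LUF further
cuts nr_tlb_remote_flush from 1,516,807 to 85,818 (roughly 94% fewer)
and itlb.itlb_flush from 43,564,429 to 2,054,390 (roughly 95% fewer),
with the runtime improving from 1043.972 to 1033.973 seconds (about
1.0%).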

---

The results look incredible.  You can see similar results if you test
any workload that triggers reclaim or migration with LUF.

	Byungchul
Dave Hansen May 24, 2024, 5:16 p.m. UTC | #9
On 5/9/24 23:51, Byungchul Park wrote:
> To achieve that:
> 
>    1. For the folios that map only to non-writable tlb entries, prevent
>       tlb flush during unmapping but perform it just before the folios
>       actually become used, out of buddy or pcp.

Is this just _pure_ unmapping (like MADV_DONTNEED), or does it apply to
changing the memory map, like munmap() itself?

>    2. When any non-writable ptes change to writable e.g. through fault
>       handler, give up luf mechanism and perform tlb flush required
>       right away.
> 
>    3. When a writable mapping is created e.g. through mmap(), give up
>       luf mechanism and perform tlb flush required right away.

Let's say you do this:

	fd = open("/some/file", O_RDONLY);
	ptr1 = mmap(-1, size, PROT_READ, ..., fd, ...);
	foo1 = *ptr1;

You now have a read-only PTE pointing to the first page of /some/file.
Let's say try_to_unmap() comes along and decides it can_luf_folio().
The page gets pulled out of the page cache and freed, and the PTE is
zeroed.  But the TLB is never flushed.

Now, someone does:

	fd2 = open("/some/other/file", O_RDONLY);
	ptr2 = mmap(ptr1, size, PROT_READ, MAP_FIXED, fd, ...);
	foo2 = *ptr2;

and they overwrite the old VMA.  Does foo2 have the contents of the new
"/some/other/file" or the old "/some/file"?  How does the new mmap()
know that there was something to flush?

BTW, the same thing could happen without a new mmap().  Someone could
modify the file in the middle, maybe even from another process.

	fd = open("/some/file", O_RDONLY);
	ptr1 = mmap(-1, size, PROT_READ, ..., fd, ...);
	foo1 = *ptr1;
	// LUF happens here
	// "/some/file" changes
	foo2 = *ptr1; // Does this see the change?
Byungchul Park May 27, 2024, 1:57 a.m. UTC | #10
On Fri, May 24, 2024 at 10:16:39AM -0700, Dave Hansen wrote:
> On 5/9/24 23:51, Byungchul Park wrote:
> > To achieve that:
> > 
> >    1. For the folios that map only to non-writable tlb entries, prevent
> >       tlb flush during unmapping but perform it just before the folios
> >       actually become used, out of buddy or pcp.
> 
> Is this just _pure_ unmapping (like MADV_DONTNEED), or does it apply to
> changing the memory map, like munmap() itself?

I think it can be applied to any unmapping of ro mappings, but LUF for
now works only with unmapping during folio migration and reclaim.

> >    2. When any non-writable ptes change to writable e.g. through fault
> >       handler, give up luf mechanism and perform tlb flush required
> >       right away.
> > 
> >    3. When a writable mapping is created e.g. through mmap(), give up
> >       luf mechanism and perform tlb flush required right away.
> 
> Let's say you do this:
> 
> 	fd = open("/some/file", O_RDONLY);
> 	ptr1 = mmap(-1, size, PROT_READ, ..., fd, ...);
> 	foo1 = *ptr1;
> 
> You now have a read-only PTE pointing to the first page of /some/file.
> Let's say try_to_unmap() comes along and decides it can_luf_folio().
> The page gets pulled out of the page cache and freed, the PTE is zeroed.
>  But the TLB is never flushed.
> 
> Now, someone does:
> 
> 	fd2 = open("/some/other/file", O_RDONLY);
> 	ptr2 = mmap(ptr1, size, PROT_READ, MAP_FIXED, fd, ...);
> 	foo2 = *ptr2;
> 
> and they overwrite the old VMA.  Does foo2 have the contents of the new
> "/some/other/file" or the old "/some/file"?  How does the new mmap()

Good point.  LUF should've given up at the 2nd mmap() in this case.
I will fix it by introducing a new flag in task_struct indicating
whether LUF has left stale mappings for the task, so that LUF can give
up and flush right away in mmap().

> know that there was something to flush?
> 
> BTW, the same thing could happen without a new mmap().  Someone could
> modify the file in the middle, maybe even from another process.

Thank you for pointing that out.  I will fix it too by introducing a
new flag in the inode or somewhere to make LUF aware that an update of
the file has been attempted, so that LUF can give up and flush right
away in that case.

Plus, I will add another give-up in the code that changes a vma's
permission to writable.
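
For illustration only, such a give-up might look like the sketch below,
where luf_has_deferred() is an invented name for "LUF has left stale ro
tlb entries behind for this mm"; nothing like it exists in the posted
series:

	/*
	 * Sketch only, not part of the series.  The point is just
	 * "flush right away before any mapping can become writable".
	 */
	static inline void luf_giveup_on_writable(struct vm_area_struct *vma,
						  unsigned long newflags)
	{
		if ((newflags & VM_WRITE) && luf_has_deferred(vma->vm_mm))
			flush_tlb_mm(vma->vm_mm);
	}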

Thank you very much.

	Byungchul

> 	fd = open("/some/file", O_RDONLY);
> 	ptr1 = mmap(-1, size, PROT_READ, ..., fd, ...);
> 	foo1 = *ptr1;
> 	// LUF happens here
> 	// "/some/file" changes
> 	foo2 = *ptr1; // Does this see the change?
Dave Hansen May 27, 2024, 2:43 a.m. UTC | #11
On 5/26/24 18:57, Byungchul Park wrote:
...
> Plus, I will add another give-up at code changing the permission of vma
> to writable.

I suspect you have a much more general problem on your hands. Just
tweaking the VFS or mmap() code likely isn't going to cut it.

I guess we'll see what you come up with next, but this email was really
just the result of Vlastimil and I chatting on IRC for five minutes
about this set.

It has absolutely not been tested nor reviewed enough.  <fud>I hope the
performance gains stick around once more of the bugs are gone.</fud>
Huang, Ying May 27, 2024, 3:10 a.m. UTC | #12
Byungchul Park <byungchul@sk.com> writes:

> On Fri, May 24, 2024 at 10:16:39AM -0700, Dave Hansen wrote:
>> [snip]
> Good point.  It should've give up LUF at the 2nd mmap() in this case.
> I will fix it by introducing a new flag in task_struct indicating if LUF
> has left stale maps for the task so that LUF can give up and flush right
> away in mmap().
>
>> know that there was something to flush?
>> 
>> BTW, the same thing could happen without a new mmap().  Someone could
>> modify the file in the middle, maybe even from another process.
>
> Thank you for pointing that out.  I will fix it too by introducing a
> new flag in the inode or somewhere to make LUF aware that an update of
> the file has been attempted, so that LUF can give up and flush right
> away in that case.
>
> Plus, I will add another give-up in the code that changes a vma's
> permission to writable.

I guess that you need a framework similar to
"flush_tlb_batched_pending()" to deal with interactions with other
TLB-related operations.
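
As a minimal sketch of that idea, assuming a hypothetical per-mm flag
(luf_pending) and helper (luf_flush_pending()), neither of which exists
in the tree, and ignoring the ordering details a real implementation
would have to handle:

	/*
	 * Sketch only, modeled loosely on flush_tlb_batched_pending()
	 * in mm/rmap.c.  If LUF deferred a flush for read-only PTEs
	 * unmapped from this mm, perform it before any code installs
	 * or depends on a PTE covering the same addresses.
	 */
	static inline void luf_flush_pending(struct mm_struct *mm)
	{
		if (atomic_read(&mm->luf_pending)) {
			flush_tlb_mm(mm);
			atomic_set(&mm->luf_pending, 0);
		}
	}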

--
Best Regards,
Huang, Ying

> [snip]
Byungchul Park May 27, 2024, 3:46 a.m. UTC | #13
On Sun, May 26, 2024 at 07:43:10PM -0700, Dave Hansen wrote:
> On 5/26/24 18:57, Byungchul Park wrote:
> ...
> > Plus, I will add another give-up at code changing the permission of vma
> > to writable.
> 
> I suspect you have a much more general problem on your hands. Just
> tweaking the VFS or mmap() code likely isn't going to cut it.

For now, LUF is interested only in the limited set of folios on the lru
that are migratable or reclaimable.  So, IMHO, fixing a few things is
going to cut it.

> I guess we'll see what you come up with next, but this email was really
> just the result of Vlastimil and I chatting on IRC for five minutes
> about this set.
> 
> It has absolutely not been tested nor reviewed enough.  <fud>I hope the
> performance gains stick around once more of the bugs are gone.</fud>

Sure. It should be.

	Byungchul
Byungchul Park May 27, 2024, 3:56 a.m. UTC | #14
On Mon, May 27, 2024 at 11:10:15AM +0800, Huang, Ying wrote:
> Byungchul Park <byungchul@sk.com> writes:
> 
> > [snip]
> 
> I guess that you need a framework similar to
> "flush_tlb_batched_pending()" to deal with interactions with other
> TLB-related operations.

Thank you.  I will check it.

	Byungchul

> --
> Best Regards,
> Huang, Ying
> 
> > [snip]
Byungchul Park May 27, 2024, 4:19 a.m. UTC | #15
On Mon, May 27, 2024 at 12:46:14PM +0900, Byungchul Park wrote:
> On Sun, May 26, 2024 at 07:43:10PM -0700, Dave Hansen wrote:
> > On 5/26/24 18:57, Byungchul Park wrote:
> > ...
> > > Plus, I will add another give-up at code changing the permission of vma
> > > to writable.
> > 
> > I suspect you have a much more general problem on your hands. Just
> > tweaking the VFS or mmap() code likely isn't going to cut it.

What a stupid idiot I am.

I already discussed these exact cases with Nadav Amit at the very
beginning, around v1.  I didn't remember that when I was answering you.

mmap() or a permission change by the user already performs the needed
TLB flush within that code, which LUF never touches.

Worth noting that LUF currently touches only unmapping during migration
or reclaim.  Other mapping updates perform the TLB flushes they need,
as is.  I guess updating the page cache also already performs the
needed TLB flush.  I need to check it.  Probably it already does.

	Byungchul

> [snip]
Byungchul Park May 27, 2024, 4:25 a.m. UTC | #16
On Mon, May 27, 2024 at 01:19:46PM +0900, Byungchul Park wrote:
> On Mon, May 27, 2024 at 12:46:14PM +0900, Byungchul Park wrote:
> [snip]
> 
> Worth noting that LUF currently touches only unmapping during
> migration or reclaim.  Other mapping updates perform the TLB flushes
> they need, as is.  I guess updating the page cache also already
> performs the needed TLB flush.

This may not be the case, though..  I might need to work on the page
cache.

	Byungchul

> I need to check it.  Probably it already does.
> 
> [snip]
Byungchul Park May 27, 2024, 10:58 p.m. UTC | #17
On Sun, May 26, 2024 at 07:43:10PM -0700, Dave Hansen wrote:
> It has absolutely not been tested nor reviewed enough.  <fud>I hope the

It has been tested enough on my side, and it should be reviewed enough
for sure.  I will respin after rebasing onto the current mm-unstable
and working on the vfs side shortly.

	Byungchul

> performance gains stick around once more of the bugs are gone.</fud>
David Hildenbrand May 28, 2024, 8:41 a.m. UTC | #18
On 10.05.24 at 08:51, Byungchul Park wrote:
> [snip]
> To achieve that:
> 
>     1. For the folios that map only to non-writable tlb entries, prevent
>        tlb flush during unmapping but perform it just before the folios
>        actually become used, out of buddy or pcp.

Trying to understand the impact: Effectively, a CPU could still read data from a 
page that has already been freed, until that page gets reallocated again.

The important part I can see is

1) PCP/buddy must not change page content (e.g., poison, init_on_free),
otherwise an app might read wrong content.  (See the snippet after this
list.)

2) If we mess up the flush-before-realloc, an app might observe data written by 
whoever allocated the page.

3) We must reliably detect+handle any read-only PTEs for which we didn't flush 
the TLB yet, otherwise an app could see its memory writes getting lost. I recall 
that at least uffd-wp might defer TLB flushes (see comment in do_wp_page()). Not 
sure about other pte_wrprotect() callers that flush the TLB after processing 
multiple page tables, whereby rmap code might succeed in unmapping a page before 
the TLB flush happened.
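
To make 1) concrete, this is roughly what the free path already does
when init_on_free is enabled (paraphrased and simplified from
free_pages_prepare() in mm/page_alloc.c, not the exact code); it
changes the content under any stale, unflushed TLB entry:

	/* paraphrased from free_pages_prepare(), simplified */
	if (want_init_on_free())
		kernel_init_pages(page, 1 << order);	/* zeroes the pages */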

Any other possible issues you stumbled over that are worth mentioning?
Dave Hansen May 28, 2024, 3:14 p.m. UTC | #19
On 5/26/24 20:10, Huang, Ying wrote:
>> Thank you for pointing that out.  I will fix it too by introducing a
>> new flag in the inode or somewhere to make LUF aware that an update
>> of the file has been attempted, so that LUF can give up and flush
>> right away in that case.
>>
>> Plus, I will add another give-up in the code that changes a vma's
>> permission to writable.
> I guess that you need a framework similar to
> "flush_tlb_batched_pending()" to deal with interactions with other
> TLB-related operations.

Where "other TLB related operations" includes both things that
traditionally invalidate TLBs (like going Present 1=>0) and things like
fault-in that go Present 0=>1 that can result in TLB population.

It's actually a really crummy problem to solve.  We don't have _any_
machinery to say, "Hey, you know that PTE you wanted to install?  There
was something there before you and we haven't flushed it yet.  Can you
be a doll and do a flush before _populating_ that PTE?"

To solve it generically, I suspect you'll need some kind of special
non-present PTE to say:

	There _was_ a PTE here that wasn't flushed.

Sure, you can add gunk to the VMA to track when this happens.  But
that'll penalize anyone populating a PTE anywhere in the VMA at least
once.  If there were other threads faulting in pages to the same VMA,
they'll just end up doing the flush that LUF tried to avoid in the first
place.
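
A sketch of that "special non-present PTE" idea, with invented names
(pte_is_luf_marker() and friends); the kernel's existing pte marker
machinery is only an analogy here, not something that supports this
today:

	/*
	 * Sketch only.  On fault, if the slot holds a hypothetical
	 * "stale TLB" marker, flush before populating the PTE.
	 */
	static bool luf_fixup_stale_tlb(struct vm_fault *vmf)
	{
		if (!pte_is_luf_marker(vmf->orig_pte))
			return false;

		/* There _was_ a PTE here that wasn't flushed. */
		flush_tlb_page(vmf->vma, vmf->address);

		/* Clear the marker; the fault then proceeds. */
		return true;
	}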
Huang, Ying May 29, 2024, 2:16 a.m. UTC | #20
Byungchul Park <byungchul@sk.com> writes:

> On Sun, May 26, 2024 at 07:43:10PM -0700, Dave Hansen wrote:
>> It has absolutely not been tested nor reviewed enough.  <fud>I hope the
>
> It has been tested enough on my side, and it should be reviewed enough
> for sure.

I believe that you have tested and reviewed the patchset yourself.  But
there are some other cases that you haven't thought through enough, as
Dave pointed out.

So, I suggest you try to find more possible weaknesses in your
patchset.  Begin with what Dave and David pointed out.

> I will respin after rebasing onto the current mm-unstable and working
> on the vfs side shortly.
>
> 	Byungchul
>
>> performance gains stick around once more of the bugs are gone.</fud>

--
Best Regards,
Huang, Ying
Byungchul Park May 29, 2024, 4:39 a.m. UTC | #21
On Tue, May 28, 2024 at 10:41:54AM +0200, David Hildenbrand wrote:
> On 10.05.24 at 08:51, Byungchul Park wrote:
> > [snip]
> > To achieve that:
> > 
> >     1. For the folios that map only to non-writable tlb entries, prevent
> >        tlb flush during unmapping but perform it just before the folios
> >        actually become used, out of buddy or pcp.
> 
> Trying to understand the impact: Effectively, a CPU could still read data
> from a page that has already been freed, until that page gets reallocated
> again.
> 
> The important part I can see is
> 
> 1) PCP/buddy must not change page content (e.g., poison, init_on_free),
> otherwise an app might read wrong content.

Exactly.  I will take them into account.  Thank you.

> 2) If we mess up the flush-before-realloc, an app might observe data written
> by whoever allocated the page.

Yes.  However, an appropriate TLB flush is performed in prep_new_page().
Basically you are right.  I need to pay enough attention to it.
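
Roughly, the series keys this off the unmap generation number ("ugen")
it adds to struct page; a very loose sketch of the check-before-reuse
idea, where only "ugen" comes from the series and the helpers are
invented for illustration:

	/*
	 * Very loose sketch.  If the deferred flush for this page's
	 * unmap generation hasn't been performed yet, do it before
	 * the page is handed out again.
	 */
	static void luf_flush_before_reuse(struct page *page)
	{
		unsigned int ugen = page_luf_ugen(page);	/* invented */

		if (ugen && !luf_ugen_flushed(ugen))		/* invented */
			luf_do_deferred_flush(ugen);		/* invented */
	}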

> 3) We must reliably detect+handle any read-only PTEs for which we didn't
> flush the TLB yet, otherwise an app could see its memory writes getting
> lost. I recall that at least uffd-wp might defer TLB flushes (see comment in
> do_wp_page()). Not sure about other pte_wrprotect() callers that flush the
> TLB after processing multiple page tables, whereby rmap code might succeed
> in unmapping a page before the TLB flush happened.
> 
> Any other possible issues you stumbled over that are worth mentioning?

You covered everything I'm concerned about, and in a clear way.

	Byungchul

> 
> -- 
> Thanks,
> 
> David / dhildenb
Byungchul Park May 29, 2024, 5 a.m. UTC | #22
On Tue, May 28, 2024 at 08:14:43AM -0700, Dave Hansen wrote:
> On 5/26/24 20:10, Huang, Ying wrote:
> >> Thank you for the pointing out.  I will fix it too by introducing a new
> >> flag in inode or something to make LUF aware if updating the file has
> >> been tried so that LUF can give up and flush right away in the case.
> >>
> >> Plus, I will add another give-up at code changing the permission of vma
> >> to writable.
> > I guess that you need a framework similar as
> > "flush_tlb_batched_pending()" to deal with interaction with other TLB
> > related operations.
> 
> Where "other TLB related operations" includes both things that
> traditionally invalidate TLBs (like going Present 1=>0) and things like
> fault-in that go Present 0=>1 that can result in TLB population.
> 
> It's actually a really crummy problem to solve.  We don't have _any_
> machinery to say, "Hey, you know that PTE you wanted to install?  There
> was something there before you and we haven't flushed it yet.  Can you
> be a doll and do a flush before _populating_ that PTE?"

All the code updating ptes already performs the needed TLB flush in a
safe way if it's inevitable, e.g. munmap.  LUF, which controls when to
flush at a higher level than arch code, just leaves stale ro tlb
entries that are currently supposed to be in use.  Could you give a
scenario that you are concerned about?

	Byungchul

> To solve it generically, I suspect you'll need some kind of special
> non-present PTE to say:
> 
> 	There _was_ a PTE here that wasn't flushed.
> 
> Sure, you can add gunk to the VMA to track when this happens.  But
> that'll penalize anyone populating a PTE anywhere in the VMA at least
> once.  If there were other threads faulting in pages to the same VMA,
> they'll just end up doing the flush that LUF tried to avoid in the first
> place.
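
For what it's worth, such a special non-present PTE could resemble the
existing pte_marker machinery (include/linux/swapops.h).  A hypothetical
sketch, not something posted in this series:

	/*
	 * Hypothetical marker meaning "a PTE was here and its TLB
	 * entry has not been flushed yet".  The bit value is an
	 * assumption.  The fault handler would flush before letting
	 * a new PTE be populated at this address.
	 */
	#define PTE_MARKER_LUF_STALE	BIT(2)

	static vm_fault_t handle_luf_stale_marker(struct vm_fault *vmf)
	{
		flush_tlb_page(vmf->vma, vmf->address);
		pte_clear(vmf->vma->vm_mm, vmf->address, vmf->pte);
		return 0;	/* retry; the PTE can now be populated */
	}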
Dave Hansen May 29, 2024, 4:41 p.m. UTC | #23
On 5/28/24 22:00, Byungchul Park wrote:
> All the code updating ptes already performs the TLB flush needed in a
> safe way if it's inevitable, e.g. munmap.  LUF, which controls when to
> flush at a higher level than arch code, just leaves stale ro tlb entries
> that are currently supposed to be in use.  Could you give a scenario
> that you are concerned about?

Let's go back to this scenario:

 	fd = open("/some/file", O_RDONLY);
 	ptr1 = mmap(-1, size, PROT_READ, ..., fd, ...);
 	foo1 = *ptr1;

There's a read-only PTE at 'ptr1'.  Right?  The page being pointed to is
eligible for LUF via the try_to_unmap() paths.  In other words, the page
might be reclaimed at any time.  If it is reclaimed, the PTE will be
cleared.

Then, the user might do:

	munmap(ptr1, PAGE_SIZE);

Which will _eventually_ wind up in the zap_pte_range() loop.  But that
loop will only see pte_none().  It doesn't do _anything_ to the 'struct
mmu_gather'.

The munmap() then lands in tlb_flush_mmu_tlbonly() where it looks at the
'struct mmu_gather':

        if (!(tlb->freed_tables || tlb->cleared_ptes ||
	      tlb->cleared_pmds || tlb->cleared_puds ||
	      tlb->cleared_p4ds))
                return;

But since there were no cleared PTEs (or anything else) during the
unmap, this just returns and doesn't flush the TLB.

We now have an address space with a stale TLB entry at 'ptr1' and not
even a VMA there.  There's nothing to stop a new VMA from going in,
installing a *new* PTE, but getting data from the stale TLB entry that
still hasn't been flushed.
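
Spelled out as a user-space sequence, the hazard is (illustration only,
not a posted test case):

	fd = open("/some/file", O_RDONLY);
	ptr1 = mmap(NULL, PAGE_SIZE, PROT_READ, MAP_PRIVATE, fd, 0);
	foo1 = *ptr1;			/* read-only TLB entry cached */

	/* ... reclaim unmaps the page; luf defers the TLB flush ... */

	munmap(ptr1, PAGE_SIZE);	/* sees pte_none(), flushes nothing */

	ptr2 = mmap(ptr1, PAGE_SIZE, PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
	*ptr2 = 42;			/* writes the new page ... */
	foo2 = *ptr2;			/* ... but the read can hit the stale
					   TLB entry and return old file data */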
Byungchul Park May 30, 2024, 12:50 a.m. UTC | #24
On Wed, May 29, 2024 at 09:41:22AM -0700, Dave Hansen wrote:
> On 5/28/24 22:00, Byungchul Park wrote:
> > All the code updating ptes already performs the TLB flush needed in a
> > safe way if it's inevitable, e.g. munmap.  LUF, which controls when to
> > flush at a higher level than arch code, just leaves stale ro tlb entries
> > that are currently supposed to be in use.  Could you give a scenario
> > that you are concerned about?
> 
> Let's go back to this scenario:
> 
>  	fd = open("/some/file", O_RDONLY);
>  	ptr1 = mmap(-1, size, PROT_READ, ..., fd, ...);
>  	foo1 = *ptr1;
> 
> There's a read-only PTE at 'ptr1'.  Right?  The page being pointed to is
> eligible for LUF via the try_to_unmap() paths.  In other words, the page
> might be reclaimed at any time.  If it is reclaimed, the PTE will be
> cleared.
> 
> Then, the user might do:
> 
> 	munmap(ptr1, PAGE_SIZE);
> 
> Which will _eventually_ wind up in the zap_pte_range() loop.  But that
> loop will only see pte_none().  It doesn't do _anything_ to the 'struct
> mmu_gather'.
> 
> The munmap() then lands in tlb_flush_mmu_tlbonly() where it looks at the
> 'struct mmu_gather':
> 
>         if (!(tlb->freed_tables || tlb->cleared_ptes ||
> 	      tlb->cleared_pmds || tlb->cleared_puds ||
> 	      tlb->cleared_p4ds))
>                 return;
> 
> But since there were no cleared PTEs (or anything else) during the
> unmap, this just returns and doesn't flush the TLB.
> 
> We now have an address space with a stale TLB entry at 'ptr1' and not
> even a VMA there.  There's nothing to stop a new VMA from going in,
> installing a *new* PTE, but getting data from the stale TLB entry that
> still hasn't been flushed.

Thank you for the explanation.  I got you.  I think I could handle the
case through a new flag in the vma or something indicating that LUF has
deferred a necessary TLB flush for it during unmapping, so that the
mmu_gather mechanism can be aware of it.  Of course, the performance
change should be checked again.  Thoughts?
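
A minimal sketch of that idea, with the flag name and placement purely
assumed, not the posted code:

	/*
	 * Sketch: luf sets a per-mm "flush deferred" bit whenever it
	 * skips a flush at unmap time, and mmu_gather then flushes
	 * even though no PTEs were cleared during this zap.
	 */
	static void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
	{
		if (!(tlb->freed_tables || tlb->cleared_ptes ||
		      tlb->cleared_pmds || tlb->cleared_puds ||
		      tlb->cleared_p4ds) &&
		    !test_bit(MMF_LUF_FLUSH_PENDING, &tlb->mm->flags))
			return;	/* MMF_LUF_FLUSH_PENDING is hypothetical */

		/* ... proceed with the actual flush as before ... */
	}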

Thanks again.

	Byungchul
Byungchul Park May 30, 2024, 12:59 a.m. UTC | #25
On Thu, May 30, 2024 at 09:50:26AM +0900, Byungchul Park wrote:
> On Wed, May 29, 2024 at 09:41:22AM -0700, Dave Hansen wrote:
> > On 5/28/24 22:00, Byungchul Park wrote:
> > > All the code updating ptes already performs the TLB flush needed in a
> > > safe way if it's inevitable, e.g. munmap.  LUF, which controls when to
> > > flush at a higher level than arch code, just leaves stale ro tlb entries
> > > that are currently supposed to be in use.  Could you give a scenario
> > > that you are concerned about?
> > 
> > Let's go back to this scenario:
> > 
> >  	fd = open("/some/file", O_RDONLY);
> >  	ptr1 = mmap(-1, size, PROT_READ, ..., fd, ...);
> >  	foo1 = *ptr1;
> > 
> > There's a read-only PTE at 'ptr1'.  Right?  The page being pointed to is
> > eligible for LUF via the try_to_unmap() paths.  In other words, the page
> > might be reclaimed at any time.  If it is reclaimed, the PTE will be
> > cleared.
> > 
> > Then, the user might do:
> > 
> > 	munmap(ptr1, PAGE_SIZE);
> > 
> > Which will _eventually_ wind up in the zap_pte_range() loop.  But that
> > loop will only see pte_none().  It doesn't do _anything_ to the 'struct
> > mmu_gather'.
> > 
> > The munmap() then lands in tlb_flush_mmu_tlbonly() where it looks at the
> > 'struct mmu_gather':
> > 
> >         if (!(tlb->freed_tables || tlb->cleared_ptes ||
> > 	      tlb->cleared_pmds || tlb->cleared_puds ||
> > 	      tlb->cleared_p4ds))
> >                 return;
> > 
> > But since there were no cleared PTEs (or anything else) during the
> > unmap, this just returns and doesn't flush the TLB.
> > 
> > We now have an address space with a stale TLB entry at 'ptr1' and not
> > even a VMA there.  There's nothing to stop a new VMA from going in,
> > installing a *new* PTE, but getting data from the stale TLB entry that
> > still hasn't been flushed.
> 
> Thank you for the explanation.  I got you.  I think I could handle the
> case through a new flag in the vma or something indicating that LUF has
> deferred a necessary TLB flush for it during unmapping, so that the
> mmu_gather mechanism can be aware of it.  Of course, the performance
> change should be checked again.  Thoughts?

I will check the existing TLB flush optimizations at the arch level in
more depth and suggest a better way.

	Byungchul

> Thanks again.
> 
> 	Byungchul
Byungchul Park May 30, 2024, 1:02 a.m. UTC | #26
On Wed, May 29, 2024 at 10:16:26AM +0800, Huang, Ying wrote:
> Byungchul Park <byungchul@sk.com> writes:
> 
> > On Sun, May 26, 2024 at 07:43:10PM -0700, Dave Hansen wrote:
> >> It has absolutely not been tested nor reviewed enough.  <fud>I hope the
> >
> > It has been tested enough on my side, and it should be reviewed enough
> > for sure.
> 
> I believe that you have tested and reviewed the patchset by yourself.
> But there are some other cases that you haven't thought about enough
> before, as Dave pointed out.
> 
> So, I suggest you try to find out more possible weaknesses in your
> patchset.  Begin with what Dave and David pointed out.

I will.

	Byungchul

> > I will respin after rebasing onto the current mm-unstable and
> > working on vfs shortly.
> >
> > 	Byungchul
> >
> >> performance gains stick around once more of the bugs are gone.</fud>
> 
> --
> Best Regards,
> Huang, Ying
Huang, Ying May 30, 2024, 1:11 a.m. UTC | #27
Byungchul Park <byungchul@sk.com> writes:

> On Wed, May 29, 2024 at 09:41:22AM -0700, Dave Hansen wrote:
>> On 5/28/24 22:00, Byungchul Park wrote:
>> > All the code updating ptes already performs the TLB flush needed in a
>> > safe way if it's inevitable, e.g. munmap.  LUF, which controls when to
>> > flush at a higher level than arch code, just leaves stale ro tlb entries
>> > that are currently supposed to be in use.  Could you give a scenario
>> > that you are concerned about?
>> 
>> Let's go back to this scenario:
>> 
>>  	fd = open("/some/file", O_RDONLY);
>>  	ptr1 = mmap(-1, size, PROT_READ, ..., fd, ...);
>>  	foo1 = *ptr1;
>> 
>> There's a read-only PTE at 'ptr1'.  Right?  The page being pointed to is
>> eligible for LUF via the try_to_unmap() paths.  In other words, the page
>> might be reclaimed at any time.  If it is reclaimed, the PTE will be
>> cleared.
>> 
>> Then, the user might do:
>> 
>> 	munmap(ptr1, PAGE_SIZE);
>> 
>> Which will _eventually_ wind up in the zap_pte_range() loop.  But that
>> loop will only see pte_none().  It doesn't do _anything_ to the 'struct
>> mmu_gather'.
>> 
>> The munmap() then lands in tlb_flush_mmu_tlbonly() where it looks at the
>> 'struct mmu_gather':
>> 
>>         if (!(tlb->freed_tables || tlb->cleared_ptes ||
>> 	      tlb->cleared_pmds || tlb->cleared_puds ||
>> 	      tlb->cleared_p4ds))
>>                 return;
>> 
>> But since there were no cleared PTEs (or anything else) during the
>> unmap, this just returns and doesn't flush the TLB.
>> 
>> We now have an address space with a stale TLB entry at 'ptr1' and not
>> even a VMA there.  There's nothing to stop a new VMA from going in,
>> installing a *new* PTE, but getting data from the stale TLB entry that
>> still hasn't been flushed.
>
> Thank you for the explanation.  I got you.  I think I could handle the
> case through a new flag in the vma or something indicating that LUF has
> deferred a necessary TLB flush for it during unmapping, so that the
> mmu_gather mechanism can be aware of it.  Of course, the performance
> change should be checked again.  Thoughts?

I suggest you start with the simple case.  That is, only support page
reclaiming and migration.  A TLB flush can be enforced during unmap
with something similar to flush_tlb_batched_pending().

--
Best Regards,
Huang, Ying
Byungchul Park May 30, 2024, 1:33 a.m. UTC | #28
On Thu, May 30, 2024 at 09:11:45AM +0800, Huang, Ying wrote:
> Byungchul Park <byungchul@sk.com> writes:
> 
> > On Wed, May 29, 2024 at 09:41:22AM -0700, Dave Hansen wrote:
> >> On 5/28/24 22:00, Byungchul Park wrote:
> >> > All the code updating ptes already performs the TLB flush needed in a
> >> > safe way if it's inevitable, e.g. munmap.  LUF, which controls when to
> >> > flush at a higher level than arch code, just leaves stale ro tlb entries
> >> > that are currently supposed to be in use.  Could you give a scenario
> >> > that you are concerned about?
> >> 
> >> Let's go back to this scenario:
> >> 
> >>  	fd = open("/some/file", O_RDONLY);
> >>  	ptr1 = mmap(-1, size, PROT_READ, ..., fd, ...);
> >>  	foo1 = *ptr1;
> >> 
> >> There's a read-only PTE at 'ptr1'.  Right?  The page being pointed to is
> >> eligible for LUF via the try_to_unmap() paths.  In other words, the page
> >> might be reclaimed at any time.  If it is reclaimed, the PTE will be
> >> cleared.
> >> 
> >> Then, the user might do:
> >> 
> >> 	munmap(ptr1, PAGE_SIZE);
> >> 
> >> Which will _eventually_ wind up in the zap_pte_range() loop.  But that
> >> loop will only see pte_none().  It doesn't do _anything_ to the 'struct
> >> mmu_gather'.
> >> 
> >> The munmap() then lands in tlb_flush_mmu_tlbonly() where it looks at the
> >> 'struct mmu_gather':
> >> 
> >>         if (!(tlb->freed_tables || tlb->cleared_ptes ||
> >> 	      tlb->cleared_pmds || tlb->cleared_puds ||
> >> 	      tlb->cleared_p4ds))
> >>                 return;
> >> 
> >> But since there were no cleared PTEs (or anything else) during the
> >> unmap, this just returns and doesn't flush the TLB.
> >> 
> >> We now have an address space with a stale TLB entry at 'ptr1' and not
> >> even a VMA there.  There's nothing to stop a new VMA from going in,
> >> installing a *new* PTE, but getting data from the stale TLB entry that
> >> still hasn't been flushed.
> >
> > Thank you for the explanation.  I got you.  I think I could handle the
> > case through a new flag in the vma or something indicating that LUF has
> > deferred a necessary TLB flush for it during unmapping, so that the
> > mmu_gather mechanism can be aware of it.  Of course, the performance
> > change should be checked again.  Thoughts?
> 
> I suggest you start with the simple case.  That is, only support page
> reclaiming and migration.  A TLB flush can be enforced during unmap
> with something similar to flush_tlb_batched_pending().

Right.  I'm thinking of adding related code to flush_tlb_batched_pending().

	Byungchul

> --
> Best Regards,
> Huang, Ying
Byungchul Park May 30, 2024, 7:18 a.m. UTC | #29
On Thu, May 30, 2024 at 09:11:45AM +0800, Huang, Ying wrote:
> Byungchul Park <byungchul@sk.com> writes:
> 
> > On Wed, May 29, 2024 at 09:41:22AM -0700, Dave Hansen wrote:
> >> On 5/28/24 22:00, Byungchul Park wrote:
> >> > All the code updating ptes already performs the TLB flush needed in a
> >> > safe way if it's inevitable, e.g. munmap.  LUF, which controls when to
> >> > flush at a higher level than arch code, just leaves stale ro tlb entries
> >> > that are currently supposed to be in use.  Could you give a scenario
> >> > that you are concerned about?
> >> 
> >> Let's go back to this scenario:
> >> 
> >>  	fd = open("/some/file", O_RDONLY);
> >>  	ptr1 = mmap(-1, size, PROT_READ, ..., fd, ...);
> >>  	foo1 = *ptr1;
> >> 
> >> There's a read-only PTE at 'ptr1'.  Right?  The page being pointed to is
> >> eligible for LUF via the try_to_unmap() paths.  In other words, the page
> >> might be reclaimed at any time.  If it is reclaimed, the PTE will be
> >> cleared.
> >> 
> >> Then, the user might do:
> >> 
> >> 	munmap(ptr1, PAGE_SIZE);
> >> 
> >> Which will _eventually_ wind up in the zap_pte_range() loop.  But that
> >> loop will only see pte_none().  It doesn't do _anything_ to the 'struct
> >> mmu_gather'.
> >> 
> >> The munmap() then lands in tlb_flush_mmu_tlbonly() where it looks at the
> >> 'struct mmu_gather':
> >> 
> >>         if (!(tlb->freed_tables || tlb->cleared_ptes ||
> >> 	      tlb->cleared_pmds || tlb->cleared_puds ||
> >> 	      tlb->cleared_p4ds))
> >>                 return;
> >> 
> >> But since there were no cleared PTEs (or anything else) during the
> >> unmap, this just returns and doesn't flush the TLB.
> >> 
> >> We now have an address space with a stale TLB entry at 'ptr1' and not
> >> even a VMA there.  There's nothing to stop a new VMA from going in,
> >> installing a *new* PTE, but getting data from the stale TLB entry that
> >> still hasn't been flushed.
> >
> > Thank you for the explanation.  I got you.  I think I could handle the
> > case through a new flag in the vma or something indicating that LUF has
> > deferred a necessary TLB flush for it during unmapping, so that the
> > mmu_gather mechanism can be aware of it.  Of course, the performance
> > change should be checked again.  Thoughts?
> 
> I suggest you start with the simple case.  That is, only support page
> reclaiming and migration.  A TLB flush can be enforced during unmap
> with something similar to flush_tlb_batched_pending().

While reading flush_tlb_batched_pending(mm), I found it already performs
a TLB flush for the target mm if set_tlb_ubc_flush_pending(mm) has been
hit at least once since the last flush_tlb_batched_pending(mm).

Since LUF also relies on set_tlb_ubc_flush_pending(mm), it's going to
perform the required TLB flush in flush_tlb_batched_pending(mm) during
munmap().  So it looks safe to me with regard to munmap() already.
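
For reference, this is roughly what flush_tlb_batched_pending() does; a
simplified form of the mm/rmap.c code from older kernels (newer kernels
track pending/flushed generations atomically, but the idea is the same):

	/*
	 * If reclaim has batched a TLB flush for this mm (via
	 * set_tlb_ubc_flush_pending()) that hasn't been performed
	 * yet, do it now, before the caller zaps page tables.
	 */
	void flush_tlb_batched_pending(struct mm_struct *mm)
	{
		if (data_race(mm->tlb_flush_batched)) {
			flush_tlb_mm(mm);
			mm->tlb_flush_batched = false;
		}
	}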

Is there something that I'm missing?

JFYI, regarding mmap(), I have reworked the fault handler to give up
luf when needed in a better way.

	Byungchul

> --
> Best Regards,
> Huang, Ying
Huang, Ying May 30, 2024, 8:24 a.m. UTC | #30
Byungchul Park <byungchul@sk.com> writes:

> On Thu, May 30, 2024 at 09:11:45AM +0800, Huang, Ying wrote:
>> Byungchul Park <byungchul@sk.com> writes:
>> 
>> > On Wed, May 29, 2024 at 09:41:22AM -0700, Dave Hansen wrote:
>> >> On 5/28/24 22:00, Byungchul Park wrote:
>> >> > All the code updating ptes already performs the TLB flush needed in a
>> >> > safe way if it's inevitable, e.g. munmap.  LUF, which controls when to
>> >> > flush at a higher level than arch code, just leaves stale ro tlb entries
>> >> > that are currently supposed to be in use.  Could you give a scenario
>> >> > that you are concerned about?
>> >> 
>> >> Let's go back to this scenario:
>> >> 
>> >>  	fd = open("/some/file", O_RDONLY);
>> >>  	ptr1 = mmap(-1, size, PROT_READ, ..., fd, ...);
>> >>  	foo1 = *ptr1;
>> >> 
>> >> There's a read-only PTE at 'ptr1'.  Right?  The page being pointed to is
>> >> eligible for LUF via the try_to_unmap() paths.  In other words, the page
>> >> might be reclaimed at any time.  If it is reclaimed, the PTE will be
>> >> cleared.
>> >> 
>> >> Then, the user might do:
>> >> 
>> >> 	munmap(ptr1, PAGE_SIZE);
>> >> 
>> >> Which will _eventually_ wind up in the zap_pte_range() loop.  But that
>> >> loop will only see pte_none().  It doesn't do _anything_ to the 'struct
>> >> mmu_gather'.
>> >> 
>> >> The munmap() then lands in tlb_flush_mmu_tlbonly() where it looks at the
>> >> 'struct mmu_gather':
>> >> 
>> >>         if (!(tlb->freed_tables || tlb->cleared_ptes ||
>> >> 	      tlb->cleared_pmds || tlb->cleared_puds ||
>> >> 	      tlb->cleared_p4ds))
>> >>                 return;
>> >> 
>> >> But since there were no cleared PTEs (or anything else) during the
>> >> unmap, this just returns and doesn't flush the TLB.
>> >> 
>> >> We now have an address space with a stale TLB entry at 'ptr1' and not
>> >> even a VMA there.  There's nothing to stop a new VMA from going in,
>> >> installing a *new* PTE, but getting data from the stale TLB entry that
>> >> still hasn't been flushed.
>> >
>> > Thank you for the explanation.  I got you.  I think I could handle the
>> > case through a new flag in the vma or something indicating that LUF has
>> > deferred a necessary TLB flush for it during unmapping, so that the
>> > mmu_gather mechanism can be aware of it.  Of course, the performance
>> > change should be checked again.  Thoughts?
>> 
>> I suggest you start with the simple case.  That is, only support page
>> reclaiming and migration.  A TLB flush can be enforced during unmap
>> with something similar to flush_tlb_batched_pending().
>
> While reading flush_tlb_batched_pending(mm), I found it already performs
> a TLB flush for the target mm if set_tlb_ubc_flush_pending(mm) has been
> hit at least once since the last flush_tlb_batched_pending(mm).
>
> Since LUF also relies on set_tlb_ubc_flush_pending(mm), it's going to
> perform the required TLB flush in flush_tlb_batched_pending(mm) during
> munmap().  So it looks safe to me with regard to munmap() already.
>
> Is there something that I'm missing?
>
> JFYI, regarding mmap(), I have reworked the fault handler to give up
> luf when needed in a better way.

If a TLB flush is always enforced during munmap(), then your solution can
only avoid TLB flushing for page reclaiming and migration, not unmap.
Or am I missing something?

--
Best Regards,
Huang, Ying
Byungchul Park May 30, 2024, 8:41 a.m. UTC | #31
On Thu, May 30, 2024 at 04:24:12PM +0800, Huang, Ying wrote:
> Byungchul Park <byungchul@sk.com> writes:
> 
> > On Thu, May 30, 2024 at 09:11:45AM +0800, Huang, Ying wrote:
> >> Byungchul Park <byungchul@sk.com> writes:
> >> 
> >> > On Wed, May 29, 2024 at 09:41:22AM -0700, Dave Hansen wrote:
> >> >> On 5/28/24 22:00, Byungchul Park wrote:
> >> >> > All the code updating ptes already performs the TLB flush needed in a
> >> >> > safe way if it's inevitable, e.g. munmap.  LUF, which controls when to
> >> >> > flush at a higher level than arch code, just leaves stale ro tlb entries
> >> >> > that are currently supposed to be in use.  Could you give a scenario
> >> >> > that you are concerned about?
> >> >> 
> >> >> Let's go back to this scenario:
> >> >> 
> >> >>  	fd = open("/some/file", O_RDONLY);
> >> >>  	ptr1 = mmap(-1, size, PROT_READ, ..., fd, ...);
> >> >>  	foo1 = *ptr1;
> >> >> 
> >> >> There's a read-only PTE at 'ptr1'.  Right?  The page being pointed to is
> >> >> eligible for LUF via the try_to_unmap() paths.  In other words, the page
> >> >> might be reclaimed at any time.  If it is reclaimed, the PTE will be
> >> >> cleared.
> >> >> 
> >> >> Then, the user might do:
> >> >> 
> >> >> 	munmap(ptr1, PAGE_SIZE);
> >> >> 
> >> >> Which will _eventually_ wind up in the zap_pte_range() loop.  But that
> >> >> loop will only see pte_none().  It doesn't do _anything_ to the 'struct
> >> >> mmu_gather'.
> >> >> 
> >> >> The munmap() then lands in tlb_flush_mmu_tlbonly() where it looks at the
> >> >> 'struct mmu_gather':
> >> >> 
> >> >>         if (!(tlb->freed_tables || tlb->cleared_ptes ||
> >> >> 	      tlb->cleared_pmds || tlb->cleared_puds ||
> >> >> 	      tlb->cleared_p4ds))
> >> >>                 return;
> >> >> 
> >> >> But since there were no cleared PTEs (or anything else) during the
> >> >> unmap, this just returns and doesn't flush the TLB.
> >> >> 
> >> >> We now have an address space with a stale TLB entry at 'ptr1' and not
> >> >> even a VMA there.  There's nothing to stop a new VMA from going in,
> >> >> installing a *new* PTE, but getting data from the stale TLB entry that
> >> >> still hasn't been flushed.
> >> >
> >> > Thank you for the explanation.  I got you.  I think I could handle the
> >> > case through a new flag in the vma or something indicating that LUF has
> >> > deferred a necessary TLB flush for it during unmapping, so that the
> >> > mmu_gather mechanism can be aware of it.  Of course, the performance
> >> > change should be checked again.  Thoughts?
> >> 
> >> I suggest you start with the simple case.  That is, only support page
> >> reclaiming and migration.  A TLB flush can be enforced during unmap
> >> with something similar to flush_tlb_batched_pending().
> >
> > While reading flush_tlb_batched_pending(mm), I found it already performs
> > a TLB flush for the target mm if set_tlb_ubc_flush_pending(mm) has been
> > hit at least once since the last flush_tlb_batched_pending(mm).
> >
> > Since LUF also relies on set_tlb_ubc_flush_pending(mm), it's going to
> > perform the required TLB flush in flush_tlb_batched_pending(mm) during
> > munmap().  So it looks safe to me with regard to munmap() already.
> >
> > Is there something that I'm missing?
> >
> > JFYI, regarding mmap(), I have reworked the fault handler to give up
> > luf when needed in a better way.
> 
> If a TLB flush is always enforced during munmap(), then your solution can
> only avoid TLB flushing for page reclaiming and migration, not unmap.
								 ^
								 munmap()?

Do you mean munmap()?  IIUC, yes.  LUF only works for page reclaiming
and migration, but not for munmap().  When munmap()ing, LUF rather needs
to give up and perform the pending tlb flush.

LUF should not optimize tlb flushes for mappings that users explicitly
change, e.g. through mmap() and munmap().
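
Concretely (a sketch with assumed placement, not the posted patch), the
explicit paths would complete the deferred flush before touching the
layout:

	/*
	 * Sketch: in the munmap()/mmap(MAP_FIXED) paths, before the
	 * address space layout changes, complete any TLB flush that
	 * luf deferred for this mm.
	 */
	flush_tlb_batched_pending(vma->vm_mm);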

	Byungchul

> Or am I missing something?
> 
> --
> Best Regards,
> Huang, Ying
Byungchul Park May 30, 2024, 9:33 a.m. UTC | #32
On Thu, May 30, 2024 at 04:24:12PM +0800, Huang, Ying wrote:
> Byungchul Park <byungchul@sk.com> writes:
> 
> > On Thu, May 30, 2024 at 09:11:45AM +0800, Huang, Ying wrote:
> >> Byungchul Park <byungchul@sk.com> writes:
> >> 
> >> > On Wed, May 29, 2024 at 09:41:22AM -0700, Dave Hansen wrote:
> >> >> On 5/28/24 22:00, Byungchul Park wrote:
> >> >> > All the code updating ptes already performs the TLB flush needed in a
> >> >> > safe way if it's inevitable, e.g. munmap.  LUF, which controls when to
> >> >> > flush at a higher level than arch code, just leaves stale ro tlb entries
> >> >> > that are currently supposed to be in use.  Could you give a scenario
> >> >> > that you are concerned about?
> >> >> 
> >> >> Let's go back to this scenario:
> >> >> 
> >> >>  	fd = open("/some/file", O_RDONLY);
> >> >>  	ptr1 = mmap(-1, size, PROT_READ, ..., fd, ...);
> >> >>  	foo1 = *ptr1;
> >> >> 
> >> >> There's a read-only PTE at 'ptr1'.  Right?  The page being pointed to is
> >> >> eligible for LUF via the try_to_unmap() paths.  In other words, the page
> >> >> might be reclaimed at any time.  If it is reclaimed, the PTE will be
> >> >> cleared.
> >> >> 
> >> >> Then, the user might do:
> >> >> 
> >> >> 	munmap(ptr1, PAGE_SIZE);
> >> >> 
> >> >> Which will _eventually_ wind up in the zap_pte_range() loop.  But that
> >> >> loop will only see pte_none().  It doesn't do _anything_ to the 'struct
> >> >> mmu_gather'.
> >> >> 
> >> >> The munmap() then lands in tlb_flush_mmu_tlbonly() where it looks at the
> >> >> 'struct mmu_gather':
> >> >> 
> >> >>         if (!(tlb->freed_tables || tlb->cleared_ptes ||
> >> >> 	      tlb->cleared_pmds || tlb->cleared_puds ||
> >> >> 	      tlb->cleared_p4ds))
> >> >>                 return;
> >> >> 
> >> >> But since there were no cleared PTEs (or anything else) during the
> >> >> unmap, this just returns and doesn't flush the TLB.
> >> >> 
> >> >> We now have an address space with a stale TLB entry at 'ptr1' and not
> >> >> even a VMA there.  There's nothing to stop a new VMA from going in,
> >> >> installing a *new* PTE, but getting data from the stale TLB entry that
> >> >> still hasn't been flushed.
> >> >
> >> > Thank you for the explanation.  I got you.  I think I could handle the
> >> > case through a new flag in the vma or something indicating that LUF has
> >> > deferred a necessary TLB flush for it during unmapping, so that the
> >> > mmu_gather mechanism can be aware of it.  Of course, the performance
> >> > change should be checked again.  Thoughts?
> >> 
> >> I suggest you start with the simple case.  That is, only support page
> >> reclaiming and migration.  A TLB flush can be enforced during unmap
> >> with something similar to flush_tlb_batched_pending().
> >
> > While reading flush_tlb_batched_pending(mm), I found it already performs
> > a TLB flush for the target mm if set_tlb_ubc_flush_pending(mm) has been
> > hit at least once since the last flush_tlb_batched_pending(mm).
> >
> > Since LUF also relies on set_tlb_ubc_flush_pending(mm), it's going to
> > perform the required TLB flush in flush_tlb_batched_pending(mm) during
> > munmap().  So it looks safe to me with regard to munmap() already.
> >
> > Is there something that I'm missing?
> >
> > JFYI, regarding mmap(), I have reworked the fault handler to give up
> > luf when needed in a better way.
> 
> If a TLB flush is always enforced during munmap(), then your solution can
> only avoid TLB flushing for page reclaiming and migration, not unmap.

I'm not sure if I understand what you meant.  Could you explain it in
more detail?

LUF works only for the *unmapping* that happens during page reclaiming
and migration.  Unmappings other than page reclaiming and migration are
not what LUF works for.  That's why I thought flush_tlb_batched_pending()
could handle the pending tlb flushes in that case.

It'd be appreciated if you could explain what you meant in more detail.

	Byungchul

> Or am I missing something?
> 
> --
> Best Regards,
> Huang, Ying
Dave Hansen May 30, 2024, 1:50 p.m. UTC | #33
On 5/30/24 01:41, Byungchul Park wrote:
> LUF should not optimize tlb flushes for mappings that users explicitly
> change, e.g. through mmap() and munmap().

We are thoroughly going around in circles at this point.

I'm not quite sure what to do.  Ying and I see a problem that we've
tried to explain a couple of times.  We've tried to show the connection
between a LUF-elided TLB flush and how that could affect a later
munmap() or mmap(MAP_FIXED).

But these responses seem to keep going back to the fact that LUF doesn't
directly affect munmap(), which is true, but quite irrelevant to the
problem being described.

So we're at an impasse.

Byungchul, perhaps you should spin another series and maybe Ying and I
have to write up a test case to show the bug that we see.  Or perhaps
someone else can jump into the thread and bridge the communication gap.
Huang, Ying May 31, 2024, 1:45 a.m. UTC | #34
Byungchul Park <byungchul@sk.com> writes:

> On Thu, May 30, 2024 at 04:24:12PM +0800, Huang, Ying wrote:
>> Byungchul Park <byungchul@sk.com> writes:
>> 
>> > On Thu, May 30, 2024 at 09:11:45AM +0800, Huang, Ying wrote:
>> >> Byungchul Park <byungchul@sk.com> writes:
>> >> 
>> >> > On Wed, May 29, 2024 at 09:41:22AM -0700, Dave Hansen wrote:
>> >> >> On 5/28/24 22:00, Byungchul Park wrote:
>> >> >> > All the code updating ptes already performs the TLB flush needed in a
>> >> >> > safe way if it's inevitable, e.g. munmap.  LUF, which controls when to
>> >> >> > flush at a higher level than arch code, just leaves stale ro tlb entries
>> >> >> > that are currently supposed to be in use.  Could you give a scenario
>> >> >> > that you are concerned about?
>> >> >> 
>> >> >> Let's go back to this scenario:
>> >> >> 
>> >> >>  	fd = open("/some/file", O_RDONLY);
>> >> >>  	ptr1 = mmap(-1, size, PROT_READ, ..., fd, ...);
>> >> >>  	foo1 = *ptr1;
>> >> >> 
>> >> >> There's a read-only PTE at 'ptr1'.  Right?  The page being pointed to is
>> >> >> eligible for LUF via the try_to_unmap() paths.  In other words, the page
>> >> >> might be reclaimed at any time.  If it is reclaimed, the PTE will be
>> >> >> cleared.
>> >> >> 
>> >> >> Then, the user might do:
>> >> >> 
>> >> >> 	munmap(ptr1, PAGE_SIZE);
>> >> >> 
>> >> >> Which will _eventually_ wind up in the zap_pte_range() loop.  But that
>> >> >> loop will only see pte_none().  It doesn't do _anything_ to the 'struct
>> >> >> mmu_gather'.
>> >> >> 
>> >> >> The munmap() then lands in tlb_flush_mmu_tlbonly() where it looks at the
>> >> >> 'struct mmu_gather':
>> >> >> 
>> >> >>         if (!(tlb->freed_tables || tlb->cleared_ptes ||
>> >> >> 	      tlb->cleared_pmds || tlb->cleared_puds ||
>> >> >> 	      tlb->cleared_p4ds))
>> >> >>                 return;
>> >> >> 
>> >> >> But since there were no cleared PTEs (or anything else) during the
>> >> >> unmap, this just returns and doesn't flush the TLB.
>> >> >> 
>> >> >> We now have an address space with a stale TLB entry at 'ptr1' and not
>> >> >> even a VMA there.  There's nothing to stop a new VMA from going in,
>> >> >> installing a *new* PTE, but getting data from the stale TLB entry that
>> >> >> still hasn't been flushed.
>> >> >
>> >> > Thank you for the explanation.  I got you.  I think I could handle the
>> >> > case through a new flag in the vma or something indicating that LUF has
>> >> > deferred a necessary TLB flush for it during unmapping, so that the
>> >> > mmu_gather mechanism can be aware of it.  Of course, the performance
>> >> > change should be checked again.  Thoughts?
>> >> 
>> >> I suggest you start with the simple case.  That is, only support page
>> >> reclaiming and migration.  A TLB flush can be enforced during unmap
>> >> with something similar to flush_tlb_batched_pending().
>> >
>> > While reading flush_tlb_batched_pending(mm), I found it already performs
>> > a TLB flush for the target mm if set_tlb_ubc_flush_pending(mm) has been
>> > hit at least once since the last flush_tlb_batched_pending(mm).
>> >
>> > Since LUF also relies on set_tlb_ubc_flush_pending(mm), it's going to
>> > perform the required TLB flush in flush_tlb_batched_pending(mm) during
>> > munmap().  So it looks safe to me with regard to munmap() already.
>> >
>> > Is there something that I'm missing?
>> >
>> > JFYI, regarding mmap(), I have reworked the fault handler to give up
>> > luf when needed in a better way.
>> 
>> If a TLB flush is always enforced during munmap(), then your solution can
>> only avoid TLB flushing for page reclaiming and migration, not unmap.
>
> I'm not sure if I understand what you meant.  Could you explain it in
> more detail?
>
> LUF works only for the *unmapping* that happens during page reclaiming
> and migration.  Unmappings other than page reclaiming and migration are
> not what LUF works for.  That's why I thought flush_tlb_batched_pending()
> could handle the pending tlb flushes in that case.
>
> It'd be appreciated if you could explain what you meant in more detail.
>

In the following email, you have claimed that LUF can avoid TLB flushing
for munmap()/mmap().

https://lore.kernel.org/linux-mm/20240527015732.GA61604@system.software.com/

Now, you said it can only avoid TLB flushing for page reclaiming and
migration.

So, to avoid confusion, I suggest you send out a new series and make it
explicit that it can only optimize page reclaiming and migration, but
not munmap().  And it would also be good to add some words about how it
interacts with other TLB flushing mechanisms.

--
Best Regards,
Huang, Ying
Byungchul Park May 31, 2024, 2:06 a.m. UTC | #35
On Thu, May 30, 2024 at 06:50:48AM -0700, Dave Hansen wrote:
> On 5/30/24 01:41, Byungchul Park wrote:
> > LUF should not optimize tlb flushes for mappings that users explicitly
> > change, e.g. through mmap() and munmap().
> 
> We are thoroughly going around in circles at this point.
> 
> I'm not quite sure what to do.  Ying and I see a problem that we've
> tried to explain a couple of times.  We've tried to show the connection
> between a LUF-elided TLB flush and how that could affect a later
> munmap() or mmap(MAP_FIXED).
> 
> But these responses seem to keep going back to the fact that LUF doesn't

I just wanted to understand exactly what Ying meant.  My answer might
have come out wrong if I misunderstood him.

> directly affect munmap(), which is true, but quite irrelevant to the
> problem being described.
> 
> So we're at an impasse.
> 
> Byungchul, perhaps you should spin another series and maybe Ying and I

I don't think the current implementation is perfect.  I just wanted to
know what I'm missing now, but.. yes.  It would be much better to
communicate based on a real bug, if one exists.

I will respin the next version shortly.

	Byungchul

> have to write up a test case to show the bug that we see.  Or perhaps
> someone else can jump into the thread and bridge the communication gap.
Byungchul Park May 31, 2024, 2:20 a.m. UTC | #36
On Fri, May 31, 2024 at 09:45:33AM +0800, Huang, Ying wrote:
> Byungchul Park <byungchul@sk.com> writes:
> 
> > On Thu, May 30, 2024 at 04:24:12PM +0800, Huang, Ying wrote:
> >> Byungchul Park <byungchul@sk.com> writes:
> >> 
> >> > On Thu, May 30, 2024 at 09:11:45AM +0800, Huang, Ying wrote:
> >> >> Byungchul Park <byungchul@sk.com> writes:
> >> >> 
> >> >> > On Wed, May 29, 2024 at 09:41:22AM -0700, Dave Hansen wrote:
> >> >> >> On 5/28/24 22:00, Byungchul Park wrote:
> >> >> >> > All the code updating ptes already performs the TLB flush needed in a
> >> >> >> > safe way if it's inevitable, e.g. munmap.  LUF, which controls when to
> >> >> >> > flush at a higher level than arch code, just leaves stale ro tlb entries
> >> >> >> > that are currently supposed to be in use.  Could you give a scenario
> >> >> >> > that you are concerned about?
> >> >> >> 
> >> >> >> Let's go back to this scenario:
> >> >> >> 
> >> >> >>  	fd = open("/some/file", O_RDONLY);
> >> >> >>  	ptr1 = mmap(-1, size, PROT_READ, ..., fd, ...);
> >> >> >>  	foo1 = *ptr1;
> >> >> >> 
> >> >> >> There's a read-only PTE at 'ptr1'.  Right?  The page being pointed to is
> >> >> >> eligible for LUF via the try_to_unmap() paths.  In other words, the page
> >> >> >> might be reclaimed at any time.  If it is reclaimed, the PTE will be
> >> >> >> cleared.
> >> >> >> 
> >> >> >> Then, the user might do:
> >> >> >> 
> >> >> >> 	munmap(ptr1, PAGE_SIZE);
> >> >> >> 
> >> >> >> Which will _eventually_ wind up in the zap_pte_range() loop.  But that
> >> >> >> loop will only see pte_none().  It doesn't do _anything_ to the 'struct
> >> >> >> mmu_gather'.
> >> >> >> 
> >> >> >> The munmap() then lands in tlb_flush_mmu_tlbonly() where it looks at the
> >> >> >> 'struct mmu_gather':
> >> >> >> 
> >> >> >>         if (!(tlb->freed_tables || tlb->cleared_ptes ||
> >> >> >> 	      tlb->cleared_pmds || tlb->cleared_puds ||
> >> >> >> 	      tlb->cleared_p4ds))
> >> >> >>                 return;
> >> >> >> 
> >> >> >> But since there were no cleared PTEs (or anything else) during the
> >> >> >> unmap, this just returns and doesn't flush the TLB.
> >> >> >> 
> >> >> >> We now have an address space with a stale TLB entry at 'ptr1' and not
> >> >> >> even a VMA there.  There's nothing to stop a new VMA from going in,
> >> >> >> installing a *new* PTE, but getting data from the stale TLB entry that
> >> >> >> still hasn't been flushed.
> >> >> >
> >> >> > Thank you for the explanation.  I got you.  I think I could handle the
> >> >> > case through a new flag in the vma or something indicating that LUF has
> >> >> > deferred a necessary TLB flush for it during unmapping, so that the
> >> >> > mmu_gather mechanism can be aware of it.  Of course, the performance
> >> >> > change should be checked again.  Thoughts?
> >> >> 
> >> >> I suggest you start with the simple case.  That is, only support page
> >> >> reclaiming and migration.  A TLB flush can be enforced during unmap
> >> >> with something similar to flush_tlb_batched_pending().
> >> >
> >> > While reading flush_tlb_batched_pending(mm), I found it already performs
> >> > a TLB flush for the target mm if set_tlb_ubc_flush_pending(mm) has been
> >> > hit at least once since the last flush_tlb_batched_pending(mm).
> >> >
> >> > Since LUF also relies on set_tlb_ubc_flush_pending(mm), it's going to
> >> > perform the required TLB flush in flush_tlb_batched_pending(mm) during
> >> > munmap().  So it looks safe to me with regard to munmap() already.
> >> >
> >> > Is there something that I'm missing?
> >> >
> >> > JFYI, regarding mmap(), I have reworked the fault handler to give up
> >> > luf when needed in a better way.
> >> 
> >> If a TLB flush is always enforced during munmap(), then your solution can
> >> only avoid TLB flushing for page reclaiming and migration, not unmap.
> >
> > I'm not sure if I understand what you meant.  Could you explain it in
> > more detail?
> >
> > LUF works only for the *unmapping* that happens during page reclaiming
> > and migration.  Unmappings other than page reclaiming and migration are
> > not what LUF works for.  That's why I thought flush_tlb_batched_pending()
> > could handle the pending tlb flushes in that case.
> >
> > It'd be appreciated if you could explain what you meant in more detail.
> >
> 
> In the following email, you have claimed that LUF can avoid TLB flushing
> for munmap()/mmap().

My bad.  Sorry for that confusing expression.

"give up LUF at mmap()" doesn't mean giving up applying LUF to mmap().

"give up LUF at mmap()" means giving up the pending that has been
induced by LUF, in other words, giving up the benefit by LUF because we
are going through mmap() / munmap().

I will be more careful in expressing these things.

> https://lore.kernel.org/linux-mm/20240527015732.GA61604@system.software.com/
> 
> Now, you said it can only avoid TLB flushing for page reclaiming and
> migration.

This is true.

	Byungchul

> So, to avoid confusion, I suggest you send out a new series and make it
> explicit that it can only optimize page reclaiming and migration, but
> not munmap().  And it would also be good to add some words about how it
> interacts with other TLB flushing mechanisms.
> 
> --
> Best Regards,
> Huang, Ying