[RFC,v4,0/6] Promotion of Unmapped Page Cache Folios.

Message ID: 20250411221111.493193-1-gourry@gourry.net

Gregory Price April 11, 2025, 10:11 p.m. UTC
Unmapped page cache pages can be demoted to low-tier memory, but
they can presently only be promoted in two conditions:
    1) The page is fully swapped out and re-faulted
    2) The page becomes mapped (and exposed to NUMA hint faults)

This RFC proposes promoting unmapped page cache pages by using
folio_mark_accessed as a hotness hint for unmapped pages.

We show in a microbenchmark that this mechanism can increase
performance by up to 23.5% compared to leaving page cache on the
low tier - when that page cache becomes excessively hot.

When disabled (NUMA tiering off), overhead in folio_mark_accessed
was limited to <1% in a worst-case scenario (all work is file_read()).

Patches 1-2
        allow NULL as valid input to the migration prep interfaces
        for vmf/vma - which are not present for unmapped folios.
Patch 3
        adds NUMA_HINT_PAGE_CACHE to vmstat
Patch 4
        implements migrate_misplaced_folio_batch
Patch 5
        adds the promotion mechanism, along with a sysfs
        extension which defaults the behavior to off:
        /sys/kernel/mm/numa/pagecache_promotion_enabled
Patch 6
        adds the MGLRU implementation by Donet Tom

v4 Notes
========
- Added MGLRU implementation
- Dropped the ifdef change patch after build testing
- Worst-case scenario analysis (thrashing)
- FIO test analysis

Test Environment
================
    1.5-3.7GHz CPU, ~4000 BogoMIPS
    1TB machine with 768GB DRAM and 256GB CXL

FIO Test:
=========
We evaluated this using FIO with the page cache pre-loaded.

Step 1: Load 128GB file into page cache with a mempolicy
        (dram, cxl, and cxl to promote)
Step 2: Run FIO with 4 reading jobs
        Config does not invalidate pagecache between runs
Step 3: Repeat with a 900GB file that spills into CXL and
        creates thrashing to show its impact.

Configuration:
[global]
bs=1M
size=128G  # 900G
time_based=1
numjobs=4
rw=randread,randwrite
filename=test.data
direct=0
invalidate=0 # Do not drop the cache between runs
[test1]
ioengine=psync
iodepth=64
runtime=240s # Also did 480s, didn't show much difference

On promotion runs, vmstat reported the entire file is promoted
	numa_pages_migrated 33554772   (128.01GB)

DRAM (128GB):
  lat (usec)   : 50=98.34%, 100=1.61%, 250=0.05%, 500=0.01%
  cpu          : usr=1.42%, sys=98.35%, ctx=213, majf=0, minf=264
  bw=83.5GiB/s (89.7GB/s)

Remote (128GB)
  lat (usec)   : 100=91.78%, 250=8.22%, 500=0.01%
  cpu          : usr=0.66%, sys=99.13%, ctx=449, majf=0, minf=263
  bw=41.4GiB/s (44.4GB/s)

Promo (128GB)
  lat (usec)   : 50=88.02%, 100=10.65%, 250=1.05%, 500=0.20%, 750=0.06%
  lat (usec)   : 1000=0.01%
  lat (msec)   : 2=0.01%
  cpu          : usr=1.44%, sys=97.72%, ctx=1679, majf=0, minf=265
  bw=79.2GiB/s (85.0GB/s)

900GB Hot - No Promotion (~150GB spills to remote node via demotion)
  lat (usec)   : 50=69.26%, 100=13.79%, 250=16.95%, 500=0.01%
  bw=64.5GiB/s (69.3GB/s)

900GB Hot - Promotion (Causes thrashing between demotion/promotion)
  lat (usec)   : 50=47.79%, 100=29.59%, 250=21.16%, 500=1.24%, 750=0.04%
  lat (usec)   : 1000=0.03%
  lat (msec)   : 2=0.15%, 4=0.01%
  bw=47.6GiB/s (51.1GB/s)

900GB Hot - No remote memory (Fault-in/page-out of read-only data)
  lat (usec)   : 50=39.73%, 100=31.34%, 250=4.71%, 500=0.01%, 750=0.01%
  lat (usec)   : 1000=1.78%
  lat (msec)   : 2=21.63%, 4=0.81%, 10=0.01%, 20=0.01%
  bw=10.2GiB/s (10.9GB/s)

Obviously some portion of the overhead comes from migration, but the
results here are pretty dramatic.  We can regain nearly all of the
performance in a degenerate scenario (demoted page cache becomes hot)
by turning on promotion - even temporarily.

In the scenario where the entire workload is hot, turning on promotion
causes thrashing, and we hurt performance.

So this feature is useful in one of two scenarios:
1) Headroom on DRAM is available, and we opportunistically move page
   cache to the higher tier as it's accessed, or
2) A lower-performance node becomes unevenly pressured.
   It doesn't help us if the higher node is pressured.

For example, it would be nice for a userland daemon to detect that
DRAM-tier memory has become available, and to flip the bit to migrate
any hotter page cache up a level until DRAM becomes pressured again.
Cold page cache stays put and new allocations still occur on the fast
tier.
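
As a rough illustration only (not part of this series), such a daemon
might look like the sketch below.  The node0 meminfo path is standard
sysfs; the 10% headroom threshold, the 5-second poll, and node0-as-DRAM
are assumptions of this sketch:

/*
 * Hedged sketch: poll the top-tier node's free memory and toggle the
 * promotion knob accordingly.  Threshold and interval are arbitrary.
 */
#include <stdio.h>
#include <unistd.h>

#define KNOB  "/sys/kernel/mm/numa/pagecache_promotion_enabled"
#define NODE0 "/sys/devices/system/node/node0/meminfo"

static void read_node0(long *free_kb, long *total_kb)
{
        char line[256];
        FILE *f = fopen(NODE0, "r");

        *free_kb = *total_kb = 0;
        if (!f)
                return;
        while (fgets(line, sizeof(line), f)) {
                /* Each sscanf only matches its own line; others fail
                 * harmlessly. */
                sscanf(line, "Node 0 MemFree: %ld kB", free_kb);
                sscanf(line, "Node 0 MemTotal: %ld kB", total_kb);
        }
        fclose(f);
}

static void set_knob(int on)
{
        FILE *f = fopen(KNOB, "w");

        if (f) {
                fprintf(f, "%d\n", on);
                fclose(f);
        }
}

int main(void)
{
        long free_kb, total_kb;

        for (;;) {
                read_node0(&free_kb, &total_kb);
                /* Promote only while node 0 has >10% free headroom. */
                if (total_kb)
                        set_knob(free_kb * 10 > total_kb);
                sleep(5);
        }
}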


Worst Case Scenario Test (Micro-benchmark)
==========================================

Goal:
   Generate promotions and demonstrate upper-bound on performance
   overhead and gain/loss.

System Settings:
   CXL Memory in ZONE_MOVABLE (no fallback alloc, demotion-only use)
   echo 2 > /proc/sys/kernel/numa_balancing
   echo 1 > /sys/kernel/mm/numa/pagecache_promotion_enabled
   echo 1 > /sys/kernel/mm/numa/demotion_enabled
   
Test process:
   In each test, we do a linear read of a 128GB file into a buffer
   in a loop.  To allocate the pagecache into CXL, we use mbind prior
   to the CXL test runs and read the file (this pre-load step is
   sketched below).  We omit the overhead of allocating the buffer and
   initializing the memory into CXL from the test runs.
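
   A minimal sketch of the pre-load step, assuming node 1 is the CXL
   node; the test above used mbind on a mapping, while this sketch uses
   the task-wide set_mempolicy() for brevity (link with -lnuma; the
   file path is a placeholder):

#include <fcntl.h>
#include <numaif.h>
#include <unistd.h>

int main(void)
{
        static char buf[1 << 20];
        unsigned long nodemask = 1UL << 1;      /* assumed CXL node 1 */
        int fd = open("testfile", O_RDONLY);    /* placeholder path */

        /* Page cache pages faulted in by these reads are allocated
         * following the task's memory policy. */
        if (fd < 0 || set_mempolicy(MPOL_BIND, &nodemask, 64))
                return 1;
        while (read(fd, buf, sizeof(buf)) > 0)
                ;
        set_mempolicy(MPOL_DEFAULT, NULL, 0);
        close(fd);
        return 0;
}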

   1) file allocated in DRAM with mechanisms off
   2) file allocated in DRAM with balancing on but promotion off
   3) file allocated in DRAM with balancing and promotion on
      (promotion check is negative because all pages are top tier)
   4) file allocated in CXL with mechanisms off
   5) file allocated in CXL with mechanisms on

Each test was run with 50 read cycles and averaged (where relevant)
to account for system noise.  This number of cycles gives the promotion
mechanism time to promote the vast majority of memory (usually <1MB
remaining in worst case).

Tests 2 and 3 test the upper bound on overhead of the new checks when
there are no pages to migrate but work is dominated by file_read().

Average time per 128GB read cycle, in seconds:

|     1     |    2     |     3       |    4     |      5         |
| DRAM Base | Promo On | TopTier Chk | CXL Base | Post-Promotion |
|  7.5804   |  7.7586  |   7.9726    |   9.75   |    7.8941      |

Baseline DRAM vs Baseline CXL shows a ~28% overhead (9.75 vs 7.5804)
just from allowing the file to remain on CXL, while after promotion
performance trends back towards the overhead of the TopTier check
time - a total overhead reduction of ~84% (or ~5% overhead, 7.8941 vs
7.5804, down from ~23.5%, 9.75 vs 7.8941).

During promotion, we do see overhead which eventually tapers off over
time.  Here is a sample of the first 10 cycles, during which promotion
is most aggressive; overhead drops off dramatically as the majority of
memory is migrated to the top tier.

12.79, 12.52, 12.33, 12.03, 11.81, 11.58, 11.36, 11.1, 8, 7.96

After promotion, turning the mechanism off via sysfs restored the
overall performance to the DRAM baseline.  The slight (~1%)
difference between post-migration performance and the baseline
mechanism-overhead check appears to be general variance, as similar
times were observed during the baseline checks on subsequent runs.

The mechanism itself represents a ~2.5% overhead in a worst-case
scenario (all work is file_read(), all pages are in DRAM, all pages are
hot - which is highly unrealistic).  This is inclusive of any overhead
from the new checks in folio_mark_accessed().

Development History and Notes
=======================================
During development, we explored the following proposals:

1) directly promoting within folio_mark_accessed (FMA)
   Originally suggested by Johannes Weiner
   https://lore.kernel.org/all/20240803094715.23900-1-gourry@gourry.net/

   This caused deadlocks due to the fact that the PTL was held
   in a variety of cases - but in particular during task exit.
   It is also incredibly inflexible and causes promotion-on-fault.
   It was discussed that a deferral mechanism was preferred.


2) promoting in filemap.c locations (callers of FMA)
   Originally proposed by Feng Tang and Ying Huang
   https://git.kernel.org/pub/scm/linux/kernel/git/vishal/tiering.git/patch/?id=5f2e64ce75c0322602c2ec8c70b64bb69b1f1329

   First, we saw this as less problematic than directly hooking FMA,
   but we realized this has the potential to miss data in a variety of
   locations: swap.c, memory.c, gup.c, ksm.c, paddr.c - etc.

   Second, we discovered that the lock state of pages is very subtle,
   and that these locations in filemap.c can be called in an atomic
   context.  Prototypes led to a variety of stalls and lockups.


3) a new LRU - originally proposed by Keith Busch
   https://git.kernel.org/pub/scm/linux/kernel/git/kbusch/linux.git/patch/?id=6616afe9a722f6ebedbb27ade3848cf07b9a3af7

   There are two issues with this approach: PG_promotable and reclaim.

   First - PG_promotable has generally been discouraged.

   Second - Attaching this mechanism to an LRU is both backwards and
   counter-intuitive.  A promotable list is better served by a MOST
   recently used list, and since LRUs are generally only shrunk when
   exposed to pressure, it would require implementing a new promotion
   list shrinker that runs separately from the existing reclaim logic.


4) Adding a separate kthread - suggested by many

   This is - to an extent - a more general version of the LRU proposal.
   We still have to track the folios - which likely requires the
   addition of a page flag.  Additionally, this method would actually
   contend pretty heavily with LRU behavior - i.e. we'd want to
   throttle addition to the promotion candidate list in some scenarios.


5) Doing it in task work

   This seemed to be the most realistic after considering the above.

   We observe the following:
    - FMA is an ideal hook for this and isolation is safe here
    - the new promotion_candidate function is an ideal hook for new
      filter logic (throttling, fairness, etc).
    - isolated folios are either promoted or put back on task resume;
      there are no additional concurrency mechanics to worry about
    - the mechanism can be made optional via a sysfs hook to avoid
      overhead in degenerate scenarios (thrashing)
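
   As a rough sketch of the above shape (not the series itself):
   promotion_candidate() and migrate_misplaced_folio_batch() are named
   in this cover letter, while the promo_* task fields and promote_node
   are hypothetical stand-ins:

#include <linux/migrate.h>
#include <linux/swap.h>
#include <linux/task_work.h>

/*
 * Hedged sketch of the task-work deferral; requeue/flag-reset and
 * error handling details are omitted.
 */
static void promotion_work_fn(struct callback_head *head)
{
        /* Runs in task context on return to userspace: folios are
         * either promoted to the top tier or put back on the LRU. */
        migrate_misplaced_folio_batch(&current->promo_folios,
                                      promote_node);
        current->promo_work_queued = false;
}

static void promotion_candidate(struct folio *folio)
{
        /* Called from folio_mark_accessed() for an unmapped,
         * lower-tier folio: isolate and defer, never migrate here. */
        if (!folio_isolate_lru(folio))
                return;
        list_add_tail(&folio->lru, &current->promo_folios);
        if (!current->promo_work_queued) {
                init_task_work(&current->promo_work, promotion_work_fn);
                task_work_add(current, &current->promo_work, TWA_RESUME);
                current->promo_work_queued = true;
        }
}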


Suggested-by: Huang Ying <ying.huang@linux.alibaba.com>
Suggested-by: Johannes Weiner <hannes@cmpxchg.org>
Suggested-by: Keith Busch <kbusch@meta.com>
Suggested-by: Feng Tang <feng.tang@intel.com>
Tested-by: Neha Gholkar <nehagholkar@meta.com>
Signed-off-by: Gregory Price <gourry@gourry.net>
Co-developed-by: Donet Tom <donettom@linux.ibm.com>

Donet Tom (1):
  mm/swap.c: Enable promotion of unmapped MGLRU page cache pages

Gregory Price (5):
  migrate: Allow migrate_misplaced_folio_prepare() to accept a NULL VMA.
  memory: allow non-fault migration in numa_migrate_check path
  vmstat: add page-cache numa hints
  migrate: implement migrate_misplaced_folio_batch
  migrate,sysfs: add pagecache promotion

 .../ABI/testing/sysfs-kernel-mm-numa          | 20 +++++
 include/linux/memory-tiers.h                  |  2 +
 include/linux/migrate.h                       | 11 +++
 include/linux/sched.h                         |  4 +
 include/linux/sched/sysctl.h                  |  1 +
 include/linux/vm_event_item.h                 |  8 ++
 init/init_task.c                              |  2 +
 kernel/sched/fair.c                           | 24 ++++-
 mm/memcontrol.c                               |  1 +
 mm/memory-tiers.c                             | 27 ++++++
 mm/memory.c                                   | 30 ++++---
 mm/mempolicy.c                                | 25 ++++--
 mm/migrate.c                                  | 88 ++++++++++++++++++-
 mm/swap.c                                     | 15 +++-
 mm/vmstat.c                                   |  2 +
 15 files changed, 236 insertions(+), 24 deletions(-)

Comments

Matthew Wilcox April 11, 2025, 11:49 p.m. UTC | #1
On Fri, Apr 11, 2025 at 06:11:05PM -0400, Gregory Price wrote:
> Unmapped page cache pages can be demoted to low-tier memory, but

No.  Page cache should never be demoted to low-tier memory.
NACK this patchset.
Gregory Price April 12, 2025, 12:09 a.m. UTC | #2
On Sat, Apr 12, 2025 at 12:49:18AM +0100, Matthew Wilcox wrote:
> On Fri, Apr 11, 2025 at 06:11:05PM -0400, Gregory Price wrote:
> > Unmapped page cache pages can be demoted to low-tier memory, but
> 
> No.  Page cache should never be demoted to low-tier memory.
> NACK this patchset.

This wasn't a statement of approval of page cache being on lower tiers,
it's a statement of fact.  Enabling demotion causes this issue.

~Gregory
Matthew Wilcox April 12, 2025, 12:35 a.m. UTC | #3
On Fri, Apr 11, 2025 at 08:09:55PM -0400, Gregory Price wrote:
> On Sat, Apr 12, 2025 at 12:49:18AM +0100, Matthew Wilcox wrote:
> > On Fri, Apr 11, 2025 at 06:11:05PM -0400, Gregory Price wrote:
> > > Unmapped page cache pages can be demoted to low-tier memory, but
> > 
> > No.  Page cache should never be demoted to low-tier memory.
> > NACK this patchset.
> 
> This wasn't a statement of approval of page cache being on lower tiers,
> it's a statement of fact.  Enabling demotion causes this issue.

Then that's the bug that needs to be fixed.  Not adding 200+ lines
of code to recover from a situation that should never happen.
Gregory Price April 12, 2025, 12:44 a.m. UTC | #4
On Sat, Apr 12, 2025 at 01:35:56AM +0100, Matthew Wilcox wrote:
> On Fri, Apr 11, 2025 at 08:09:55PM -0400, Gregory Price wrote:
> > On Sat, Apr 12, 2025 at 12:49:18AM +0100, Matthew Wilcox wrote:
> > > On Fri, Apr 11, 2025 at 06:11:05PM -0400, Gregory Price wrote:
> > > > Unmapped page cache pages can be demoted to low-tier memory, but
> > > 
> > > No.  Page cache should never be demoted to low-tier memory.
> > > NACK this patchset.
> > 
> > This wasn't a statement of approval of page cache being on lower tiers,
> > it's a statement of fact.  Enabling demotion causes this issue.
> 
> Then that's the bug that needs to be fixed.  Not adding 200+ lines
> of code to recover from a situation that should never happen.

Well, I have a use case that makes valuable use of putting the page cache
on a farther node rather than pushing it out to disk.  But this
discussion aside, I think we could simply make this a separate mode of
demotion_enabled

/* Only demote anonymous memory */
echo 2 > /sys/kernel/mm/numa/demotion_enabled

Assuming we can recognize anon from just struct folio

~Gregory
Ritesh Harjani (IBM) April 12, 2025, 11:52 a.m. UTC | #5
Gregory Price <gourry@gourry.net> writes:

> On Sat, Apr 12, 2025 at 01:35:56AM +0100, Matthew Wilcox wrote:
>> On Fri, Apr 11, 2025 at 08:09:55PM -0400, Gregory Price wrote:
>> > On Sat, Apr 12, 2025 at 12:49:18AM +0100, Matthew Wilcox wrote:
>> > > On Fri, Apr 11, 2025 at 06:11:05PM -0400, Gregory Price wrote:
>> > > > Unmapped page cache pages can be demoted to low-tier memory, but
>> > > 
>> > > No.  Page cache should never be demoted to low-tier memory.
>> > > NACK this patchset.

Hi Matthew, 

Could you please give some context around why page cache shouldn't be
considered as a demotion target when demotion is enabled?  Wouldn't
demoting page cache pages to a lower tier (when we have enough space in
the lower tier) be a better alternative than discarding these pages and
later doing I/O to read them back again?

>> > 
>> > This wasn't a statement of approval of page cache being on lower tiers,
>> > it's a statement of fact.  Enabling demotion causes this issue.
>> 
>> Then that's the bug that needs to be fixed.  Not adding 200+ lines
>> of code to recover from a situation that should never happen.

/me goes and checks when the demotion feature was added... 

Ok, so I believe this was added here [1]
"[PATCH -V10 4/9] mm/migrate: demote pages during reclaim". 
[1]: https://lore.kernel.org/all/20210715055145.195411-5-ying.huang@intel.com/T/#u

I think systems with persistent memory acting as DRAM nodes could choose
to demote page cache pages too, to a lower tier, instead of simply
discarding them and later doing I/O to read them back from disk.

e.g. when one has a smaller-size DRAM as the faster tier and a
larger-size PMEM as the slower tier.  During memory pressure on the
faster tier, demoting page cache pages to the slower tier can be helpful
to avoid doing I/O later to read them back in, no?

>
> Well, I have a use case that makes valuable use of putting the page cache
> on a farther node rather than pushing it out to disk.  But this
> discussion aside, I think we could simply make this a separate mode of
> demotion_enabled
>
> /* Only demote anonymous memory */
> echo 2 > /sys/kernel/mm/numa/demotion_enabled
>

If we are going down this road... then should we consider what other
choices users may need for their usecases? e.g.

0: Demotion disabled
1: Demotion enabled for both anon and file pages
Till here the support is already present.

2: Demotion enabled only for anon pages
3: Demotion enabled only for file pages

Should this be further classified for dirty v/s clean page cache
pages too?

> Assuming we can recognize anon from just struct folio

I am not 100% sure of this, so others should correct me.  Should this
simply be folio_is_file_lru() to differentiate page cache pages?

Although this might still give us anon pages which have had
PG_swapbacked dropped as a result of MADV_FREE.  Not sure if that needs
any special care though?


-ritesh
Gregory Price April 12, 2025, 2:35 p.m. UTC | #6
On Sat, Apr 12, 2025 at 05:22:24PM +0530, Ritesh Harjani wrote:
> Gregory Price <gourry@gourry.net> writes:
> 0: Demotion disabled
> 1: Demotion enabled for both anon and file pages
> Till here the support is already present.
> 
> 2: Demotion enabled only for anon pages
> 3: Demotion enabled only for file pages
> 
> Should this be further classified for dirty v/s clean page cache
> pages too?
> 

There are some limitations around migrating dirty pages IIRC, but right
now the vmscan code indiscriminately adds any and all folios to the
demotion list if it gets to that chunk of the code.

> > Assuming we can recognize anon from just struct folio
> 
> I am not 100% sure of this, so others should correct me.  Should this
> simply be folio_is_file_lru() to differentiate page cache pages?
> 
> Although this might still give us anon pages which have had
> PG_swapbacked dropped as a result of MADV_FREE.  Not sure if that needs
> any special care though?
> 

I made the comment without looking, but yeah, PageAnon/folio_test_anon
exist, so this exists in some form somewhere.  Basically there's some
space to do something a little less indiscriminate here - a rough
sketch below.
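
For illustration only - the mode values mirror Ritesh's list above;
sysctl_numa_demotion_mode and the DEMOTE_* names are hypothetical,
while folio_test_anon() is the existing helper:

/* Hedged sketch of a less indiscriminate demotion check. */
enum {
        DEMOTE_OFF,             /* 0: demotion disabled */
        DEMOTE_ALL,             /* 1: anon and file pages */
        DEMOTE_ANON_ONLY,       /* 2: only anon pages */
        DEMOTE_FILE_ONLY,       /* 3: only file pages */
};

static int sysctl_numa_demotion_mode = DEMOTE_ALL;

static bool folio_may_demote(struct folio *folio)
{
        switch (sysctl_numa_demotion_mode) {
        case DEMOTE_ALL:
                return true;
        case DEMOTE_ANON_ONLY:
                return folio_test_anon(folio);
        case DEMOTE_FILE_ONLY:
                return !folio_test_anon(folio);
        default:
                return false;
        }
}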

~Gregory
Donet Tom April 13, 2025, 5:23 a.m. UTC | #7
On 4/12/25 5:19 AM, Matthew Wilcox wrote:
> On Fri, Apr 11, 2025 at 06:11:05PM -0400, Gregory Price wrote:
>> Unmapped page cache pages can be demoted to low-tier memory, but
> No.  Page cache should never be demoted to low-tier memory.
> NACK this patchset.

Hi Matthew,

I have one doubt. Under memory pressure, page cache allocations can
fall back to lower-tier memory, right? So later, if those page cache pages
become hot, shouldn't we promote them?

Thanks
Donet
Matthew Wilcox April 13, 2025, 12:48 p.m. UTC | #8
On Sun, Apr 13, 2025 at 10:53:48AM +0530, Donet Tom wrote:
> 
> On 4/12/25 5:19 AM, Matthew Wilcox wrote:
> > On Fri, Apr 11, 2025 at 06:11:05PM -0400, Gregory Price wrote:
> > > Unmapped page cache pages can be demoted to low-tier memory, but
> > No.  Page cache should never be demoted to low-tier memory.
> > NACK this patchset.
> 
> Hi Matthew,
> 
> I have one doubt. Under memory pressure, page cache allocations can
> fall back to lower-tier memory, right? So later, if those page cache pages
> become hot, shouldn't we promote them?

That shouldn't happen either.  CXL should never be added to the page
allocator.  You guys are creating a lot of problems for yourselves,
and I've been clear that I want no part of this.
SeongJae Park April 15, 2025, 12:45 a.m. UTC | #9
Hi Gregory,


Thank you for continuing and sharing this nice work.  Adding some comments
based on my humble and DAMON-biased perspective below.

On Fri, 11 Apr 2025 18:11:05 -0400 Gregory Price <gourry@gourry.net> wrote:

> Unmapped page cache pages can be demoted to low-tier memory, but
> they can presently only be promoted in two conditions:
>     1) The page is fully swapped out and re-faulted
>     2) The page becomes mapped (and exposed to NUMA hint faults)

Yet another way is running DAMOS with the DAMOS_MIGRATE_HOT action and
the unmapped pages DAMOS filter.  Or, without the unmapped pages DAMOS
filter, if you want to promote both mapped and unmapped pages.

It is easy to tell, but I anticipate it will show many limitations on your
use case.  I anyway only very recently shared[1] my version of an
experimental prototype and its evaluation results.  I also understand this
patch series is the simple and well-working solution for your use case,
and I have no special blocker for this work.  Nevertheless, I was just
thinking it would be nice if your anticipated or expected limitations and
opportunities of other approaches, including the DAMON-based one, could be
put here together.

[...]
> 
> Development History and Notes
> =======================================
> During development, we explored the following proposals:

This is very informative and helpful for getting the context.  Thank you for
sharing.

[...]
> 4) Adding a separate kthread - suggested by many

The DAMON-based approach might also be categorized here, since DAMON does
access monitoring and monitoring-results-based system operations
(migration in this context) in a separate thread.

> 
>    This is - to an extent - a more general version of the LRU proposal.
>    We still have to track the folios - which likely requires the
>    addition of a page flag.

In the case of the DAMON-based approach, this is not technically true,
since it uses its own abstraction called DAMON regions.  Of course, the
DAMON region abstraction is not a panacea.  There were concerns around the
accuracy of the region abstraction.  We found that suboptimal DAMON
intervals tuning could be one source of the poor accuracy, and recently
made an automation[2] of the tuning.

I remember you previously mentioned it might make sense to utilize DAMON
as a way to save such additional information.  It has been one of the
motivations for my recent suggestion of a new DAMON API[3], namely
damon_report_access().  It will allow any kernel code to report its access
findings to DAMON with controlled overhead.  The aimed usage is to make
page fault handlers, folio_mark_accessed(), and AMD IBS sample handlers'
code paths pass the information to DAMON via the API function.

>    Additionally, this method would actually
>    contend pretty heavily with LRU behavior - i.e. we'd want to
>    throttle addition to the promotion candidate list in some scenarios.

The DAMON-based approach could use the DAMOS quota feature for this kind
of purpose.

> 
> 
> 5) Doing it in task work
> 
>    This seemed to be the most realistic after considering the above.
> 
>    We observe the following:
>     - FMA is an ideal hook for this and isolation is safe here

My one concern is that users can ask DAMON to call folio_mark_accessed()
for non-unmapped page cache folios, via DAMOS_LRU_PRIO.  Promoting the
folio could be understood as a sort of LRU-prioritizing, so I'm not really
concerned about DAMON's behavioral change that this patch series could
introduce.  Rather, I'm concerned that the vmstat change in this patch
series could confuse such DAMON users.

>     - the new promotion_candidate function is an ideal hook for new
>       filter logic (throttling, fairness, etc).

Agreed.  DAMON's target filtering and aim-based aggressiveness self-tuning
features could be such logic.  I suggested[3] damos_add_folio() as a potential
future DAMON API for such use cases.

With this patch series, nevertheless, only folios that pass through
folio_mark_accessed() get such an opportunity.  Do you have future plans
to integrate faults-based promotion logic with this function, and to
extend it for other access information sources?

[1] https://lore.kernel.org/20250320053937.57734-1-sj@kernel.org
[2] https://lkml.kernel.org/r/20250303221726.484227-1-sj@kernel.org
[3] https://lwn.net/Articles/1016525/


Thanks,
SJ

[...]