
[v2,0/6] Follow-up on high-order PCP caching

Message ID 20220217002227.5739-1-mgorman@techsingularity.net (mailing list archive)


Mel Gorman Feb. 17, 2022, 12:22 a.m. UTC
This series replaces v1 of the "Follow-up on high-order PCP caching"
series in mmots.

Changelog since v1
o Drain the requested PCP list first			(vbabka)
o Use [min|max]_pindex properly to reduce search depth	(vbabka)
o Update benchmark results in changelogs

Commit 44042b449872 ("mm/page_alloc: allow high-order pages to be
stored on the per-cpu lists") was primarily aimed at reducing the cost
of SLUB cache refills of high-order pages in two ways. Firstly, zone
lock acquisitions were reduced and secondly, there were fewer buddy list
modifications. This is a follow-up series fixing some issues that became
apparent after merging.

Patch 1 is a functional fix; the problem it addresses is harmless but
inefficient.

Patches 2-5 reduce the overhead of bulk freeing of PCP pages. While
the overhead is small, it's cumulative and noticeable when truncating
large files. The changelog for patch 4 includes results of a
microbenchmark that deletes large sparse files with data in page cache.
Sparse files were used to eliminate filesystem overhead.

Patch 6 addresses issues with high-order PCP pages being stored on PCP
lists for too long. Pages freed on a CPU may not be quickly reused and
in some cases this can increase cache miss rates. Details are included
in the changelog.

 mm/page_alloc.c | 135 +++++++++++++++++++++++++-----------------------
 1 file changed, 69 insertions(+), 66 deletions(-)