
[v4,0/4] mm: Rework zap ptes on swap entries

Message ID 20220216094810.60572-1-peterx@redhat.com (mailing list archive)

Peter Xu Feb. 16, 2022, 9:48 a.m. UTC
v4:
- Rebase to v5.17-rc4
- Add r-b, and suggested-by on patch 2 [David]
- Fix spelling (s/Quotting/Quoting/) [David]

RFC V1: https://lore.kernel.org/lkml/20211110082952.19266-1-peterx@redhat.com
RFC V2: https://lore.kernel.org/lkml/20211115134951.85286-1-peterx@redhat.com
V3:     https://lore.kernel.org/lkml/20220128045412.18695-1-peterx@redhat.com

Patch 1 should fix a long-standing bug in zap_pte_range()'s handling of
zap_details.  The risk is that some swap entries could be skipped when they
should have been zapped.

Migration entries are not the major concern, because file-backed memory is
always zapped in the pattern "first without the page lock, then re-zapped with
the page lock held", hence the second zap will always make sure all migration
entries are recovered.

However, real swap entries can be skipped erroneously.  A reproducer is
provided in the commit message of patch 1.

Patches 2-4 are cleanups based on patch 1.  After the whole patchset is
applied, we should have a much cleaner view of zap_pte_range().

Only patch 1 needs to be backported to stable if necessary.

Please review, thanks.

Peter Xu (4):
  mm: Don't skip swap entry even if zap_details specified
  mm: Rename zap_skip_check_mapping() to should_zap_page()
  mm: Change zap_details.zap_mapping into even_cows
  mm: Rework swap handling of zap_pte_range

 mm/memory.c | 85 +++++++++++++++++++++++++++++++----------------------
 1 file changed, 50 insertions(+), 35 deletions(-)