[v4,0/6] large folios swap-in: handle refault cases first

Message ID: 20240508224040.190469-1-21cnbao@gmail.com

Message

Barry Song May 8, 2024, 10:40 p.m. UTC
From: Barry Song <v-songbaohua@oppo.com>

This patchset is extracted from the large folio swap-in series[1]. It
primarily addresses the handling of large folios that are still in the
swap cache, in particular the refault of an mTHP that is still
undergoing reclamation. Splitting this part out is intended to
streamline code review and expedite its integration into the MM tree.

It relies on Ryan's swap-out series[2], leveraging the helper function
swap_pte_batch() introduced by that series.
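
For context, the helper's prototype, as it appears in Ryan's series on
mm-unstable at the time of writing (treat the exact signature as an
assumption rather than a stable API), looks roughly like:

	/*
	 * From Ryan's swap-out series (mm/internal.h): returns how many
	 * consecutive PTEs, starting at start_ptep, hold swap entries
	 * with consecutive offsets matching pte. Caller holds the PTL.
	 */
	static inline int swap_pte_batch(pte_t *start_ptep, int max_nr,
					 pte_t pte);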

Presently, do_swap_page() only encounters a large folio in the swap
cache before that folio is released by vmscan. However, the code should
remain equally useful once we support large folio swap-in via
swapin_readahead(). This approach can effectively reduce page faults
and eliminate most of the redundant checks and early exits for MTE
restoration in the recent MTE patchset[3].
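
To make the flow concrete, here is a much-simplified sketch of the
swapcache path in do_swap_page() after this series. It is illustrative
only: folio_range_in_vma_pmd() is a hypothetical stand-in for the
VMA/PMD range checks in the real patch, and other details are elided.

	if (folio_test_large(folio) && folio_test_swapcache(folio)) {
		int nr = folio_nr_pages(folio);
		unsigned long idx = folio_page_idx(folio, page);
		unsigned long start = address - idx * PAGE_SIZE;
		pte_t *folio_ptep = vmf->pte - idx;

		/*
		 * Batch only if the whole folio fits in this VMA and
		 * PMD, and all nr PTEs are swap entries for this folio.
		 */
		if (folio_range_in_vma_pmd(vma, start, nr) && /* hypothetical */
		    swap_pte_batch(folio_ptep, nr, ptep_get(folio_ptep)) == nr) {
			address = start;
			ptep = folio_ptep;
			nr_pages = nr;
		}
	}

With nr_pages established, the fault path can then do one
swap_free_nr(), one set_ptes() and one rmap update for the whole folio,
instead of taking nr_pages separate minor faults.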

The large folio swap-in for SWP_SYNCHRONOUS_IO and swapin_readahead()
will be split into separate patch sets and sent at a later time.

-v4:
 - collect acked-by/reviewed-by tags from Ryan, "Huang, Ying", Chris,
   David and Khalid, many thanks!
 - Simplify reuse code in do_swap_page() by checking refcount==1, per
   David;
 - Initialize large folio-related variables later in do_swap_page(), per
   Ryan;
 - define swap_free() as swap_free_nr(1), per Ying and Ryan; a minimal
   sketch of this follows.
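
A minimal sketch of that last item (matching the intent of patch 1/6;
the exact placement in include/linux/swap.h is an assumption):

	/* swap_free() is now just the single-entry case */
	static inline void swap_free(swp_entry_t entry)
	{
		swap_free_nr(entry, 1);
	}

This leaves a single freeing implementation; callers that know the
batch size simply call swap_free_nr(entry, nr) directly.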

-v3:
 - optimize swap_free_nr() to use a bitmap within a single "long", per
   "Huang, Ying";
 - drop swap_free() as suggested by "Huang, Ying"; hibernation can now
   get batched frees;
 - lots of cleanup in do_swap_page(), as commented on by Ryan Roberts
   and "Huang, Ying";
 - handle arch_do_swap_page() with nr pages, even though sparc, the
   only platform that needs it, doesn't support THP_SWAPOUT, as
   suggested by "Huang, Ying";
 - introduce pte_move_swp_offset() as suggested by "Huang, Ying" (see
   the sketch after these v3 notes);
 - drop the "any_shared" check of swap entries per David's comment;
 - drop the swapin_refault counter, keeping it only for debugging
   purposes, per Ying;
 - collect reviewed-by tags
 Link:
  https://lore.kernel.org/linux-mm/20240503005023.174597-1-21cnbao@gmail.com/
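
As a reference for the pte_move_swp_offset() item above, a hedged
sketch of the helper, modelled on the posted patch (the soft-dirty/
exclusive/uffd-wp handling is inferred from the existing
pte_next_swp_offset() and should be checked against the actual code):

	static inline pte_t pte_move_swp_offset(pte_t pte, long delta)
	{
		swp_entry_t entry = pte_to_swp_entry(pte);
		pte_t new = __swp_entry_to_pte(__swp_entry(swp_type(entry),
					       swp_offset(entry) + delta));

		/* preserve the software bits carried by swap PTEs */
		if (pte_swp_soft_dirty(pte))
			new = pte_swp_mksoft_dirty(new);
		if (pte_swp_exclusive(pte))
			new = pte_swp_mkexclusive(new);
		if (pte_swp_uffd_wp(pte))
			new = pte_swp_mkuffd_wp(new);
		return new;
	}

Unlike pte_next_swp_offset(), delta may be negative, which is what lets
do_swap_page() step backwards from the faulting PTE to the first PTE of
the folio.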

-v2:
 - rebase on top of mm-unstable, in which Ryan's swap_pte_batch() has
   changed a lot;
 - remove folio_add_new_anon_rmap() for !folio_test_anon(), as
   currently large folios are always anon (refault);
 - add mTHP swpin refault counters;
 Link:
  https://lore.kernel.org/linux-mm/20240409082631.187483-1-21cnbao@gmail.com/

-v1:
  Link: https://lore.kernel.org/linux-mm/20240402073237.240995-1-21cnbao@gmail.com/

Differences from the original large folios swap-in series:
 - collect reviewed-by and acked-by tags;
 - rename swap_nr_free() to swap_free_nr(), per Ryan;
 - limit the maximum kernel stack usage of swap_free_nr(), per Ryan;
 - add an output argument to swap_pte_batch() to expose whether all
   entries are exclusive;
 - many cleanups and refinements; handle the corner case where a
   folio's virtual address might not be naturally aligned.

[1] https://lore.kernel.org/linux-mm/20240304081348.197341-1-21cnbao@gmail.com/
[2] https://lore.kernel.org/linux-mm/20240408183946.2991168-1-ryan.roberts@arm.com/
[3] https://lore.kernel.org/linux-mm/20240322114136.61386-1-21cnbao@gmail.com/

Barry Song (3):
  mm: remove the implementation of swap_free() and always use
    swap_free_nr()
  mm: introduce pte_move_swp_offset() helper which can move offset
    bidirectionally
  mm: introduce arch_do_swap_page_nr() which allows restore metadata for
    nr pages

Chuanhua Han (3):
  mm: swap: introduce swap_free_nr() for batched swap_free()
  mm: swap: make should_try_to_free_swap() support large-folio
  mm: swap: entirely map large folios found in swapcache

 include/linux/pgtable.h | 26 +++++++++++++-----
 include/linux/swap.h    |  9 +++++--
 kernel/power/swap.c     |  5 ++--
 mm/internal.h           | 25 ++++++++++++++---
 mm/memory.c             | 60 +++++++++++++++++++++++++++++++++--------
 mm/swapfile.c           | 48 +++++++++++++++++++++++++++++----
 6 files changed, 142 insertions(+), 31 deletions(-)

Comments

Barry Song May 21, 2024, 9:21 p.m. UTC | #1
Hi Andrew,

This patchset missed the merge window, but I've verified that it still
applies cleanly to today's mm-unstable. Would you like me to resend it,
or just proceed with this v4 version?

Thanks
Barry

Andrew Morton May 21, 2024, 9:59 p.m. UTC | #2
On Wed, 22 May 2024 09:21:38 +1200 Barry Song <21cnbao@gmail.com> wrote:

> This patchset missed the merge window, but I've verified that it still
> applies cleanly to today's mm-unstable. Would you like me to resend it,
> or just proceed with this v4 version?

It's in my post-merge-window backlog pile.  I'll let you know when I
get to it ;)