
[00/10,v7,RESEND] Migrate Pages in lieu of discard

Message ID 20210401183216.443C4443@viggo.jf.intel.com

Message

Dave Hansen April 1, 2021, 6:32 p.m. UTC
I'm resending this because I forgot to cc the mailing lists on the
post yesterday.  Sorry for the noise.  Please reply to this series.

The full series is also available here:

	https://github.com/hansendc/linux/tree/automigrate-20210331

which also includes some vm.zone_reclaim_mode sysctl ABI fixup
prerequisites:

	https://github.com/hansendc/linux/commit/18daad8f0181a2da57cb43e595303c2ef5bd7b6e
	https://github.com/hansendc/linux/commit/a873f3b6f250581072ab36f2735a3aa341e36705

There are no major changes since the last post.

--

We're starting to see systems with more and more kinds of memory such
as Intel's implementation of persistent memory.

Let's say you have a system with some DRAM and some persistent memory.
Today, once DRAM fills up, reclaim will start and some of the DRAM
contents will be thrown out.  Allocations will, at some point, start
falling back to the slower persistent memory.

That has two nasty properties.  First, newer allocations can end up in
the slower persistent memory.  Second, data reclaimed from DRAM is just
discarded even if there are gobs of space in persistent memory that
could be used.

This set implements a solution to these problems.  At the end of the
reclaim process in shrink_page_list(), just before the last page
refcount is dropped, the page is migrated to persistent memory instead
of being dropped.
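
Roughly, the new path looks like the sketch below.  This is for
illustration only, not the literal patch hunks: next_demotion_node()
and alloc_demote_page() are assumed helper names, and the
migrate_pages() out-parameter for the success count is assumed as well;
demote_page_list() is the reclaim-side helper referenced later in this
thread.

/*
 * Sketch only: demote a list of pages from 'pgdat' to its demotion
 * target instead of freeing them.  Pages that fail to migrate stay on
 * the list and fall back to normal reclaim.
 */
static unsigned int demote_page_list(struct list_head *demote_pages,
				     struct pglist_data *pgdat)
{
	int target_nid = next_demotion_node(pgdat->node_id);
	unsigned int nr_succeeded = 0;

	if (list_empty(demote_pages))
		return 0;

	/* A "terminal" node has nothing slower to demote to. */
	if (target_nid == NUMA_NO_NODE)
		return 0;

	migrate_pages(demote_pages, alloc_demote_page, NULL, target_nid,
		      MIGRATE_ASYNC, MR_DEMOTION, &nr_succeeded);

	return nr_succeeded;
}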

While I've talked about a DRAM/PMEM pairing, this approach would
function in any environment where memory tiers exist.

This is not perfect.  It "strands" pages in slower memory and never
brings them back to fast DRAM.  Huang Ying has follow-on work which
repurposes autonuma to promote hot pages back to DRAM.

This is also all based on an upstream mechanism that allows
persistent memory to be onlined and used as if it were volatile:

	http://lkml.kernel.org/r/20190124231441.37A4A305@viggo.jf.intel.com

== Open Issues ==

 * Pages protected by memory policies or cpusets that, for instance,
   restrict allocations to DRAM can still be demoted to PMEM once the
   system opts in to this new mechanism.  A cgroup-level API to opt in
   to or out of these migrations will likely be required as a follow-on.
 * Could be more aggressive about where anon LRU scanning occurs
   since it no longer necessarily involves I/O.  get_scan_count(),
   for instance, says: "If we have no swap space, do not bother
   scanning anon pages".

--

 Documentation/admin-guide/sysctl/vm.rst |  12 +
 include/linux/migrate.h                 |  14 +-
 include/linux/swap.h                    |   3 +-
 include/linux/vm_event_item.h           |   2 +
 include/trace/events/migrate.h          |   3 +-
 include/uapi/linux/mempolicy.h          |   1 +
 mm/compaction.c                         |   3 +-
 mm/gup.c                                |   3 +-
 mm/internal.h                           |   5 +
 mm/memory-failure.c                     |   4 +-
 mm/memory_hotplug.c                     |   4 +-
 mm/mempolicy.c                          |   8 +-
 mm/migrate.c                            | 315 +++++++++++++++++++++++-
 mm/page_alloc.c                         |  11 +-
 mm/vmscan.c                             | 158 +++++++++++-
 mm/vmstat.c                             |   2 +
 16 files changed, 520 insertions(+), 28 deletions(-)

--

Changes since (automigrate-20210304):
 * Add ack/review tags
 * Remove duplicate synchronize_rcu() call

Changes since (automigrate-20210122):
 * Move from GFP_HIGHUSER -> GFP_HIGHUSER_MOVABLE since pages *are*
   movable.
 * Separate out helpers that check for being able to reclaim anonymous
   pages versus being able to meaningfully scan the anon LRU.

Changes since (automigrate-20201007):
 * Separate out checks for "can scan anon LRU" from "can actually
   swap anon pages right now".  The previous series conflated them
   and may have been overly aggressive in scanning the LRU.
 * Add MR_DEMOTION to the tracepoint header
 * Remove an unnecessary hugetlb page check

Changes since (automigrate-20200818):
 * Fall back to normal reclaim when demotion fails
 * Fix some compile issues when page migration and NUMA are off

Changes since (https://lwn.net/Articles/824830/):
 * Use the higher-level migrate_pages() API approach from Yang Shi's
   earlier patches.
 * Make sure to actually check node_reclaim_mode's new bit
 * Disable migration entirely before introducing RECLAIM_MIGRATE
 * Replace GFP_NOWAIT with explicit __GFP_KSWAPD_RECLAIM and
   comment on why we want that.
 * Comment on the effects of the logic that keeps multiple source
   nodes from sharing target nodes

Cc: Yang Shi <yang.shi@linux.alibaba.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: osalvador <osalvador@suse.de>
Cc: Wei Xu <weixugc@google.com>

Comments

Michal Hocko April 16, 2021, 12:35 p.m. UTC | #1
Hi,
I am really sorry to jump into this train sooo late. I have quickly
glanced through the series and I have some questions/concerns. Let me
express them here rather than in specific patches.

First of all I do think that demotion is a useful way to balance
memory in general. And that is not really bound to PMEM-equipped
systems. There are larger NUMA machines which are not trivial to
partition and our existing NUMA APIs are far from ideal to help with
that.

I do appreciate that the whole thing is an opt-in because this might
break workloads which are careful with their placement. I am not sure
there is a way to handle the constraints in an optimal way, if that is
possible at all in some cases (e.g. do we have a way to track a page
back to its cpuset or task mempolicy in all cases?).

The cover letter focuses on use cases but it doesn't really describe
the overall design, so let me try to lay it down here (let's see
whether I missed something important).
- The demotion order defines a very simple fallback to a single node
  based on proximity, but cycles are not allowed in the fallback
  mask.
  I have to confess that I haven't grasped the initialization
  completely. There is a nice comment explaining a 2 socket system with
  3 different NUMA nodes attached to it with one node being terminal.
  This is OK if the terminal node is PMEM, but how does that fit into
  usual NUMA setups?  E.g.
  4 nodes each with its set of CPUs
  node distances:
  node   0   1   2   3
  0:  10  20  20  20
  1:  20  10  20  20
  2:  20  20  10  20
  3:  20  20  20  10
  Do I get it right that Node 3 would be terminal?
- The demotion is controlled by node_reclaim_mode but unlike other modes
  it applies to both direct and kswapd reclaims.
  I do not see that explained anywhere though.
- The demotion is implemented at shrink_page_list level which migrates
  pages in the first round and then falls back to the regular reclaim
  when migration fails. This means that the reclaim context
  (PF_MEMALLOC) will allocate memory so it has access to full memory
  reserves. Btw. I do not see __GFP_NOMEMALLOC anywhere in the allocation
  mask which looks like a bug rather than an intention. Btw. using
  GFP_NOWAIT in the allocation callback would make more things clear
  IMO.
- Memcg reclaim is excluded from all this because it is not NUMA aware
  which makes sense to me.
- Anonymous pages are a bit tricky because they can be demoted even when
  they cannot be reclaimed due to no (or no available) swap storage.
  Unless I have missed something the second round will try to reclaim
  them even when the latter is true and I am not sure this is completely OK.

I hope I've captured all important parts. There are some more details
but they do not seem that important. 

I am still trying to digest the whole thing but at least jamming
node_reclaim logic into kswapd seems strange to me. Need to think more
about that though.

Btw. do you have any numbers from running this with some real
workload?
Dave Hansen April 16, 2021, 2:26 p.m. UTC | #2
On 4/16/21 5:35 AM, Michal Hocko wrote:
>   I have to confess that I haven't grasped the initialization
>   completely. There is a nice comment explaining a 2 socket system with
>   3 different NUMA nodes attached to it with one node being terminal.
>   This is OK if the terminal node is PMEM, but how does that fit into
>   usual NUMA setups?  E.g.
>   4 nodes each with its set of CPUs
>   node distances:
>   node   0   1   2   3
>   0:  10  20  20  20
>   1:  20  10  20  20
>   2:  20  20  10  20
>   3:  20  20  20  10
>   Do I get it right that Node 3 would be terminal?

Yes, I think Node 3 would end up being the terminal node in that setup.

That said, I'm not sure how much I expect folks to use this on
traditional, non-tiered setups.  It's also hard to argue what the
migration order *should* be when all the nodes are uniform.
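
For reference, the structure behind "terminal" here is nothing more
than a per-node demotion target.  A rough sketch (array and helper
names are illustrative, not necessarily what the series uses):

/*
 * Each node has at most one demotion target.  A "terminal" node (like
 * node 3 in the example above, or any node with nothing slower below
 * it) simply has no target, so reclaim there discards pages as before.
 */
static int node_demotion[MAX_NUMNODES] = {
	[0 ... MAX_NUMNODES - 1] = NUMA_NO_NODE
};

static int next_demotion_node(int node)
{
	/* NUMA_NO_NODE: terminal node, reclaim instead of demoting. */
	return node_demotion[node];
}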

> - The demotion is controlled by node_reclaim_mode but unlike other modes
>   it applies to both direct and kswapd reclaims.
>   I do not see that explained anywhere though.

That's an interesting observation.  Let me do a bit of research and I'll
update the Documentation/ and the changelog.

> - The demotion is implemented at shrink_page_list level which migrates
>   pages in the first round and then falls back to the regular reclaim
>   when migration fails. This means that the reclaim context
>   (PF_MEMALLOC) will allocate memory so it has access to full memory
>   reserves. Btw. I do not see __GFP_NOMEMALLOC anywhere in the allocation
>   mask which looks like a bug rather than an intention. Btw. using
>   GFP_NOWAIT in the allocation callback would make more things clear
>   IMO.

Yes, the lack of __GFP_NOMEMALLOC is a bug.  I'll fix that up.

GFP_NOWAIT _seems_ like it will work.  I'll give it a shot.

> - Memcg reclaim is excluded from all this because it is not NUMA aware
>   which makes sense to me.
> - Anonymous pages are a bit tricky because they can be demoted even when
>   they cannot be reclaimed due to no (or no available) swap storage.
>   Unless I have missed something the second round will try to reclaim
>   them even when the latter is true and I am not sure this is completely OK.

What we want is something like this:

Swap Space / Demotion OK  -> Can Reclaim
Swap Space / Demotion Off -> Can Reclaim
Swap Full  / Demotion OK  -> Can Reclaim
Swap Full  / Demotion Off -> No Reclaim

I *think* that's what can_reclaim_anon_pages() ends up doing.  Maybe I'm
misunderstanding what you are referring to, though.  By "second round"
did you mean when we do reclaim on a node which is a terminal node?
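
For illustration, a check like can_reclaim_anon_pages() could encode
that table along these lines (sketch only; can_demote() and the memcg
swap accounting details are assumed here, not necessarily the series'
exact code):

/*
 * Anon pages are reclaimable if they can still be swapped (swap space
 * left, globally or within the memcg) or if they can be demoted to a
 * lower memory tier.  Only "swap full + demotion off" means no reclaim.
 */
static inline bool can_reclaim_anon_pages(struct mem_cgroup *memcg,
					  int nid, struct scan_control *sc)
{
	if (!memcg) {
		/* Global reclaim: any free space in any swap device? */
		if (get_nr_swap_pages() > 0)
			return true;
	} else {
		/* Memcg reclaim: still under the memcg swap limit? */
		if (mem_cgroup_get_nr_swap_pages(memcg) > 0)
			return true;
	}

	/* No usable swap, but the pages may still be demotable. */
	return can_demote(nid, sc);
}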

> I am still trying to digest the whole thing but at least jamming
> node_reclaim logic into kswapd seems strange to me. Need to think more
> about that though.

I'm entirely open to other ways to do the opt-in.  It seemed sane at the
time, but I also understand the kswapd concern.

> Btw. do you have any numbers from running this with some real
> workload?

Yes, quite a bit.  Do you have a specific scenario in mind?  Folks seem
to come at this in two different ways:

Some want to know how much DRAM they can replace by buying some PMEM.
They tend to care about how much adding the (cheaper) PMEM slows them
down versus (expensive) DRAM.  They're making a cost-benefit call.

Others want to repurpose some PMEM they already have.  They want to know
how much using PMEM in this way will speed them up.  They will basically
take any speedup they can get.

I ask because as a kernel developer with PMEM in my systems, I find the
"I'll take what I can get" case more personally appealing.  But, the
business folks are much more keen on the "DRAM replacement" use.  Do you
have any thoughts on what you would like to see?
Michal Hocko April 16, 2021, 3:02 p.m. UTC | #3
On Fri 16-04-21 07:26:43, Dave Hansen wrote:
> On 4/16/21 5:35 AM, Michal Hocko wrote:
> >   I have to confess that I haven't grasped the initialization
> >   completely. There is a nice comment explaining a 2 socket system with
> >   3 different NUMA nodes attached to it with one node being terminal.
> >   This is OK if the terminal node is PMEM, but how does that fit into
> >   usual NUMA setups?  E.g.
> >   4 nodes each with its set of CPUs
> >   node distances:
> >   node   0   1   2   3
> >   0:  10  20  20  20
> >   1:  20  10  20  20
> >   2:  20  20  10  20
> >   3:  20  20  20  10
> >   Do I get it right that Node 3 would be terminal?
> 
> Yes, I think Node 3 would end up being the terminal node in that setup.
> 
> That said, I'm not sure how much I expect folks to use this on
> traditional, non-tiered setups.  It's also hard to argue what the
> migration order *should* be when all the nodes are uniform.

Well, they are not really uniform. The access latencies differ and I
can imagine that spreading page cache to a distant node might just be
much better than the IO involved in a refault.

On the other hand I do understand that restricting the feature to
CPU-less NUMA setups is quite sane to give us a better understanding of
how this can be used and to improve on top of that. Maybe we will learn
that we want to have the demotion path admin-controllable (on the system
level or maybe even more fine-grained at the memcg/cpuset level).

[...]
> > - The demotion is implemented at shrink_page_list level which migrates
> >   pages in the first round and then falls back to the regular reclaim
> >   when migration fails. This means that the reclaim context
> >   (PF_MEMALLOC) will allocate memory so it has access to full memory
> >   reserves. Btw. I do not see __GFP_NOMEMALLOC anywhere in the allocation
> >   mask which looks like a bug rather than an intention. Btw. using
> >   GFP_NOWAIT in the allocation callback would make more things clear
> >   IMO.
> 
> Yes, the lack of __GFP_NOMEMALLOC is a bug.  I'll fix that up.
> 
> GFP_NOWAIT _seems_ like it will work.  I'll give it a shot.

Let me clarify a bit. The slow path does involve __gfp_pfmemalloc_flags()
before bailing out for a non-sleeping allocation. So you would need both,
unless you want to involve reclaim on the target node while you are
reclaiming the origin node.
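
For what it's worth, the combination being described would look roughly
like the sketch below for the demotion allocation: no direct reclaim on
the target node (only a kswapd kick, i.e. GFP_NOWAIT semantics) and no
access to the reserves despite PF_MEMALLOC.  The callback name and the
exact flag set are illustrative, not the series' final code.

/*
 * Sketch: allocate the target page on the demotion node without
 * dipping into reserves (__GFP_NOMEMALLOC is honoured by
 * __gfp_pfmemalloc_flags()) and without entering direct reclaim on
 * the target node (__GFP_KSWAPD_RECLAIM only).
 */
static struct page *alloc_demote_page(struct page *page, unsigned long node)
{
	struct migration_target_control mtc = {
		.nid = node,
		.gfp_mask = (GFP_HIGHUSER_MOVABLE & ~__GFP_RECLAIM) |
			    __GFP_THISNODE | __GFP_NOWARN |
			    __GFP_NOMEMALLOC | __GFP_KSWAPD_RECLAIM,
	};

	return alloc_migration_target(page, (unsigned long)&mtc);
}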

> > - Memcg reclaim is excluded from all this because it is not NUMA aware
> >   which makes sense to me.
> > - Anonymous pages are a bit tricky because they can be demoted even when
> >   they cannot be reclaimed due to no (or no available) swap storage.
> >   Unless I have missed something the second round will try to reclaim
> >   them even when the latter is true and I am not sure this is completely OK.
> 
> What we want is something like this:
> 
> Swap Space / Demotion OK  -> Can Reclaim
> Swap Space / Demotion Off -> Can Reclaim
> Swap Full  / Demotion OK  -> Can Reclaim
> Swap Full  / Demotion Off -> No Reclaim
> 
> I *think* that's what can_reclaim_anon_pages() ends up doing.  Maybe I'm
> misunderstanding what you are referring to, though.  By "second round"
> did you mean when we do reclaim on a node which is a terminal node?

No, I mean the migration failure case where you splice back to the page
list to reclaim. In that round you do not demote and want to reclaim.
But a lack of swap space will make that page unreclaimable. I suspect
this would just work out fine but I am not sure off the top of my head.

> > I am still trying to digest the whole thing but at least jamming
> > node_reclaim logic into kswapd seems strange to me. Need to think more
> > about that though.
> 
> I'm entirely open to other ways to do the opt-in.  It seemed sane at the
> time, but I also understand the kswapd concern.
> 
> > Btw. do you have any numbers from running this with some real
> > workload?
> 
> Yes, quite a bit.  Do you have a specific scenario in mind?  Folks seem
> to come at this in two different ways:
> 
> Some want to know how much DRAM they can replace by buying some PMEM.
> They tend to care about how much adding the (cheaper) PMEM slows them
> down versus (expensive) DRAM.  They're making a cost-benefit call.
> 
> Others want to repurpose some PMEM they already have.  They want to know
> how much using PMEM in this way will speed them up.  They will basically
> take any speedup they can get.
> 
> I ask because as a kernel developer with PMEM in my systems, I find the
> "I'll take what I can get" case more personally appealing.  But, the
> business folks are much more keen on the "DRAM replacement" use.  Do you
> have any thoughts on what you would like to see?

I was thinking about typical large in-memory processing (e.g. in-memory
databases) where the hot part of the working set is only a portion and
spilling over to slower memory can still be beneficial because IO +
data preprocessing on cold data is much slower.
Huang, Ying April 21, 2021, 2:39 a.m. UTC | #4
Michal Hocko <mhocko@suse.com> writes:

> On Fri 16-04-21 07:26:43, Dave Hansen wrote:
>> On 4/16/21 5:35 AM, Michal Hocko wrote:
>> > - Anonymous pages are a bit tricky because they can be demoted even when
>> >   they cannot be reclaimed due to no (or no available) swap storage.
>> >   Unless I have missed something the second round will try to reclaim
>> >   them even when the latter is true and I am not sure this is completely OK.
>> 
>> What we want is something like this:
>> 
>> Swap Space / Demotion OK  -> Can Reclaim
>> Swap Space / Demotion Off -> Can Reclaim
>> Swap Full  / Demotion OK  -> Can Reclaim
>> Swap Full  / Demotion Off -> No Reclaim
>> 
>> I *think* that's what can_reclaim_anon_pages() ends up doing.  Maybe I'm
>> misunderstanding what you are referring to, though.  By "second round"
>> did you mean when we do reclaim on a node which is a terminal node?
>
> No, I mean the migration failure case where you splice back to the page
> list to reclaim. In that round you do not demote and want to reclaim.
> But a lack of swap space will make that page unreclaimable. I suspect
> this would just work out fine but I am not sure off the top of my head.

I have tested this by injecting some migration errors (returning 0 from
demote_page_list() before migration) on a system without swap.  The
system still works properly.  In ftrace, I can see that add_to_swap() is
called many more times, and it deals with the situation where swap
space isn't available.
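
Presumably the injection was something along these lines.  This is just
a sketch of the hack described above, not the actual test patch: an
early return makes every demotion attempt look like it demoted nothing,
so the pages are spliced back and take the normal reclaim path.

static unsigned int demote_page_list(struct list_head *demote_pages,
				     struct pglist_data *pgdat)
{
	return 0;	/* injected failure: pretend nothing was demoted */
	/* ... the real demotion code below is skipped for the test ... */
}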

Best Regards,
Huang, Ying
Huang, Ying May 7, 2021, 6:14 a.m. UTC | #5
Hi, Michal,

Michal Hocko <mhocko@suse.com> writes:

[...]

>> 
>> > Btw. do you have any numbers from running this with some real
>> > workload?
>> 
>> Yes, quite a bit.  Do you have a specific scenario in mind?  Folks seem
>> to come at this in two different ways:
>> 
>> Some want to know how much DRAM they can replace by buying some PMEM.
>> They tend to care about how much adding the (cheaper) PMEM slows them
>> down versus (expensive) DRAM.  They're making a cost-benefit call.
>> 
>> Others want to repurpose some PMEM they already have.  They want to know
>> how much using PMEM in this way will speed them up.  They will basically
>> take any speedup they can get.
>> 
>> I ask because as a kernel developer with PMEM in my systems, I find the
>> "I'll take what I can get" case more personally appealing.  But, the
>> business folks are much more keen on the "DRAM replacement" use.  Do you
>> have any thoughts on what you would like to see?
>
> I was thinking about typical large in-memory processing (e.g. in-memory
> databases) where the hot part of the working set is only a portion and
> spilling over to slower memory can still be beneficial because IO +
> data preprocessing on cold data is much slower.

We have tested the patchset with PostgreSQL and pgbench.  On a
machine with DRAM and PMEM, the kernel with the patchset improves the
pgbench score by up to 22.1% compared with that of the DRAM-only + disk
case.  This comes from the reduced disk reads (disk read throughput is
reduced by up to 70.8%).

Best Regards,
Huang, Ying