Message ID: 20210401183216.443C4443@viggo.jf.intel.com (mailing list archive)
Series: Migrate Pages in lieu of discard
Hi, I am really sorry to jump into this train sooo late. I have quickly glanced through the series and I have some questions/concerns. Let me express them here rather than in specific patches.

First of all, I do think that demotion is a useful way to balance memory in general, and that is not really bound to PMEM-equipped systems. There are larger NUMA machines which are not trivial to partition, and our existing NUMA APIs are far from ideal to help with that.

I do appreciate that the whole thing is an opt-in, because this might break workloads which are careful with their placement. I am not sure there is a way to handle constraints in an optimal way, if that is possible at all in some cases (e.g. do we have a way to track a page back to its cpuset resp. task mempolicy in all cases?).

The cover letter focuses on usecases but it doesn't really provide a high-level design overview, so let me try to lay it down here (let's see whether I missed something important).

- The order for demotion defines a very simple fallback to a single node based on proximity, but cycles are not allowed in the fallback mask. I have to confess that I haven't grasped the initialization completely. There is a nice comment explaining a 2-socket system with 3 different NUMA nodes attached to it, with one node being terminal. This is OK if the terminal node is PMEM, but how does that fit into usual NUMA setups? E.g. 4 nodes, each with its own set of CPUs:

    node distances:
    node   0   1   2   3
      0:  10  20  20  20
      1:  20  10  20  20
      2:  20  20  10  20
      3:  20  20  20  10

  Do I get it right that node 3 would be terminal?

- The demotion is controlled by node_reclaim_mode, but unlike other modes it applies to both direct and kswapd reclaims. I do not see that explained anywhere, though.

- The demotion is implemented at the shrink_page_list level, which migrates pages in the first round and then falls back to the regular reclaim when migration fails. This means that the reclaim context (PF_MEMALLOC) will allocate memory, so it has access to the full memory reserves. Btw. I do not see __GFP_NOMEMALLOC anywhere in the allocation mask, which looks like a bug rather than an intention. Btw. using GFP_NOWAIT in the allocation callback would make more things clear IMO.

- Memcg reclaim is excluded from all this because it is not NUMA aware, which makes sense to me.

- Anonymous pages are a bit tricky because they can be demoted even when they cannot be reclaimed due to no (or no available) swap storage. Unless I have missed something, the second round will try to reclaim them even if the latter is true, and I am not sure this is completely OK.

I hope I've captured all the important parts. There are some more details, but they do not seem that important.

I am still trying to digest the whole thing, but at least jamming node_reclaim logic into kswapd seems strange to me. Need to think more about that, though.

Btw. do you have any numbers from running this with some real world workload?
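As an aside, the "single fallback node per node, no cycles, one terminal node" shape that Michal describes can be pictured with a tiny userspace model. The names (node_demotion, next_demotion_node) and the particular chain below are illustrative assumptions, not the series' actual initialization:

    /*
     * Toy model of the demotion order: at most one demotion target
     * per node, no cycles, and a terminal node with no target.
     * Build with: gcc -std=c99 -o demotion demotion.c
     */
    #include <stdio.h>

    #define MAX_NUMNODES 4
    #define NUMA_NO_NODE (-1)

    /* One hypothetical chain: 0 -> 1 -> 2 -> 3 -> (nowhere). */
    static const int node_demotion[MAX_NUMNODES] = {
            [0] = 1,
            [1] = 2,
            [2] = 3,
            [3] = NUMA_NO_NODE,     /* terminal node */
    };

    static int next_demotion_node(int node)
    {
            return node_demotion[node];
    }

    int main(void)
    {
            for (int node = 0; node < MAX_NUMNODES; node++)
                    printf("node %d demotes to %d\n",
                           node, next_demotion_node(node));
            return 0;
    }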
On 4/16/21 5:35 AM, Michal Hocko wrote:
> I have to confess that I haven't grasped the initialization
> completely. There is a nice comment explaining a 2-socket system
> with 3 different NUMA nodes attached to it, with one node being
> terminal. This is OK if the terminal node is PMEM, but how does
> that fit into usual NUMA setups? E.g. 4 nodes, each with its own
> set of CPUs:
>
>     node distances:
>     node   0   1   2   3
>       0:  10  20  20  20
>       1:  20  10  20  20
>       2:  20  20  10  20
>       3:  20  20  20  10
>
> Do I get it right that node 3 would be terminal?

Yes, I think Node 3 would end up being the terminal node in that setup.

That said, I'm not sure how much I expect folks to use this on traditional, non-tiered setups. It's also hard to argue what the migration order *should* be when all the nodes are uniform.

> - The demotion is controlled by node_reclaim_mode, but unlike other
>   modes it applies to both direct and kswapd reclaims. I do not see
>   that explained anywhere, though.

That's an interesting observation. Let me do a bit of research and I'll update the Documentation/ and the changelog.

> - The demotion is implemented at the shrink_page_list level, which
>   migrates pages in the first round and then falls back to the
>   regular reclaim when migration fails. This means that the reclaim
>   context (PF_MEMALLOC) will allocate memory, so it has access to
>   the full memory reserves. Btw. I do not see __GFP_NOMEMALLOC
>   anywhere in the allocation mask, which looks like a bug rather
>   than an intention. Btw. using GFP_NOWAIT in the allocation
>   callback would make more things clear IMO.

Yes, the lack of __GFP_NOMEMALLOC is a bug. I'll fix that up.

GFP_NOWAIT _seems_ like it will work. I'll give it a shot.

> - Memcg reclaim is excluded from all this because it is not NUMA
>   aware, which makes sense to me.
> - Anonymous pages are a bit tricky because they can be demoted even
>   when they cannot be reclaimed due to no (or no available) swap
>   storage. Unless I have missed something, the second round will try
>   to reclaim them even if the latter is true, and I am not sure this
>   is completely OK.

What we want is something like this:

	Swap Space / Demotion OK  -> Can Reclaim
	Swap Space / Demotion Off -> Can Reclaim
	Swap Full  / Demotion OK  -> Can Reclaim
	Swap Full  / Demotion Off -> No Reclaim

I *think* that's what can_reclaim_anon_pages() ends up doing. Maybe I'm misunderstanding what you are referring to, though. By "second round" did you mean when we do reclaim on a node which is a terminal node?

> I am still trying to digest the whole thing, but at least jamming
> node_reclaim logic into kswapd seems strange to me. Need to think
> more about that, though.

I'm entirely open to other ways to do the opt-in. It seemed sane at the time, but I also understand the kswapd concern.

> Btw. do you have any numbers from running this with some real world
> workload?

Yes, quite a bit. Do you have a specific scenario in mind? Folks seem to come at this in two different ways:

Some want to know how much DRAM they can replace by buying some PMEM. They tend to care about how much adding the (cheaper) PMEM slows them down versus (expensive) DRAM. They're making a cost-benefit call.

Others want to repurpose some PMEM they already have. They want to know how much using PMEM in this way will speed them up. They will basically take any speedup they can get.

I ask because, as a kernel developer with PMEM in my systems, I find the "I'll take what I can get" case more personally appealing. But, the business folks are much more keen on the "DRAM replacement" use. Do you have any thoughts on what you would like to see?
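The Swap/Demotion table above collapses to a single boolean. A minimal sketch (a toy model, not the series' actual can_reclaim_anon_pages()):

    /*
     * Toy model of the decision table above: anonymous pages are
     * reclaimable if swap space is usable or a demotion target exists.
     */
    #include <stdbool.h>
    #include <stdio.h>

    static bool can_reclaim_anon(bool swap_space, bool demotion_ok)
    {
            return swap_space || demotion_ok;
    }

    int main(void)
    {
            printf("swap space, demotion ok:  %d\n", can_reclaim_anon(true, true));
            printf("swap space, demotion off: %d\n", can_reclaim_anon(true, false));
            printf("swap full,  demotion ok:  %d\n", can_reclaim_anon(false, true));
            printf("swap full,  demotion off: %d\n", can_reclaim_anon(false, false));
            return 0;
    }

Only the last combination prints 0 (No Reclaim), matching the table.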
On Fri 16-04-21 07:26:43, Dave Hansen wrote:
> On 4/16/21 5:35 AM, Michal Hocko wrote:
> > I have to confess that I haven't grasped the initialization
> > completely. There is a nice comment explaining a 2-socket system
> > with 3 different NUMA nodes attached to it, with one node being
> > terminal. This is OK if the terminal node is PMEM, but how does
> > that fit into usual NUMA setups? E.g. 4 nodes, each with its own
> > set of CPUs:
> >
> >     node distances:
> >     node   0   1   2   3
> >       0:  10  20  20  20
> >       1:  20  10  20  20
> >       2:  20  20  10  20
> >       3:  20  20  20  10
> >
> > Do I get it right that node 3 would be terminal?
>
> Yes, I think Node 3 would end up being the terminal node in that setup.
>
> That said, I'm not sure how much I expect folks to use this on
> traditional, non-tiered setups. It's also hard to argue what the
> migration order *should* be when all the nodes are uniform.

Well, they are not really uniform. The access latencies differ, and I can imagine that spreading page cache to a distant node might be just much better than the IO involved in the refault. On the other hand, I do understand that restricting the feature to CPU-less NUMA setups is quite sane, to give us a better understanding of how this can be used and to improve on top. Maybe we will learn that we want to have the demotion path admin controllable (on the system level, or maybe even more fine-grained on the memcg/cpuset).

[...]

> > - The demotion is implemented at the shrink_page_list level, which
> >   migrates pages in the first round and then falls back to the
> >   regular reclaim when migration fails. This means that the reclaim
> >   context (PF_MEMALLOC) will allocate memory, so it has access to
> >   the full memory reserves. Btw. I do not see __GFP_NOMEMALLOC
> >   anywhere in the allocation mask, which looks like a bug rather
> >   than an intention. Btw. using GFP_NOWAIT in the allocation
> >   callback would make more things clear IMO.
>
> Yes, the lack of __GFP_NOMEMALLOC is a bug. I'll fix that up.
>
> GFP_NOWAIT _seems_ like it will work. I'll give it a shot.

Let me clarify a bit. The slow path does involve __gfp_pfmemalloc_flags before bailing out for a non-sleeping allocation. So you would need both, unless you want to involve reclaim on the target node while you are reclaiming the origin node.

> > - Memcg reclaim is excluded from all this because it is not NUMA
> >   aware, which makes sense to me.
> > - Anonymous pages are a bit tricky because they can be demoted even
> >   when they cannot be reclaimed due to no (or no available) swap
> >   storage. Unless I have missed something, the second round will try
> >   to reclaim them even if the latter is true, and I am not sure this
> >   is completely OK.
>
> What we want is something like this:
>
> 	Swap Space / Demotion OK  -> Can Reclaim
> 	Swap Space / Demotion Off -> Can Reclaim
> 	Swap Full  / Demotion OK  -> Can Reclaim
> 	Swap Full  / Demotion Off -> No Reclaim
>
> I *think* that's what can_reclaim_anon_pages() ends up doing. Maybe
> I'm misunderstanding what you are referring to, though. By "second
> round" did you mean when we do reclaim on a node which is a terminal
> node?

No, I mean the migration failure case, where you splice back to the page list to reclaim. In that round you do not demote and want to reclaim, but a lack of swap space will make that page unreclaimable. I suspect this would just work out fine, but I am not sure from the top of my head.

> > I am still trying to digest the whole thing, but at least jamming
> > node_reclaim logic into kswapd seems strange to me. Need to think
> > more about that, though.
>
> I'm entirely open to other ways to do the opt-in. It seemed sane at
> the time, but I also understand the kswapd concern.
>
> > Btw. do you have any numbers from running this with some real world
> > workload?
>
> Yes, quite a bit. Do you have a specific scenario in mind? Folks seem
> to come at this in two different ways:
>
> Some want to know how much DRAM they can replace by buying some PMEM.
> They tend to care about how much adding the (cheaper) PMEM slows them
> down versus (expensive) DRAM. They're making a cost-benefit call.
>
> Others want to repurpose some PMEM they already have. They want to
> know how much using PMEM in this way will speed them up. They will
> basically take any speedup they can get.
>
> I ask because, as a kernel developer with PMEM in my systems, I find
> the "I'll take what I can get" case more personally appealing. But,
> the business folks are much more keen on the "DRAM replacement" use.
> Do you have any thoughts on what you would like to see?

I was thinking about typical large in-memory processing (e.g. in-memory databases) where the hot part of the working set is only a portion, and spilling over to slower memory can still be beneficial because IO + data preprocessing on cold data is much slower.
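Combining the two constraints discussed above, a demotion allocation callback would plausibly look like the following kernel-context sketch (not compilable standalone; the function name and the exact mask are assumptions, not the series' final code). GFP_NOWAIT keeps reclaim of the origin node from waiting on the target node, and __GFP_NOMEMALLOC keeps the PF_MEMALLOC reclaim context out of the memory reserves:

    /* Kernel-context sketch of a demotion target allocator. */
    static struct page *alloc_demote_page(struct page *page, unsigned long node)
    {
            struct migration_target_control mtc = {
                    .nid = node,
                    /* Only the target node, never sleep, and never
                     * dip into reserves despite PF_MEMALLOC. */
                    .gfp_mask = (GFP_HIGHUSER_MOVABLE & ~__GFP_RECLAIM) |
                                __GFP_THISNODE | __GFP_NOWARN |
                                __GFP_NOMEMALLOC | GFP_NOWAIT,
            };

            return alloc_migration_target(page, (unsigned long)&mtc);
    }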
Michal Hocko <mhocko@suse.com> writes:
> On Fri 16-04-21 07:26:43, Dave Hansen wrote:
>> On 4/16/21 5:35 AM, Michal Hocko wrote:
>> > - Anonymous pages are a bit tricky because they can be demoted even
>> >   when they cannot be reclaimed due to no (or no available) swap
>> >   storage. Unless I have missed something, the second round will try
>> >   to reclaim them even if the latter is true, and I am not sure this
>> >   is completely OK.
>>
>> What we want is something like this:
>>
>> 	Swap Space / Demotion OK  -> Can Reclaim
>> 	Swap Space / Demotion Off -> Can Reclaim
>> 	Swap Full  / Demotion OK  -> Can Reclaim
>> 	Swap Full  / Demotion Off -> No Reclaim
>>
>> I *think* that's what can_reclaim_anon_pages() ends up doing. Maybe
>> I'm misunderstanding what you are referring to, though. By "second
>> round" did you mean when we do reclaim on a node which is a terminal
>> node?
>
> No, I mean the migration failure case, where you splice back to the
> page list to reclaim. In that round you do not demote and want to
> reclaim, but a lack of swap space will make that page unreclaimable.
> I suspect this would just work out fine, but I am not sure from the
> top of my head.

I have tested this by injecting some migration errors (returning 0 from demote_page_list() before migration) on a system without swap. The system still works properly. In ftrace, I can see that add_to_swap() is called many more times, and it deals with the situation where swap space isn't available.

Best Regards,
Huang, Ying
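Huang Ying's experiment corresponds to the injected-failure, no-swap case in the toy model below (a self-contained illustration of the fallback behavior, not the kernel code; all names hypothetical): a page that can be neither demoted nor swapped is simply kept resident rather than causing trouble.

    /* Toy model of the demote-then-reclaim fallback in shrink_page_list(). */
    #include <stdbool.h>
    #include <stdio.h>

    enum disposition { DEMOTED, RECLAIMED, KEPT };

    static enum disposition shrink_one_page(bool demotion_works, bool have_swap)
    {
            if (demotion_works)
                    return DEMOTED;         /* first round: migrate to slower node */
            if (have_swap)
                    return RECLAIMED;       /* fallback: swap out as usual */
            return KEPT;                    /* no swap: page stays resident */
    }

    int main(void)
    {
            /* The injected-failure, no-swap case from the test above. */
            printf("disposition = %d (2 == KEPT)\n", shrink_one_page(false, false));
            return 0;
    }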
Hi, Michal,

Michal Hocko <mhocko@suse.com> writes:

[...]

>> > Btw. do you have any numbers from running this with some real world
>> > workload?
>>
>> Yes, quite a bit. Do you have a specific scenario in mind? Folks seem
>> to come at this in two different ways:
>>
>> Some want to know how much DRAM they can replace by buying some PMEM.
>> They tend to care about how much adding the (cheaper) PMEM slows them
>> down versus (expensive) DRAM. They're making a cost-benefit call.
>>
>> Others want to repurpose some PMEM they already have. They want to
>> know how much using PMEM in this way will speed them up. They will
>> basically take any speedup they can get.
>>
>> I ask because, as a kernel developer with PMEM in my systems, I find
>> the "I'll take what I can get" case more personally appealing. But,
>> the business folks are much more keen on the "DRAM replacement" use.
>> Do you have any thoughts on what you would like to see?
>
> I was thinking about typical large in-memory processing (e.g.
> in-memory databases) where the hot part of the working set is only a
> portion, and spilling over to slower memory can still be beneficial
> because IO + data preprocessing on cold data is much slower.

We have tested the patchset with postgresql and pgbench. On a machine with DRAM and PMEM, the kernel with the patchset improves the pgbench score by up to 22.1% compared with the DRAM-only + disk case. This comes from the reduced disk read throughput (lower by up to 70.8%).

Best Regards,
Huang, Ying