Message ID: 20220603134237.131362-1-aneesh.kumar@linux.ibm.com (mailing list archive)
Series: mm/demotion: Memory tiers and demotion
Hi Aneesh,

On Fri, Jun 03, 2022 at 07:12:28PM +0530, Aneesh Kumar K.V wrote:
> * The current tier initialization code always initializes
>   each memory-only NUMA node into a lower tier. But a memory-only
>   NUMA node may have a high performance memory device (e.g. a DRAM
>   device attached via CXL.mem or a DRAM-backed memory-only node on
>   a virtual machine) and should be put into a higher tier.

I have to disagree with this premise. The CXL.mem bus has different latency and bandwidth characteristics. It's also conceivable that cheaper and slower DRAM is connected to the CXL bus (think recycling DDR4 DIMMs after switching to DDR5). DRAM != DRAM.

Our experiments with production workloads show regressions of 15-30% in serviced requests when toptier DRAM is not distinguished from lower-tier DRAM. While that is fixable with manual tuning, your patches would reintroduce this regression, it seems.

Making tiers explicit is a good idea, but can we keep the current default that CPU-less nodes are of a lower tier than ones with CPUs? I'm having a hard time imagining where this wouldn't be true... Or why it shouldn't be those esoteric cases that need the manual tuning.
On 6/8/22 7:27 PM, Johannes Weiner wrote:
> Hi Aneesh,
>
> [...]
>
> Making tiers explicit is a good idea, but can we keep the current
> default that CPU-less nodes are of a lower tier than ones with CPUs?
> I'm having a hard time imagining where this wouldn't be true... Or
> why it shouldn't be those esoteric cases that need the manual tuning.

This was mostly driven by virtual machine configs, where we can find memory-only NUMA nodes depending on resource availability in the hypervisor.

Will these CXL devices be initialized by a driver? For example, if they are going to be initialized via dax kmem, we already consider them a lower memory tier with this patch series.

-aneesh
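[Editor's note: for readers following the thread, the default placement being debated — CPU-less (memory-only) nodes land in a lower tier, including dax-kmem-onlined memory — can be modeled in a few lines of userspace Python. This is an illustrative sketch only, not code from the patch series; the function and tier names are made up.]

```python
def default_tier(nodes):
    """Toy model of the default under discussion: a NUMA node with
    CPUs is toptier; a CPU-less (memory-only) node defaults to a
    lower tier. 'nodes' maps node id -> whether the node has CPUs."""
    return {nid: ("toptier" if has_cpu else "lowertier")
            for nid, has_cpu in nodes.items()}

# Node 0: CPUs + DRAM. Node 1: memory-only (e.g. dax kmem, or a
# memory-only node in a VM) -- demoted to the lower tier by default.
print(default_tier({0: True, 1: False}))
# → {0: 'toptier', 1: 'lowertier'}
```

Johannes's objection is precisely that this rule misclassifies fast DRAM-backed memory-only nodes, while Aneesh's point is that it matches the common VM and dax-kmem cases.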
On Wed, 8 Jun 2022 19:50:11 +0530, Aneesh Kumar K V <aneesh.kumar@linux.ibm.com> wrote:
> [...]
>
> This was mostly driven by virtual machine configs where we can find
> memory only NUMA nodes depending on the resource availability in the
> hypervisor.
>
> Will these CXL devices be initialized by a driver?

In many cases, no (almost all cases pre-CXL 2.0): they are set up by the BIOS / firmware and presented just like normal memory (except pmem, in which case kmem will be relevant, as you suggest). Hopefully everyone will follow the guidance and provide appropriate HMAT as well as SLIT.
If we want to do a better job with the default policy, we should take the actual distances into account (ideally including the detail HMAT provides).

Jonathan
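[Editor's note: Jonathan's suggestion — deriving the default tier from reported distances rather than from CPU presence — could look roughly like the sketch below. This is a hedged illustration with invented names and an arbitrary threshold; ACPI SLIT only specifies that local distance is 10, and neither the threshold nor the function is from the patch series.]

```python
def tier_by_distance(dist, cpu_nodes, threshold=20):
    """Assign each node a tier from its minimum SLIT-style relative
    distance to any CPU node (local = 10). The threshold of 20 is an
    arbitrary illustrative cut-off, not anything ACPI specifies."""
    tiers = {}
    for nid in dist:
        nearest = min(dist[c][nid] for c in cpu_nodes)
        tiers[nid] = "toptier" if nearest <= threshold else "lowertier"
    return tiers

# Node 0: CPUs + local DRAM. Node 1: CPU-less but fast CXL-attached
# DRAM (distance 17). Node 2: slower far memory (distance 28).
dist = {
    0: {0: 10, 1: 17, 2: 28},
    1: {0: 17, 1: 10, 2: 30},
    2: {0: 28, 1: 30, 2: 10},
}
print(tier_by_distance(dist, cpu_nodes=[0]))
# → {0: 'toptier', 1: 'toptier', 2: 'lowertier'}
```

Note how node 1, though CPU-less, ends up toptier under a distance-based policy — capturing the fast-DRAM-behind-CXL case raised earlier in the thread, which the CPU-presence heuristic alone would miss.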