
[RESEND,0/4] memory tiering: calculate abstract distance based on ACPI HMAT

Message ID 20230721012932.190742-1-ying.huang@intel.com (mailing list archive)

Message

Huang, Ying July 21, 2023, 1:29 a.m. UTC
We have the explicit memory tiers framework to manage systems with
multiple types of memory, e.g., DRAM in DIMM slots and CXL memory
devices.  In this framework, memory devices of the same kind are
grouped into memory types, and memory types are placed into memory
tiers.  To describe the performance of a memory type, an abstract
distance is defined, which is directly proportional to the memory
latency and inversely proportional to the memory bandwidth.  To keep
the code as simple as possible, dax/kmem uses a fixed abstract
distance to describe slow memory such as Optane DCPMM.
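
For readers new to the concept, here is a rough sketch in C of what
such a calculation can look like.  It is illustrative only: the
struct, the baseline constant, and the function names are assumptions
made for this example, not the code actually used in the series
(which derives the numbers from ACPI HMAT).

/* Illustrative sketch only -- not the exact code from this series. */
struct example_node_perf {
	unsigned int read_latency;	/* nanoseconds, as reported by HMAT */
	unsigned int read_bandwidth;	/* MB/s, as reported by HMAT */
};

/* Assumed baseline abstract distance for default DRAM. */
#define EXAMPLE_ADISTANCE_DRAM	512

static int example_calc_adistance(const struct example_node_perf *dram,
				  const struct example_node_perf *node)
{
	if (!dram->read_latency || !dram->read_bandwidth ||
	    !node->read_latency || !node->read_bandwidth)
		return -1;

	/*
	 * Scale around the DRAM baseline: distance grows with latency
	 * and shrinks with bandwidth, so slower memory gets a larger
	 * abstract distance and therefore lands in a lower tier.
	 */
	return (EXAMPLE_ADISTANCE_DRAM * node->read_latency / dram->read_latency +
		EXAMPLE_ADISTANCE_DRAM * dram->read_bandwidth / node->read_bandwidth) / 2;
}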

To support more memory types, this series adds a management mechanism
for abstract distance calculation algorithms, provides an algorithm
implementation based on ACPI HMAT, and uses the general abstract
distance calculation interface in the dax/kmem driver.  With this,
dax/kmem can support HBM (high bandwidth memory) in addition to the
original Optane DCPMM.
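
To make the shape of the new interface easier to picture, a sketch of
how the pieces might fit together is below.  This is written from my
reading of the series and may not match the patches exactly: the
callback body and the example_* names are assumptions, while
register_mt_adistance_algorithm(), mt_calc_adistance(), and
MEMTIER_ADISTANCE_DRAM are the names the series appears to use.

/* Sketch of the assumed algorithm-registration and lookup flow. */
#include <linux/init.h>
#include <linux/notifier.h>
#include <linux/memory-tiers.h>

/* Hypothetical helper: compute the distance for one node from HMAT data. */
extern int example_hmat_adistance_of_node(int nid);

static int example_hmat_adistance_cb(struct notifier_block *nb,
				     unsigned long nid, void *data)
{
	int *adist = data;

	*adist = example_hmat_adistance_of_node(nid);
	return NOTIFY_STOP;	/* this algorithm handled the node */
}

static struct notifier_block example_hmat_adist_nb = {
	.notifier_call = example_hmat_adistance_cb,
};

static int __init example_algorithm_init(void)
{
	/* Algorithm side: e.g. the ACPI HMAT implementation registers itself. */
	return register_mt_adistance_algorithm(&example_hmat_adist_nb);
}

/* Consumer side: e.g. dax/kmem asks for a node's abstract distance. */
static int example_node_adistance(int nid)
{
	int adist = MEMTIER_ADISTANCE_DRAM;	/* fallback if no algorithm answers */

	mt_calc_adistance(nid, &adist);
	return adist;
}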

Changelog:

V1 (from RFC):

- Added some comments per Aneesh's review comments.  Thanks!

Best Regards,
Huang, Ying

Comments

Alistair Popple July 21, 2023, 4:15 a.m. UTC | #1
Thanks for this Huang, I had been hoping to take a look at it this week
but have run out of time. I'm keen to do some testing with it as well.

Hopefully next week...

Andrew Morton July 24, 2023, 5:58 p.m. UTC | #2
On Fri, 21 Jul 2023 14:15:31 +1000 Alistair Popple <apopple@nvidia.com> wrote:

> Thanks for this Huang, I had been hoping to take a look at it this week
> but have run out of time. I'm keen to do some testing with it as well.

Thanks.  I'll queue this in mm-unstable for some testing.  Detailed
review and testing would be appreciated.

I made some adjustments to handle the renaming of destroy_memory_type()
to put_memory_type()
(https://lkml.kernel.org/r/20230706063905.543800-1-linmiaohe@huawei.com)
Bharata B Rao Aug. 1, 2023, 2:35 a.m. UTC | #3
On 24-Jul-23 11:28 PM, Andrew Morton wrote:
> On Fri, 21 Jul 2023 14:15:31 +1000 Alistair Popple <apopple@nvidia.com> wrote:
> 
>> Thanks for this Huang, I had been hoping to take a look at it this week
>> but have run out of time. I'm keen to do some testing with it as well.
> 
> Thanks.  I'll queue this in mm-unstable for some testing.  Detailed
> review and testing would be appreciated.

I gave this series a try on a 2P system with 2 CXL cards. I don't trust the
bandwidth and latency numbers reported by HMAT here, but FWIW, this patchset
puts the CXL nodes on a lower tier than DRAM nodes.
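
(For reference, assuming the memory tiering sysfs interface is
available on the test kernel, the resulting node-to-tier assignment
can be read from /sys/devices/virtual/memory_tiering/memory_tier*/nodelist;
a higher tier number corresponds to a larger abstract distance, i.e. a
slower tier.)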

Regards,
Bharata.
Huang, Ying Aug. 11, 2023, 6:26 a.m. UTC | #4
Hi, Rao,

Bharata B Rao <bharata@amd.com> writes:

> On 24-Jul-23 11:28 PM, Andrew Morton wrote:
>> On Fri, 21 Jul 2023 14:15:31 +1000 Alistair Popple <apopple@nvidia.com> wrote:
>> 
>>> Thanks for this Huang, I had been hoping to take a look at it this week
>>> but have run out of time. I'm keen to do some testing with it as well.
>> 
>> Thanks.  I'll queue this in mm-unstable for some testing.  Detailed
>> review and testing would be appreciated.
>
> I gave this series a try on a 2P system with 2 CXL cards. I don't trust the
> bandwidth and latency numbers reported by HMAT here, but FWIW, this patchset
> puts the CXL nodes on a lower tier than DRAM nodes.

Thank you very much!

Can I add your "Tested-by" for the series?

--
Best Regards,
Huang, Ying
Bharata B Rao Aug. 11, 2023, 7:49 a.m. UTC | #5
On 11-Aug-23 11:56 AM, Huang, Ying wrote:
> Hi, Rao,
> 
> Bharata B Rao <bharata@amd.com> writes:
> 
>> On 24-Jul-23 11:28 PM, Andrew Morton wrote:
>>> On Fri, 21 Jul 2023 14:15:31 +1000 Alistair Popple <apopple@nvidia.com> wrote:
>>>
>>>> Thanks for this Huang, I had been hoping to take a look at it this week
>>>> but have run out of time. I'm keen to do some testing with it as well.
>>>
>>> Thanks.  I'll queue this in mm-unstable for some testing.  Detailed
>>> review and testing would be appreciated.
>>
>> I gave this series a try on a 2P system with 2 CXL cards. I don't trust the
>> bandwidth and latency numbers reported by HMAT here, but FWIW, this patchset
>> puts the CXL nodes on a lower tier than DRAM nodes.
> 
> Thank you very much!
> 
> Can I add your "Tested-by" for the series?

Yes, if the above test qualifies for it, please go ahead.

Regards,
Bharata.