[-V9,0/6] NUMA balancing: optimize memory placement for memory tiering system

Message ID 20211008083938.1702663-1-ying.huang@intel.com (mailing list archive)

Message

Huang, Ying Oct. 8, 2021, 8:39 a.m. UTC
The changes since the last post are as follows,

- Rebased on v5.15-rc4

- Make "add promotion counter" the first patch per Yang's comments

--

With the advent of various new memory types, some machines will have
multiple types of memory, e.g. DRAM and PMEM (persistent memory).  The
memory subsystem of these machines can be called a memory tiering
system, because the performance of the different types of memory
differs.

After commit c221c0b0308f ("device-dax: "Hotplug" persistent memory
for use like normal RAM"), PMEM can be used as cost-effective volatile
memory in separate NUMA nodes.  In a typical memory tiering system,
each physical NUMA node has CPUs, DRAM, and PMEM.  The CPUs and the
DRAM are put in one logical node, while the PMEM is put in another
(fake) logical node.
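
For illustration only (not part of the patchset), this node layout can
be observed from user space with libnuma once the PMEM has been
onlined as system RAM; in the following minimal sketch the CPU-less
nodes are assumed to be the PMEM tier:

/* Illustrative only: classify NUMA nodes as "fast" (has CPUs, DRAM)
 * or "slow" (CPU-less, e.g. PMEM onlined as system RAM).
 * Build with: gcc -o nodes nodes.c -lnuma
 */
#include <stdio.h>
#include <numa.h>

int main(void)
{
        if (numa_available() < 0) {
                fprintf(stderr, "NUMA not available\n");
                return 1;
        }

        struct bitmask *cpus = numa_allocate_cpumask();

        for (int node = 0; node <= numa_max_node(); node++) {
                long long free_b;
                long long size_b = numa_node_size64(node, &free_b);

                if (size_b < 0 || numa_node_to_cpus(node, cpus) < 0)
                        continue;               /* node not present */
                printf("node %d: %s, %lld MB\n", node,
                       numa_bitmask_weight(cpus) ?
                       "CPU + DRAM" : "CPU-less (PMEM?)",
                       size_b >> 20);
        }

        numa_free_cpumask(cpus);
        return 0;
}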

To optimize the overall system performance, the hot pages should be
placed in the DRAM node.  To do that, we need to identify the hot
pages in the PMEM node and migrate them to the DRAM node via NUMA
migration.

The original NUMA balancing already has a set of mechanisms to
identify the pages recently accessed by the CPUs of a node and to
migrate those pages to that node.  We can reuse these mechanisms to
optimize the page placement in the memory tiering system.  This is
implemented in this patchset.
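
To make the reuse concrete, below is a minimal user-space model of the
promotion decision made during a NUMA hint page fault.  All names here
(node_is_slow(), should_promote(), the node numbers) are illustrative
and are not the kernel functions touched by the patches:

/* Toy model: a hint fault on a page that resides on a slow (PMEM)
 * node is treated as a promotion candidate towards the faulting
 * CPU's DRAM node.  Illustrative only.
 */
#include <stdbool.h>
#include <stdio.h>

struct toy_page {
        int nid;                /* node the page currently resides on */
};

/* In the topology described above, the CPU-less PMEM nodes are the
 * slow tier; here nodes 1 and 3 play that role.
 */
static bool node_is_slow(int nid)
{
        return nid == 1 || nid == 3;
}

/* Called when a NUMA hint fault hits @page from a CPU on @cpu_nid. */
static bool should_promote(const struct toy_page *page, int cpu_nid)
{
        /* Plain NUMA balancing migrates towards the accessing CPU's
         * node; for tiering, a hint fault on a slow-tier page is the
         * signal (refined by patch 4) to promote it to local DRAM.
         */
        return node_is_slow(page->nid) && !node_is_slow(cpu_nid);
}

int main(void)
{
        struct toy_page p = { .nid = 1 };       /* page on PMEM node 1 */

        printf("promote to node 0? %s\n",
               should_promote(&p, 0) ? "yes" : "no");
        return 0;
}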

On the other hand, the cold pages should be placed in the PMEM node.
So we also need to identify the cold pages in the DRAM node and
migrate them to the PMEM node.

Commit 26aa2d199d6f ("mm/migrate: demote pages during reclaim")
implemented a mechanism to demote cold DRAM pages to the PMEM node
under memory pressure.  Based on that, cold DRAM pages can be demoted
to the PMEM node proactively, to free some memory space on the DRAM
node to accommodate the promoted hot PMEM pages.  This is implemented
in this patchset too.
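
As a rough model of the proactive part, the DRAM node keeps a small
amount of free space ("headroom") for promotions by demoting its
coldest pages ahead of time.  The sketch below only illustrates that
idea with toy numbers and is not the reclaim code modified by the
patches:

/* Toy model of proactive demotion: keep enough free space on the
 * DRAM node to absorb promoted pages by demoting the coldest DRAM
 * pages to PMEM.  All numbers and helpers are illustrative.
 */
#include <stdio.h>

#define DRAM_CAPACITY   100     /* pages, toy numbers */
#define PROMO_HEADROOM   10     /* free pages kept for promotions */

static int dram_used = 97;      /* pages currently on the DRAM node */

/* Pretend to pick the coldest DRAM page and migrate it to PMEM. */
static void demote_one_cold_page(void)
{
        dram_used--;
        printf("demoted one cold page, DRAM used: %d/%d\n",
               dram_used, DRAM_CAPACITY);
}

int main(void)
{
        /* Reclaim on the DRAM node demotes cold pages, instead of
         * discarding or swapping them, until the headroom exists.
         */
        while (DRAM_CAPACITY - dram_used < PROMO_HEADROOM)
                demote_one_cold_page();

        return 0;
}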

We have tested the solution with the pmbench memory accessing
benchmark, using an 80:20 read/write ratio and a normal access address
distribution, on a 2-socket Intel server with Optane DC Persistent
Memory Modules.  The test results of the base kernel and the
step-by-step optimizations are as follows:

          Throughput  Promotion  DRAM bandwidth
            access/s       MB/s            MB/s
         ----------- ----------  --------------
Base      69263986.8                     1830.2
Patch 2  135691921.4      385.6         11315.9
Patch 3  133239016.8      384.7         11065.2
Patch 4  151310868.9      197.6         11397.0
Patch 5  142311252.8       99.3          9580.8
Patch 6  149044263.9       65.5          9922.8

The whole patchset improves the benchmark score by up to 115.2%
(149044263.9 vs. 69263986.8 accesses/s, i.e. about 2.15x the base
kernel throughput).  The basic NUMA-balancing-based optimization
(patch 2), the hot page selection algorithm (patch 4), and the
automatic threshold adjustment algorithm (patch 6) improve the
performance or reduce the overhead (promotion MB/s) greatly.
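
For reference, the hot page selection (patch 4) uses the time between
the page table scan and the NUMA hint page fault as the hotness
signal, and the threshold adjustment (patch 6) tunes the threshold on
that latency so that the promotion volume roughly follows a configured
rate limit.  The sketch below only models that control loop; the
names, units, and numbers are illustrative:

/* Toy model of hint-fault-latency based hot page selection with a
 * self-adjusting threshold.  Illustrative only.
 */
#include <stdbool.h>
#include <stdio.h>

static unsigned int threshold_ms = 1000;        /* hotness threshold */

/* A page is "hot" if its hint fault came soon after the table scan. */
static bool page_is_hot(unsigned int fault_latency_ms)
{
        return fault_latency_ms < threshold_ms;
}

/* Periodically nudge the threshold so the promotion rate tracks the
 * configured limit: too much promoted -> be pickier, too little ->
 * relax.
 */
static void adjust_threshold(unsigned int promoted_mb_last_period,
                             unsigned int rate_limit_mb)
{
        if (promoted_mb_last_period > rate_limit_mb)
                threshold_ms -= threshold_ms / 10;
        else
                threshold_ms += threshold_ms / 10;
}

int main(void)
{
        unsigned int latencies[] = { 120, 800, 1500, 90, 3000 };

        for (unsigned int i = 0; i < 5; i++)
                printf("latency %4u ms -> %s\n", latencies[i],
                       page_is_hot(latencies[i]) ? "promote" : "skip");

        adjust_threshold(600, 200);     /* promoted too much: tighten */
        printf("new threshold: %u ms\n", threshold_ms);
        return 0;
}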

Changelog:

v9:

- Rebased on v5.15-rc4

- Make "add promotion counter" the first patch per Yang's comments

v8:

- Rebased on v5.15-rc1

- Make user-specified threshold take effect sooner

v7:

- Rebased on the mmots tree of 2021-07-15.

- Some minor fixes.

v6:

- Rebased on the latest page demotion patchset (which is based on v5.11).

v5:

- Rebased on the latest page demotion patchset (which is based on v5.10).

v4:

- Rebased on the latest page demotion patchset (which is based on v5.9-rc6).

- Add page promotion counter.

v3:

- Move the rate limit control as late as possible per Mel Gorman's
  comments.

- Revise the hot page selection implementation to store page scan time
  in struct page.

- Code cleanup.

- Rebased on the latest page demotion patchset.

v2:

- Addressed comments for V1.

- Rebased on v5.5.

Best Regards,
Huang, Ying

Comments

Huang, Ying Oct. 8, 2021, 8:43 a.m. UTC | #1
Hi, Mel,

Huang Ying <ying.huang@intel.com> writes:

> The changes since the last post are as follows,
>
> - Rebased on v5.15-rc4
>
> - Make "add promotion counter" the first patch per Yang's comments

In this new version, all dependencies have been merged into the latest
upstream kernel, so it should be easier to review than the previous
version, of which you had reviewed a part.  Do you have time to take a
look at this new series now?

Best Regards,
Huang, Ying