diff mbox series

[v2] mm/slub: improve performance by skipping checked node in get_any_partial()

Message ID 20181120033119.30013-1-richard.weiyang@gmail.com (mailing list archive)
State New, archived
Headers show
Series [v2] mm/slub: improve performance by skipping checked node in get_any_partial() | expand

Commit Message

Wei Yang Nov. 20, 2018, 3:31 a.m. UTC
1. Background

  Current slub has three layers:

    * cpu_slab
    * percpu_partial
    * per node partial list

  The slub allocator tries to get an object from top to bottom. When it
  can't get one from the upper two layers, it searches the per-node
  partial list. This is done in get_partial().

  The call structure of get_partial() looks roughly like this:

      get_partial()
          get_partial_node()
          get_any_partial()
              for_each_zone_zonelist()

  The idea behind this is: first try the local node, then try other
  nodes if the caller doesn't specify one.
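
  As a toy illustration of this order, the three layers can be modeled
  in plain C (a sketch only, not the real kernel code; the stand-in
  variables and the simplified signatures are hypothetical):

```c
#include <assert.h>
#include <stddef.h>

/* Stand-ins for an object available at each layer (hypothetical). */
static void *cpu_slab_obj, *percpu_partial_obj, *node_partial_obj;

static void *get_partial_node(void) { return node_partial_obj; }
static void *get_any_partial(void)  { return NULL; /* walks node_zonelists */ }

/* Local node first, then any other node. */
static void *get_partial(void)
{
	void *obj = get_partial_node();
	return obj ? obj : get_any_partial();
}

/* Allocation falls through the three layers from top to bottom. */
static void *alloc_object(void)
{
	if (cpu_slab_obj)
		return cpu_slab_obj;		/* 1. cpu_slab */
	if (percpu_partial_obj)
		return percpu_partial_obj;	/* 2. percpu_partial */
	return get_partial();			/* 3. per node partial list */
}
```

  When the upper two layers are empty, every allocation reaches
  get_partial(), which is the path this patch touches.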

2. Room for Improvement

  Looking one step deeper, get_any_partial() searches for a suitable
  node with for_each_zone_zonelist(), which iterates over the
  node_zonelists.

  This introduces redundant checks on the same node, because:

    * the local node has already been checked in get_partial_node()
    * one node may have several zones in node_zonelists

3. Solution Proposed in Patch

  We can reduce these redundant checks by recording the last
  unsuccessful node and skipping it on later iterations.
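
  The intended effect can be sketched with a toy model of the zonelist
  walk, where a flat array of node ids stands in for node_zonelists
  (illustrative only; try_node() and the counters are made up, not
  kernel interfaces):

```c
#include <assert.h>

/* Pretend every node's partial list is empty, so every try fails. */
static int try_node(int nid) { (void)nid; return 0; }

/*
 * Walk the zone list, skipping zones on the most recently failed node.
 * exclude_nid starts as the local node, which get_partial_node()
 * already searched; *tries counts the remaining real lookups.
 */
static int walk_zonelist(const int *zone_nids, int nzones,
			 int exclude_nid, int *tries)
{
	for (int i = 0; i < nzones; i++) {
		if (zone_nids[i] == exclude_nid)
			continue;		/* redundant check avoided */
		(*tries)++;
		if (try_node(zone_nids[i]))
			return zone_nids[i];
		exclude_nid = zone_nids[i];	/* remember the failure */
	}
	return -1;
}
```

  On a one-node machine every zone shares the local nid, so the whole
  walk is skipped; with several zones per node, the back-to-back zones
  of a just-failed node are skipped as well.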

4. Tests & Result

  Test results show this improves performance a little, especially on a
  machine with only one node.

4.1 Test Description

  Two test cases were run on each of two system configurations.

  Test Cases:

    1. counter comparison
    2. kernel build test

  System Configuration:

    1. One node machine with 4G
    2. Four node machine with 8G

4.2 Result for Test 1

  Test 1: counter comparison

  This test uses a hacked kernel to count how many times
  get_any_partial() is invoked and how many times its inner loop
  iterates. The ratio of the two counters shows how many inner-loop
  iterations were skipped.

  Here is a snippet of the test patch:

  ---
  static void *get_any_partial() {
	get_partial_count++;

	do {
		for_each_zone_zonelist() {
			get_partial_try_count++;
		}
	} while ();

	return NULL;
  }
  ---

  The result of (get_partial_count / get_partial_try_count):

   +----------+----------------+------------+-------------+
   |          |       Base     |    Patched |  Improvement|
   +----------+----------------+------------+-------------+
   |One Node  |       1:3      |    1:0     |      - 100% |
   +----------+----------------+------------+-------------+
   |Four Nodes|       1:5.8    |    1:2.5   |      -  56% |
   +----------+----------------+------------+-------------+

4.3 Result for Test 2

  Test 2: kernel build

   Command used:

   > time make -j8 bzImage

   Each version/system-configuration combination was built four times;
   the average 'real' time of the four runs is compared.

   +----------+----------------+------------+-------------+
   |          |       Base     |   Patched  |  Improvement|
   +----------+----------------+------------+-------------+
   |One Node  |      4m41s     |   4m32s    |     - 4.47% |
   +----------+----------------+------------+-------------+
   |Four Nodes|      4m45s     |   4m39s    |     - 2.92% |
   +----------+----------------+------------+-------------+

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>

---
v3:
  * replace nmask with except to reduce potential stack overflow and copy
    overhead
  * test this in two cases and two system configurations and list the result

v2:
  * rewrite the changelog and add a comment based on Andrew's comment

---
 mm/slub.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

Comments

Andrew Morton Nov. 22, 2018, 3:05 a.m. UTC | #1
On Tue, 20 Nov 2018 11:31:19 +0800 Wei Yang <richard.weiyang@gmail.com> wrote:

> [...]

Looks good to me, but I'll await input from the slab maintainers before
proceeding any further.

I didn't like the variable name much, and the comment could be
improved.  Please review:


--- a/mm/slub.c~mm-slub-improve-performance-by-skipping-checked-node-in-get_any_partial-fix
+++ a/mm/slub.c
@@ -1873,7 +1873,7 @@ static void *get_partial_node(struct kme
  * Get a page from somewhere. Search in increasing NUMA distances.
  */
 static void *get_any_partial(struct kmem_cache *s, gfp_t flags,
-		struct kmem_cache_cpu *c, int except)
+		struct kmem_cache_cpu *c, int exclude_nid)
 {
 #ifdef CONFIG_NUMA
 	struct zonelist *zonelist;
@@ -1911,7 +1911,7 @@ static void *get_any_partial(struct kmem
 		for_each_zone_zonelist(zone, z, zonelist, high_zoneidx) {
 			struct kmem_cache_node *n;
 
-			if (except == zone_to_nid(zone))
+			if (exclude_nid == zone_to_nid(zone))
 				continue;
 
 			n = get_node(s, zone_to_nid(zone));
@@ -1931,12 +1931,13 @@ static void *get_any_partial(struct kmem
 				}
 			}
 			/*
-			 * Fail to get object from this node, either because
-			 *   1. Fails in if check
-			 *   2. NULl object returns from get_partial_node()
-			 * Skip it next time.
+			 * Failed to get an object from this node, either 
+			 * because
+			 *   1. Failure in the above if check
+			 *   2. NULL return from get_partial_node()
+			 * So skip this node next time.
 			 */
-			except = zone_to_nid(zone);
+			exclude_nid = zone_to_nid(zone);
 		}
 	} while (read_mems_allowed_retry(cpuset_mems_cookie));
 #endif
Wei Yang Nov. 22, 2018, 9:13 a.m. UTC | #2
On Wed, Nov 21, 2018 at 07:05:55PM -0800, Andrew Morton wrote:
>On Tue, 20 Nov 2018 11:31:19 +0800 Wei Yang <richard.weiyang@gmail.com> wrote:
>
>> [...]
>
>Looks good to me, but I'll await input from the slab maintainers before
>proceeding any further.
>
>I didn't like the variable name much, and the comment could be
>improved.  Please review:
>

Looks much better, thanks :-)

>
>[...]
Wei Yang Nov. 22, 2018, 11:41 p.m. UTC | #3
On Wed, Nov 21, 2018 at 07:05:55PM -0800, Andrew Morton wrote:
>On Tue, 20 Nov 2018 11:31:19 +0800 Wei Yang <richard.weiyang@gmail.com> wrote:
>
>> [...]
>
>Looks good to me, but I'll await input from the slab maintainers before
>proceeding any further.
>
>I didn't like the variable name much, and the comment could be
>improved.  Please review:
>

Can I add this?

Reviewed-by: Wei Yang <richard.weiyang@gmail.com>
Michal Hocko Nov. 23, 2018, 1:39 p.m. UTC | #4
On Thu 22-11-18 23:41:59, Wei Yang wrote:
> On Wed, Nov 21, 2018 at 07:05:55PM -0800, Andrew Morton wrote:
[...]
> >> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> 
> Reviewed-by: Wei Yang <richard.weiyang@gmail.com>

Why would you want to add reviewed tag to your own patch? Isn't the
s-o-b a sufficient sign of you being the author of the patch and
therefore the one who has reviewed the change before asking for merging?

Btw. Documentation/SubmittingPatches might come handy to understand the
process some more. Feel free to ask if there is something unclear.
Michal Hocko Nov. 23, 2018, 1:49 p.m. UTC | #5
On Fri 23-11-18 14:39:02, Michal Hocko wrote:
> On Thu 22-11-18 23:41:59, Wei Yang wrote:
> > On Wed, Nov 21, 2018 at 07:05:55PM -0800, Andrew Morton wrote:
> [...]
> > >> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> > 
> > Reviewed-by: Wei Yang <richard.weiyang@gmail.com>
> 
> Why would you want to add reviewed tag to your own patch? Isn't the
> s-o-b a sufficient sign of you being the author of the patch and
> therefore the one who has reviewed the change before asking for merging?

OK, it seems I've misunderstood. Did you mean Reviewed-by to the follow
up fixes by Andrew? If yes then sorry about my response. I do not want
to speak for Andrew but he usually just wants a "looks good" and will
eventually fold his changes into the original patch.
Wei Yang Nov. 23, 2018, 3:27 p.m. UTC | #6
On Fri, Nov 23, 2018 at 02:49:24PM +0100, Michal Hocko wrote:
>On Fri 23-11-18 14:39:02, Michal Hocko wrote:
>> On Thu 22-11-18 23:41:59, Wei Yang wrote:
>> > On Wed, Nov 21, 2018 at 07:05:55PM -0800, Andrew Morton wrote:
>> [...]
>> > >> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> > 
>> > Reviewed-by: Wei Yang <richard.weiyang@gmail.com>
>> 
>> Why would you want to add reviewed tag to your own patch? Isn't the
>> s-o-b a sufficient sign of you being and author of the patch and
>> therefore the one who has reviewed the change before asking for merging?
>
>OK, it seems I've misunderstood. Did you mean Reviewed-by to the follow
>up fixes by Andrew? If yes then sorry about my response. I do not want
>to speak for Andrew but he usually just wants a "looks good" and will
>eventually fold his changes into the original patch.

Yes, Reviewed-by is for Andrew's fix.

>-- 
>Michal Hocko
>SUSE Labs
Andrew Morton Dec. 20, 2018, 10:41 p.m. UTC | #7
Could someone please review this?

Thanks.

From: Wei Yang <richard.weiyang@gmail.com>
Subject: mm/slub.c: improve performance by skipping checked node in get_any_partial()

[...]

[akpm@linux-foundation.org: rename variable, tweak comment]
Link: http://lkml.kernel.org/r/20181120033119.30013-1-richard.weiyang@gmail.com
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/slub.c |   15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

--- a/mm/slub.c~mm-slub-improve-performance-by-skipping-checked-node-in-get_any_partial
+++ a/mm/slub.c
@@ -1877,7 +1877,7 @@ static void *get_partial_node(struct kme
  * Get a page from somewhere. Search in increasing NUMA distances.
  */
 static void *get_any_partial(struct kmem_cache *s, gfp_t flags,
-		struct kmem_cache_cpu *c)
+		struct kmem_cache_cpu *c, int exclude_nid)
 {
 #ifdef CONFIG_NUMA
 	struct zonelist *zonelist;
@@ -1915,6 +1915,9 @@ static void *get_any_partial(struct kmem
 		for_each_zone_zonelist(zone, z, zonelist, high_zoneidx) {
 			struct kmem_cache_node *n;
 
+			if (exclude_nid == zone_to_nid(zone))
+				continue;
+
 			n = get_node(s, zone_to_nid(zone));
 
 			if (n && cpuset_zone_allowed(zone, flags) &&
@@ -1931,6 +1934,14 @@ static void *get_any_partial(struct kmem
 					return object;
 				}
 			}
+			/*
+			 * Failed to get an object from this node, either
+			 * because
+			 *   1. Failure in the above if check
+			 *   2. NULL return from get_partial_node()
+			 * So skip this node next time.
+			 */
+			exclude_nid = zone_to_nid(zone);
 		}
 	} while (read_mems_allowed_retry(cpuset_mems_cookie));
 #endif
@@ -1955,7 +1966,7 @@ static void *get_partial(struct kmem_cac
 	if (object || node != NUMA_NO_NODE)
 		return object;
 
-	return get_any_partial(s, flags, c);
+	return get_any_partial(s, flags, c, searchnode);
 }
 
 #ifdef CONFIG_PREEMPT
Alexander Duyck Dec. 21, 2018, 12:25 a.m. UTC | #8
On Thu, 2018-12-20 at 14:41 -0800, Andrew Morton wrote:
> Could someone please review this?
> 
> Thanks.
> 
> From: Wei Yang <richard.weiyang@gmail.com>
> Subject: mm/slub.c: improve performance by skipping checked node in get_any_partial()
> 
> [...]
> 
> 2. Room for Improvement
> 
>   When we look one step deeper in get_any_partial(), it tries to get a
>   proper node by for_each_zone_zonelist(), which iterates on the
>   node_zonelists.
> 
>   This behavior would introduce some redundant check on the same node. 
>   Because:
> 
>     * the local node is already checked in get_partial_node()
>     * one node may have several zones on node_zonelists
> 

So it seems like there can be a few different behaviors based on
mempolicy_slab_node() being used to construct the zonelist. Do you
happen to know what memory policy your test process was running under?
Also have you tried using any of the other policies to gather data?

> [...]
> 
> --- a/mm/slub.c~mm-slub-improve-performance-by-skipping-checked-node-in-get_any_partial
> +++ a/mm/slub.c
> @@ -1877,7 +1877,7 @@ static void *get_partial_node(struct kme
>   * Get a page from somewhere. Search in increasing NUMA distances.
>   */
>  static void *get_any_partial(struct kmem_cache *s, gfp_t flags,
> -		struct kmem_cache_cpu *c)
> +		struct kmem_cache_cpu *c, int exclude_nid)
>  {
>  #ifdef CONFIG_NUMA
>  	struct zonelist *zonelist;
> @@ -1915,6 +1915,9 @@ static void *get_any_partial(struct kmem
>  		for_each_zone_zonelist(zone, z, zonelist, high_zoneidx) {
>  			struct kmem_cache_node *n;
>  
> +			if (exclude_nid == zone_to_nid(zone))
> +				continue;
> +
>  			n = get_node(s, zone_to_nid(zone));
>  
>  			if (n && cpuset_zone_allowed(zone, flags) &&
> @@ -1931,6 +1934,14 @@ static void *get_any_partial(struct kmem
>  					return object;
>  				}
>  			}
> +			/*
> +			 * Failed to get an object from this node, either
> +			 * because
> +			 *   1. Failure in the above if check
> +			 *   2. NULL return from get_partial_node()
> +			 * So skip this node next time.
> +			 */
> +			exclude_nid = zone_to_nid(zone);
>  		}
>  	} while (read_mems_allowed_retry(cpuset_mems_cookie));
>  #endif

So this piece gives me some concerns. You are updating the exclude_nid,
but as a result you are no longer excluding your original nid. So it
becomes possible that you are going to go back and search your original
exlcude_nid on the next pass if the zones are interleaved between nodes
aren't you?

Would it perhaps make more sense to instead replace
for_each_zone_zonelist with for_each_zone_zonelist_nodemask and then
just mask out any of the failing nodes?

> @@ -1955,7 +1966,7 @@ static void *get_partial(struct kmem_cac
>  	if (object || node != NUMA_NO_NODE)
>  		return object;
>  
> -	return get_any_partial(s, flags, c);
> +	return get_any_partial(s, flags, c, searchnode);
>  }
>  
>  #ifdef CONFIG_PREEMPT
> _
>
Christoph Lameter (Ampere) Dec. 21, 2018, 1:37 a.m. UTC | #9
On Thu, 20 Dec 2018, Andrew Morton wrote:

>   The result of (get_partial_count / get_partial_try_count):
>
>    +----------+----------------+------------+-------------+
>    |          |       Base     |    Patched |  Improvement|
>    +----------+----------------+------------+-------------+
>    |One Node  |       1:3      |    1:0     |      - 100% |

If you have one node then you already searched all your slabs. So we could
completely skip the get_any_partial() functionality in the non NUMA case
(if nr_node_ids == 1)
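In code, that early exit could be a one-liner at the top of the function (untested fragment):

```
static void *get_any_partial(struct kmem_cache *s, gfp_t flags,
		struct kmem_cache_cpu *c)
{
#ifdef CONFIG_NUMA
	/*
	 * With a single node, get_partial_node() has already searched
	 * every partial slab there is; nothing else can be found here.
	 */
	if (nr_node_ids == 1)
		return NULL;
	...
```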


>    +----------+----------------+------------+-------------+
>    |Four Nodes|       1:5.8    |    1:2.5   |      -  56% |
>    +----------+----------------+------------+-------------+

Hmm.... Ok but that is the extreme slowpath.

>    Each version/system configuration combination has four round kernel
>    build tests. Take the average result of real to compare.
>
>    +----------+----------------+------------+-------------+
>    |          |       Base     |   Patched  |  Improvement|
>    +----------+----------------+------------+-------------+
>    |One Node  |      4m41s     |   4m32s    |     - 4.47% |
>    +----------+----------------+------------+-------------+
>    |Four Nodes|      4m45s     |   4m39s    |     - 2.92% |
>    +----------+----------------+------------+-------------+

3% on the four node case? That means that the slowpath is taken
frequently. Wonder why?

Can we also see the variability? Since this is a NUMA system there is
bound to be some indeterminism in those numbers.
Wei Yang Dec. 21, 2018, 3:29 a.m. UTC | #10
On Thu, Dec 20, 2018 at 04:25:55PM -0800, Alexander Duyck wrote:
>On Thu, 2018-12-20 at 14:41 -0800, Andrew Morton wrote:
>> Could someone please review this?
>> 
>> Thanks.
>> 
>> From: Wei Yang <richard.weiyang@gmail.com>
>> Subject: mm/slub.c: improve performance by skipping checked node in get_any_partial()
>> 
>> 1. Background
>> 
>>   Current slub has three layers:
>> 
>>     * cpu_slab
>>     * percpu_partial
>>     * per node partial list
>> 
>>   Slub allocator tries to get an object from top to bottom.  When it
>>   can't get an object from the upper two layers, it will search the per
>>   node partial list.  This is done in get_partial().
>> 
>>   The abstraction of get_partial() looks like this:
>> 
>>       get_partial()
>>           get_partial_node()
>>           get_any_partial()
>>               for_each_zone_zonelist()
>> 
>>   The idea behind this is: first try a local node, then try other nodes
>>   if the caller doesn't specify a node.
>> 
>> 2. Room for Improvement
>> 
>>   When we look one step deeper in get_any_partial(), it tries to get a
>>   proper node by for_each_zone_zonelist(), which iterates on the
>>   node_zonelists.
>> 
>>   This behavior would introduce some redundant checks on the same node.
>>   Because:
>> 
>>     * the local node is already checked in get_partial_node()
>>     * one node may have several zones on node_zonelists
>> 
>
>So it seems like there can be a few different behaviors based on
>mempolicy_slab_node() being used to construct the zonelist. Do you
>happen to know what memory policy your test process was running under?
>Also have you tried using any of the other policies to gather data?
>

I have little knowledge about mempolicy, so the test is based on the
"default" configuration.


>> 3. Solution Proposed in Patch
>> 
>>   We could reduce these redundant checks by recording the last unsuccessful
>>   node and then skip it.
>> 
>> 4. Tests & Result
>> 
>>   After some tests, the result shows this may improve the system a little,
>>   especially on a machine with only one node.
>> 
>> 4.1 Test Description
>> 
>>   There are two cases for two system configurations.
>> 
>>   Test Cases:
>> 
>>     1. counter comparison
>>     2. kernel build test
>> 
>>   System Configuration:
>> 
>>     1. One node machine with 4G
>>     2. Four node machine with 8G
>> 
>> 4.2 Result for Test 1
>> 
>>   Test 1: counter comparison
>> 
>>   This is a test with hacked kernel to record times function
>>   get_any_partial() is invoked and times the inner loop iterates. By
>>   comparing the ratio of two counters, we get to know how many inner
>>   loops we skipped.
>> 
>>   Here is a snip of the test patch.
>> 
>>   ---
>>   static void *get_any_partial() {
>> 
>> 	get_partial_count++;
>> 
>>         do {
>> 		for_each_zone_zonelist() {
>> 			get_partial_try_count++;
>> 		}
>> 	} while();
>> 
>> 	return NULL;
>>   }
>>   ---
>> 
>>   The result of (get_partial_count / get_partial_try_count):
>> 
>>    +----------+----------------+------------+-------------+
>>    |          |       Base     |    Patched |  Improvement|
>>    +----------+----------------+------------+-------------+
>>    |One Node  |       1:3      |    1:0     |      - 100% |
>>    +----------+----------------+------------+-------------+
>>    |Four Nodes|       1:5.8    |    1:2.5   |      -  56% |
>>    +----------+----------------+------------+-------------+
>> 
>> 4.3 Result for Test 2
>> 
>>   Test 2: kernel build
>> 
>>    Command used:
>> 
>>    > time make -j8 bzImage
>> 
>>    Each version/system configuration combination has four round kernel
>>    build tests. Take the average result of real to compare.
>> 
>>    +----------+----------------+------------+-------------+
>>    |          |       Base     |   Patched  |  Improvement|
>>    +----------+----------------+------------+-------------+
>>    |One Node  |      4m41s     |   4m32s    |     - 4.47% |
>>    +----------+----------------+------------+-------------+
>>    |Four Nodes|      4m45s     |   4m39s    |     - 2.92% |
>>    +----------+----------------+------------+-------------+
>> 
>> [akpm@linux-foundation.org: rename variable, tweak comment]
>> Link: http://lkml.kernel.org/r/20181120033119.30013-1-richard.weiyang@gmail.com
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> Cc: Christoph Lameter <cl@linux.com>
>> Cc: Pekka Enberg <penberg@kernel.org>
>> Cc: David Rientjes <rientjes@google.com>
>> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
>> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
>> ---
>> 
>>  mm/slub.c |   15 +++++++++++++--
>>  1 file changed, 13 insertions(+), 2 deletions(-)
>> 
>> --- a/mm/slub.c~mm-slub-improve-performance-by-skipping-checked-node-in-get_any_partial
>> +++ a/mm/slub.c
>> @@ -1877,7 +1877,7 @@ static void *get_partial_node(struct kme
>>   * Get a page from somewhere. Search in increasing NUMA distances.
>>   */
>>  static void *get_any_partial(struct kmem_cache *s, gfp_t flags,
>> -		struct kmem_cache_cpu *c)
>> +		struct kmem_cache_cpu *c, int exclude_nid)
>>  {
>>  #ifdef CONFIG_NUMA
>>  	struct zonelist *zonelist;
>> @@ -1915,6 +1915,9 @@ static void *get_any_partial(struct kmem
>>  		for_each_zone_zonelist(zone, z, zonelist, high_zoneidx) {
>>  			struct kmem_cache_node *n;
>>  
>> +			if (exclude_nid == zone_to_nid(zone))
>> +				continue;
>> +
>>  			n = get_node(s, zone_to_nid(zone));
>>  
>>  			if (n && cpuset_zone_allowed(zone, flags) &&
>> @@ -1931,6 +1934,14 @@ static void *get_any_partial(struct kmem
>>  					return object;
>>  				}
>>  			}
>> +			/*
>> +			 * Failed to get an object from this node, either
>> +			 * because
>> +			 *   1. Failure in the above if check
>> +			 *   2. NULL return from get_partial_node()
>> +			 * So skip this node next time.
>> +			 */
>> +			exclude_nid = zone_to_nid(zone);
>>  		}
>>  	} while (read_mems_allowed_retry(cpuset_mems_cookie));
>>  #endif
>
>So this piece gives me some concerns. You are updating the exclude_nid,
>but as a result you are no longer excluding your original nid. So it
>becomes possible that you are going to go back and search your original
>exclude_nid on the next pass if the zones are interleaved between nodes
>aren't you?

After Michal's change, the current zonelist just has node order.

This means once we have an exclude_nid, we will skip all zones on this
node.

>
>Would it perhaps make more sense to instead replace
>for_each_zone_zonelist with for_each_zone_zonelist_nodemask and then
>just mask out any of the failing nodes?

My first version was implemented that way, but Michal mentioned it
would be heavy on the stack and take much time to copy a nodemask_t.

>
>> @@ -1955,7 +1966,7 @@ static void *get_partial(struct kmem_cac
>>  	if (object || node != NUMA_NO_NODE)
>>  		return object;
>>  
>> -	return get_any_partial(s, flags, c);
>> +	return get_any_partial(s, flags, c, searchnode);
>>  }
>>  
>>  #ifdef CONFIG_PREEMPT
>> _
>>
Wei Yang Dec. 21, 2018, 3:33 a.m. UTC | #11
On Fri, Dec 21, 2018 at 01:37:38AM +0000, Christopher Lameter wrote:
>On Thu, 20 Dec 2018, Andrew Morton wrote:
>
>>   The result of (get_partial_count / get_partial_try_count):
>>
>>    +----------+----------------+------------+-------------+
>>    |          |       Base     |    Patched |  Improvement|
>>    +----------+----------------+------------+-------------+
>>    |One Node  |       1:3      |    1:0     |      - 100% |
>
>If you have one node then you already searched all your slabs. So we could
>completely skip the get_any_partial() functionality in the non NUMA case
>(if nr_node_ids == 1)
>

Yes, agree.

>
>>    +----------+----------------+------------+-------------+
>>    |Four Nodes|       1:5.8    |    1:2.5   |      -  56% |
>>    +----------+----------------+------------+-------------+
>
>Hmm.... Ok but that is the extreme slowpath.
>
>>    Each version/system configuration combination has four round kernel
>>    build tests. Take the average result of real to compare.
>>
>>    +----------+----------------+------------+-------------+
>>    |          |       Base     |   Patched  |  Improvement|
>>    +----------+----------------+------------+-------------+
>>    |One Node  |      4m41s     |   4m32s    |     - 4.47% |
>>    +----------+----------------+------------+-------------+
>>    |Four Nodes|      4m45s     |   4m39s    |     - 2.92% |
>>    +----------+----------------+------------+-------------+
>
>3% on the four node case? That means that the slowpath is taken
>frequently. Wonder why?

Hmm... not sure.

>
>Can we also see the variability? Since this is a NUMA system there is
>bound to be some indeterminism in those numbers.

Oops, I have deleted those raw data. I need to retest this.
Wei Yang Dec. 24, 2018, 10:03 p.m. UTC | #12
On Fri, Dec 21, 2018 at 01:37:38AM +0000, Christopher Lameter wrote:
>On Thu, 20 Dec 2018, Andrew Morton wrote:
>
>>   The result of (get_partial_count / get_partial_try_count):
>>
>>    +----------+----------------+------------+-------------+
>>    |          |       Base     |    Patched |  Improvement|
>>    +----------+----------------+------------+-------------+
>>    |One Node  |       1:3      |    1:0     |      - 100% |
>
>If you have one node then you already searched all your slabs. So we could
>completely skip the get_any_partial() functionality in the non NUMA case
>(if nr_node_ids == 1)
>
>
>>    +----------+----------------+------------+-------------+
>>    |Four Nodes|       1:5.8    |    1:2.5   |      -  56% |
>>    +----------+----------------+------------+-------------+
>
>Hmm.... Ok but that is the extreme slowpath.
>
>>    Each version/system configuration combination has four round kernel
>>    build tests. Take the average result of real to compare.
>>
>>    +----------+----------------+------------+-------------+
>>    |          |       Base     |   Patched  |  Improvement|
>>    +----------+----------------+------------+-------------+
>>    |One Node  |      4m41s     |   4m32s    |     - 4.47% |
>>    +----------+----------------+------------+-------------+
>>    |Four Nodes|      4m45s     |   4m39s    |     - 2.92% |
>>    +----------+----------------+------------+-------------+
>
>3% on the four node case? That means that the slowpath is taken
>frequently. Wonder why?
>
>Can we also see the variability? Since this is a NUMA system there is
>bound to be some indeterminism in those numbers.

Hmm... I rebuilt the kernel and tried the experiment again, but found I
can't reproduce these statistics. The data show it is worse than the
baseline and fluctuates heavily...

Base                    Patched 
                        
real    5m49.652s       real    8m9.515s
user    19m0.581s       user    17m30.296s
sys     2m31.906s       sys     2m21.445s
                        
real    5m47.145s       real    6m47.437s
user    19m17.445s      user    18m33.461s
sys     2m41.931s       sys     2m43.249s
                        
real    7m2.043s        real    5m38.539s
user    18m11.723s      user    19m40.552s
sys     2m46.443s       sys     2m43.771s
                        
real    5m31.797s       real    12m59.936s
user    19m13.984s      user    15m47.602s
sys     2m34.727s       sys     2m20.385s

Patch

diff --git a/mm/slub.c b/mm/slub.c
index e3629cd7aff1..3d93a07d86d9 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1873,7 +1873,7 @@  static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
  * Get a page from somewhere. Search in increasing NUMA distances.
  */
 static void *get_any_partial(struct kmem_cache *s, gfp_t flags,
-		struct kmem_cache_cpu *c)
+		struct kmem_cache_cpu *c, int except)
 {
 #ifdef CONFIG_NUMA
 	struct zonelist *zonelist;
@@ -1911,6 +1911,9 @@  static void *get_any_partial(struct kmem_cache *s, gfp_t flags,
 		for_each_zone_zonelist(zone, z, zonelist, high_zoneidx) {
 			struct kmem_cache_node *n;
 
+			if (except == zone_to_nid(zone))
+				continue;
+
 			n = get_node(s, zone_to_nid(zone));
 
 			if (n && cpuset_zone_allowed(zone, flags) &&
@@ -1927,6 +1930,13 @@  static void *get_any_partial(struct kmem_cache *s, gfp_t flags,
 					return object;
 				}
 			}
+			/*
+			 * Failed to get an object from this node, either because
+			 *   1. Failure in the above if check
+			 *   2. NULL returned from get_partial_node()
+			 * Skip this node next time.
+			 */
+			except = zone_to_nid(zone);
 		}
 	} while (read_mems_allowed_retry(cpuset_mems_cookie));
 #endif
@@ -1951,7 +1961,7 @@  static void *get_partial(struct kmem_cache *s, gfp_t flags, int node,
 	if (object || node != NUMA_NO_NODE)
 		return object;
 
-	return get_any_partial(s, flags, c);
+	return get_any_partial(s, flags, c, searchnode);
 }
 
 #ifdef CONFIG_PREEMPT