
mm: compaction: improve /proc trigger for full node memory compaction

Message ID 1619098678-8501-1-git-send-email-charante@codeaurora.org (mailing list archive)
State New, archived
Series mm: compaction: improve /proc trigger for full node memory compaction

Commit Message

Charan Teja Kalla April 22, 2021, 1:37 p.m. UTC
The existing /proc/sys/vm/compact_memory interface does full node
compaction when the user writes an arbitrary value to it, and is targeted
at use cases such as an app launcher preparing the system before the
target application runs. The downside is that even if there are
sufficient higher-order pages left in the system for the target
application to run, full node compaction is still triggered, wasting
CPU cycles. This problem can be solved if it is known when sufficient
higher-order pages are available in the system, so that full node
compaction can be stopped mid-way. Proactive compaction[1] can provide
this information about the availability of higher-order pages in the
system (it checks for COMPACTION_HPAGE_ORDER pages, which is usually
order-9) and can therefore be used when triggering full node compaction.
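
For context: the availability check proactive compaction performs boils
down to an external-fragmentation score for COMPACTION_HPAGE_ORDER, i.e.
the share of free memory sitting in blocks too small to back such an
allocation. A simplified, self-contained sketch of that calculation
(illustrative only, not the exact kernel helpers):

/*
 * Illustrative only: external fragmentation with respect to @order, as
 * the percentage of free memory that cannot back an allocation of that
 * order. 0 means no fragmentation, 100 means no suitable block exists.
 * free_per_order[i] holds the number of free blocks of order i.
 */
static unsigned int extfrag_percent(const unsigned long *free_per_order,
                                    int nr_orders, int order)
{
        unsigned long free_pages = 0, suitable_pages = 0;
        int i;

        for (i = 0; i < nr_orders; i++) {
                free_pages += free_per_order[i] << i;
                if (i >= order)
                        suitable_pages += free_per_order[i] << i;
        }

        if (!free_pages)
                return 0;

        return (free_pages - suitable_pages) * 100 / free_pages;
}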

This patch adds a new /proc interface,
/proc/sys/vm/proactive_compact_memory; writing an arbitrary value to it
triggers full node compaction, which can be stopped mid-way once
sufficient higher-order (COMPACTION_HPAGE_ORDER) pages are available in
the system. The level of availability the user is looking for can be
given as input through /proc/sys/vm/compaction_proactiveness.
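
As an illustration of the intended flow (not part of the patch itself),
an app launcher could set the proactiveness target once and then poke
the new node just before launching. A minimal userspace sketch, assuming
this patch is applied:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Write a small string to a /proc file; returns 0 on success. */
static int write_str(const char *path, const char *val)
{
        int fd = open(path, O_WRONLY);

        if (fd < 0)
                return -1;
        if (write(fd, val, strlen(val)) < 0) {
                close(fd);
                return -1;
        }
        return close(fd);
}

int main(void)
{
        /* Higher proactiveness means a lower target fragmentation score. */
        if (write_str("/proc/sys/vm/compaction_proactiveness", "20"))
                perror("compaction_proactiveness");

        /*
         * The written value is unused; the write itself starts compaction,
         * which the kernel may stop early once enough
         * COMPACTION_HPAGE_ORDER pages are available.
         */
        if (write_str("/proc/sys/vm/proactive_compact_memory", "1"))
                perror("proactive_compact_memory");

        return 0;
}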

[1]https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit?id=facdaa917c4d5a376d09d25865f5a863f906234a

Signed-off-by: Charan Teja Reddy <charante@codeaurora.org>
---
 include/linux/compaction.h |  3 +++
 kernel/sysctl.c            |  7 +++++++
 mm/compaction.c            | 25 ++++++++++++++++++++++---
 3 files changed, 32 insertions(+), 3 deletions(-)

Comments

Mel Gorman April 27, 2021, 8:09 a.m. UTC | #1
On Thu, Apr 22, 2021 at 07:07:58PM +0530, Charan Teja Reddy wrote:
> The existing /proc/sys/vm/compact_memory interface does full node
> compaction when the user writes an arbitrary value to it, and is targeted
> at use cases such as an app launcher preparing the system before the
> target application runs.

The intent behind compact_memory was a debugging interface to tell
the difference between an application failing to allocate a huge page
prematurely and the inability of compaction to find a free page.

> The downside is that even if there are
> sufficient higher-order pages left in the system for the target
> application to run, full node compaction is still triggered, wasting
> CPU cycles. This problem can be solved if it is known when sufficient
> higher-order pages are available in the system, so that full node
> compaction can be stopped mid-way. Proactive compaction[1] can provide
> this information about the availability of higher-order pages in the
> system (it checks for COMPACTION_HPAGE_ORDER pages, which is usually
> order-9) and can therefore be used when triggering full node compaction.
> 
> This patch adds a new /proc interface,
> /proc/sys/vm/proactive_compact_memory; writing an arbitrary value to it
> triggers full node compaction, which can be stopped mid-way once
> sufficient higher-order (COMPACTION_HPAGE_ORDER) pages are available in
> the system. The level of availability the user is looking for can be
> given as input through /proc/sys/vm/compaction_proactiveness.
> 
> [1]https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit?id=facdaa917c4d5a376d09d25865f5a863f906234a
> 
> Signed-off-by: Charan Teja Reddy <charante@codeaurora.org>

Hence, while I do not object to the patch as such, I'm wary of the trend
towards improving explicit out-of-band compaction via proc interfaces. I
would have preferred the focus to be on reducing the cost of compaction
so that direct allocation requests succeed quickly, or on improving
background compaction via kcompactd when there have been recent failures.
Charan Teja Kalla April 28, 2021, 3:32 p.m. UTC | #2
Thanks Mel for your comments!!

On 4/27/2021 1:39 PM, Mel Gorman wrote:
>> The existing /proc/sys/vm/compact_memory interface does full node
>> compaction when the user writes an arbitrary value to it, and is targeted
>> at use cases such as an app launcher preparing the system before the
>> target application runs.
> The intent behind compact_memory was a debugging interface to tell
> the difference between an application failing to allocate a huge page
> prematurely and the inability of compaction to find a free page.
> 

Thanks for clarifying this.

>> This patch adds a new /proc interface,
>> /proc/sys/vm/proactive_compact_memory; writing an arbitrary value to it
>> triggers full node compaction, which can be stopped mid-way once
>> sufficient higher-order (COMPACTION_HPAGE_ORDER) pages are available in
>> the system. The level of availability the user is looking for can be
>> given as input through /proc/sys/vm/compaction_proactiveness.
>>
>> [1]https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit?id=facdaa917c4d5a376d09d25865f5a863f906234a
>>
>> Signed-off-by: Charan Teja Reddy <charante@codeaurora.org>
> Hence, while I do not object to the patch as such, I'm wary of the trend
> towards improving explicit out-of-band compaction via proc interfaces. I

I think people rely on /proc/../compact_memory for on-demand compaction
because fragmentation affects performance and kcompactd returns as soon
as even a single page of the order we are looking for is available. Say
an app's launch completion time depends on memory fragmentation: the
less fragmented the system is, the less time the app needs to spend on
allocation, as it gets more higher-order pages. With the current
compaction methods we may get just one higher-order page at a time (as
compaction stops after that), which can affect the launch completion
time. The compact_memory node can help in these situations, as the
system administrator can defragment the system whenever required by
writing to it. This is just a theoretical example.

Although it is intended as a debugging interface, it has found a lot of
other applications too.

This patch aims to improve this interface by taking help from the
tunables provided by proactive compaction.
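
For reference, the early-exit behaviour this patch reuses comes from the
proactive-compaction termination check: compaction keeps running only
while the node's fragmentation score stays above a watermark derived
from sysctl_compaction_proactiveness (roughly 100 minus the
proactiveness value). A sketch that roughly mirrors the check in
mm/compaction.c (simplified; exact details vary by kernel version):

static bool should_proactive_compact_node(pg_data_t *pgdat)
{
        unsigned int wmark_high;

        if (!sysctl_compaction_proactiveness || kswapd_is_running(pgdat))
                return false;

        /* Watermark derived from 100 - sysctl_compaction_proactiveness. */
        wmark_high = fragmentation_score_wmark(pgdat, false);

        /* Keep compacting only while the node is still too fragmented. */
        return fragmentation_score_node(pgdat) > wmark_high;
}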

> would have preferred the focus to be on reducing the cost of compaction
> so that direct allocation requests succeed quickly, or on improving
> background compaction via kcompactd when there have been recent failures.
Charan Teja Kalla May 3, 2021, 11:37 a.m. UTC | #3
Hello,

A gentle ping to get your review comments. They will be of great help to me.

As explained below, though the compact_memory node is intended for
debugging purposes, it has other applications too. This patch just aims
to improve it by taking help from proactive compaction.

Also, triggering proactive compaction every 500 msec is not always
required (say I mostly need higher-order pages only while launching a
set of apps; then the work done by proactive compaction every 500 msec
is not useful at other times). Thus users can disable proactive
compaction (sysctl.compaction_proactiveness = 0) and, when required, do
the out-of-band compaction using the provided interface.

If a separate /proc node shouldn't be present just for this, then the
other solution I am thinking of is (sketched below):
1) Trigger proactive compaction on every write to
sysctl.compaction_proactiveness, instead of waiting for the 500 msec
wakeup, so that users can immediately turn proactive compaction on/off
when required.
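
A purely hypothetical sketch of what such a handler could look like (the
pgdat field proactive_compact_trigger does not exist in the current tree
and is named here only for illustration):

int compaction_proactiveness_sysctl_handler(struct ctl_table *table,
                int write, void *buffer, size_t *length, loff_t *ppos)
{
        int rc, nid;

        rc = proc_dointvec_minmax(table, write, buffer, length, ppos);
        if (rc)
                return rc;

        if (write && sysctl_compaction_proactiveness) {
                for_each_online_node(nid) {
                        pg_data_t *pgdat = NODE_DATA(nid);

                        if (pgdat->proactive_compact_trigger)
                                continue;

                        /*
                         * Hypothetical flag asking kcompactd to run
                         * proactive compaction immediately.
                         */
                        pgdat->proactive_compact_trigger = true;
                        wake_up_interruptible(&pgdat->kcompactd_wait);
                }
        }

        return 0;
}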

--Thanks

On 4/28/2021 9:02 PM, Charan Teja Kalla wrote:
> Thanks Mel for your comments!!
> 
> On 4/27/2021 1:39 PM, Mel Gorman wrote:
>>> The existing /proc/sys/vm/compact_memory interface does full node
>>> compaction when the user writes an arbitrary value to it, and is targeted
>>> at use cases such as an app launcher preparing the system before the
>>> target application runs.
>> The intent behind compact_memory was a debugging interface to tell
>> the difference between an application failing to allocate a huge page
>> prematurely and the inability of compaction to find a free page.
>>
> 
> Thanks for clarifying this.
> 
>>> This patch adds a new /proc interface,
>>> /proc/sys/vm/proactive_compact_memory; writing an arbitrary value to it
>>> triggers full node compaction, which can be stopped mid-way once
>>> sufficient higher-order (COMPACTION_HPAGE_ORDER) pages are available in
>>> the system. The level of availability the user is looking for can be
>>> given as input through /proc/sys/vm/compaction_proactiveness.
>>>
>>> [1]https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit?id=facdaa917c4d5a376d09d25865f5a863f906234a
>>>
>>> Signed-off-by: Charan Teja Reddy <charante@codeaurora.org>
>> Hence, while I do not object to the patch as such, I'm wary of the trend
>> towards improving explicit out-of-band compaction via proc interfaces. I
> 
> I think people rely on /proc/../compact_memory for on-demand compaction
> because fragmentation affects performance and kcompactd returns as soon
> as even a single page of the order we are looking for is available. Say
> an app's launch completion time depends on memory fragmentation: the
> less fragmented the system is, the less time the app needs to spend on
> allocation, as it gets more higher-order pages. With the current
> compaction methods we may get just one higher-order page at a time (as
> compaction stops after that), which can affect the launch completion
> time. The compact_memory node can help in these situations, as the
> system administrator can defragment the system whenever required by
> writing to it. This is just a theoretical example.
> 
> Although it is intended as a debugging interface, it has found a lot of
> other applications too.
> 
> This patch aims to improve this interface by taking help from the
> tunables provided by proactive compaction.
> 
>> would have preferred the focus to be on reducing the cost of compaction
>> so that direct allocation requests succeed quickly, or on improving
>> background compaction via kcompactd when there have been recent failures.
>

Patch

diff --git a/include/linux/compaction.h b/include/linux/compaction.h
index ed4070e..af8f6c5 100644
--- a/include/linux/compaction.h
+++ b/include/linux/compaction.h
@@ -82,9 +82,12 @@  static inline unsigned long compact_gap(unsigned int order)
 
 #ifdef CONFIG_COMPACTION
 extern int sysctl_compact_memory;
+extern int sysctl_proactive_compact_memory;
 extern unsigned int sysctl_compaction_proactiveness;
 extern int sysctl_compaction_handler(struct ctl_table *table, int write,
 			void *buffer, size_t *length, loff_t *ppos);
+extern int sysctl_proactive_compaction_handler(struct ctl_table *table,
+		int write, void *buffer, size_t *length, loff_t *ppos);
 extern int sysctl_extfrag_threshold;
 extern int sysctl_compact_unevictable_allowed;
 
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 62fbd09..ceb5c61 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -2862,6 +2862,13 @@  static struct ctl_table vm_table[] = {
 		.proc_handler	= sysctl_compaction_handler,
 	},
 	{
+		.procname       = "proactive_compact_memory",
+		.data           = &sysctl_proactive_compact_memory,
+		.maxlen         = sizeof(int),
+		.mode           = 0200,
+		.proc_handler   = sysctl_proactive_compaction_handler,
+	},
+	{
 		.procname	= "compaction_proactiveness",
 		.data		= &sysctl_compaction_proactiveness,
 		.maxlen		= sizeof(sysctl_compaction_proactiveness),
diff --git a/mm/compaction.c b/mm/compaction.c
index e04f447..2b40b03 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -2588,13 +2588,13 @@  enum compact_result try_to_compact_pages(gfp_t gfp_mask, unsigned int order,
  * due to various back-off conditions, such as, contention on per-node or
  * per-zone locks.
  */
-static void proactive_compact_node(pg_data_t *pgdat)
+static void proactive_compact_node(pg_data_t *pgdat, enum migrate_mode mode)
 {
 	int zoneid;
 	struct zone *zone;
 	struct compact_control cc = {
 		.order = -1,
-		.mode = MIGRATE_SYNC_LIGHT,
+		.mode = mode,
 		.ignore_skip_hint = true,
 		.whole_zone = true,
 		.gfp_mask = GFP_KERNEL,
@@ -2657,6 +2657,17 @@  static void compact_nodes(void)
 		compact_node(nid);
 }
 
+static void proactive_compact_nodes(void)
+{
+	int nid;
+
+	/* Flush pending updates to the LRU lists */
+	lru_add_drain_all();
+	for_each_online_node(nid)
+		proactive_compact_node(NODE_DATA(nid), MIGRATE_SYNC);
+}
+
+int sysctl_proactive_compact_memory;
 /* The written value is actually unused, all memory is compacted */
 int sysctl_compact_memory;
 
@@ -2680,6 +2691,14 @@  int sysctl_compaction_handler(struct ctl_table *table, int write,
 	return 0;
 }
 
+int sysctl_proactive_compaction_handler(struct ctl_table *table, int write,
+			void *buffer, size_t *length, loff_t *ppos)
+{
+	if (write)
+		proactive_compact_nodes();
+
+	return 0;
+}
 #if defined(CONFIG_SYSFS) && defined(CONFIG_NUMA)
 static ssize_t sysfs_compact_node(struct device *dev,
 			struct device_attribute *attr,
@@ -2881,7 +2900,7 @@  static int kcompactd(void *p)
 				continue;
 			}
 			prev_score = fragmentation_score_node(pgdat);
-			proactive_compact_node(pgdat);
+			proactive_compact_node(pgdat, MIGRATE_SYNC_LIGHT);
 			score = fragmentation_score_node(pgdat);
 			/*
 			 * Defer proactive compaction if the fragmentation