[RFC,v2] ksm: Assist buddy allocator to assemble 1-order pages

Message ID 153959597844.26723.5798112367236156151.stgit@localhost.localdomain (mailing list archive)
State New, archived
Series [RFC,v2] ksm: Assist buddy allocator to assemble 1-order pages

Commit Message

Kirill Tkhai Oct. 15, 2018, 9:33 a.m. UTC
v2: Style improvements

try_to_merge_two_pages() merges two pages: one of them
is a page of the currently scanned mm, and the second
is a page with an identical hash from the unstable
tree. Currently, we merge the page from the unstable
tree into the first one, and then free it.

The idea of this patch is to prefer freeing whichever
of the two pages has a free neighbour (i.e., a
neighbour with zero page_count()). This allows the
buddy allocator to assemble at least a 1-order set
from the freed page and its neighbour; this is a kind
of cheap passive compaction.

A 1-order page set consists of the pages with PFNs
[2n, 2n+1] (even, odd), so the neighbour's pfn is
calculated via XOR with 1. We check that the resulting
pfn is valid and that the neighbour's page_count() is
zero, and in that case prefer merging into @tree_page.
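
For illustration, a minimal userspace sketch of this
neighbour calculation (not part of the patch; it models
PFNs as plain integers, while the kernel uses
page_to_pfn()/pfn_to_page()):

	#include <stdio.h>

	/* Buddy of an order-0 page: flip the lowest PFN bit. */
	static unsigned long buddy_pfn(unsigned long pfn)
	{
		return pfn ^ 1;
	}

	int main(void)
	{
		unsigned long pfn;

		/* Pages [2n, 2n+1] pair up into 1-order sets. */
		for (pfn = 40; pfn < 44; pfn++)
			printf("pfn %lu -> buddy pfn %lu\n",
			       pfn, buddy_pfn(pfn));
		return 0;
	}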

There is a small difference from the current behaviour
on the error path: if the second
try_to_merge_with_ksm_page() fails, we return from
try_to_merge_two_pages() with @tree_page already
removed from the unstable tree. This does not seem to
matter, but if we want no change at all, it is not a
problem to move remove_rmap_item_from_tree() from
try_to_merge_with_ksm_page() to its callers.

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
---
 mm/ksm.c |   16 ++++++++++++++++
 1 file changed, 16 insertions(+)

Comments

Claudio Imbrenda Oct. 15, 2018, 10:38 a.m. UTC | #1
I don't have objections to this patch, but I wonder how much impact it
would have. Have you performed any tests? Does it really have such a big
impact on the availability of order-1 page blocks?


Claudio

On Mon, 15 Oct 2018 12:33:36 +0300
Kirill Tkhai <ktkhai@virtuozzo.com> wrote:

> [...]
Kirill Tkhai Oct. 15, 2018, 11:56 a.m. UTC | #2
On 15.10.2018 13:38, Claudio Imbrenda wrote:
> I don't have objections to this patch, but I wonder how much impact it
> would have. Have you performed any tests? Does it really have such a big
> impact on the availability of order-1 page blocks?

I have no synthetic tests for compaction, which is why this patch is
an RFC. Maybe you can suggest something? On my test machine I added a
debug patch on top of this one, which adds a counter of the cases
where @tree_page is preferred, and the counter does increment.
An order-1 page is just a building brick of higher-order pages, so
this patch helps to assemble those as well.
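
The debug patch itself was not posted; roughly, it was something
like the following sketch (the counter name is made up for
illustration):

	/* hypothetical counter near the top of mm/ksm.c */
	static atomic64_t ksm_tree_page_preferred = ATOMIC64_INIT(0);

	...
		if (pfn_valid(pfn) && page_count(pfn_to_page(pfn)) == 0) {
			/* count how often @tree_page wins */
			atomic64_inc(&ksm_tree_page_preferred);
			swap(rmap_item, tree_rmap_item);
			swap(page, tree_page);
		}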

> 
> On Mon, 15 Oct 2018 12:33:36 +0300
> Kirill Tkhai <ktkhai@virtuozzo.com> wrote:
> 
>> [...]

Patch

diff --git a/mm/ksm.c b/mm/ksm.c
index 5b0894b45ee5..005508c86d0a 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1321,6 +1321,22 @@  static struct page *try_to_merge_two_pages(struct rmap_item *rmap_item,
 {
 	int err;
 
+	if (IS_ENABLED(CONFIG_COMPACTION)) {
+		unsigned long pfn;
+
+		/*
+		 * Find the neighbour of @page forming a 1-order pair
+		 * with it in the buddy allocator, and check whether
+		 * the neighbour is free. If so, try to use @tree_page
+		 * as the ksm page and to free @page instead.
+		 */
+		pfn = page_to_pfn(page) ^ 1;
+		if (pfn_valid(pfn) && page_count(pfn_to_page(pfn)) == 0) {
+			swap(rmap_item, tree_rmap_item);
+			swap(page, tree_page);
+		}
+	}
+
 	err = try_to_merge_with_ksm_page(rmap_item, page, NULL);
 	if (!err) {
 		err = try_to_merge_with_ksm_page(tree_rmap_item,