
[6/6] mm/z3fold: use release_z3fold_page_locked() to release locked z3fold page

Message ID 20210619093151.1492174-7-linmiaohe@huawei.com (mailing list archive)
State New
Series Cleanup and fixup for z3fold

Commit Message

Miaohe Lin June 19, 2021, 9:31 a.m. UTC
We should use release_z3fold_page_locked() to release a z3fold page when it's
locked, although it looks harmless to use release_z3fold_page() now.

Fixes: dcf5aedb24f8 ("z3fold: stricter locking and more careful reclaim")
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/z3fold.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
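
For context: both release_z3fold_page() and release_z3fold_page_locked() are
kref release callbacks, and per the commit message the _locked variant is the
one intended for callers that already hold the z3fold page lock, so that the
lock is handled before the page can go away. The reclaim path changed here
takes that lock earlier, which is why it has to pass the locked variant to
kref_put(). Below is a minimal sketch of the general pattern, not the actual
z3fold code; the demo_* names are hypothetical.

#include <linux/kref.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct demo_obj {
	struct kref refcount;
	spinlock_t lock;
};

/* Release callback for callers that do NOT hold obj->lock. */
static void demo_release(struct kref *ref)
{
	struct demo_obj *obj = container_of(ref, struct demo_obj, refcount);

	kfree(obj);
}

/* Release callback for callers that DO hold obj->lock. */
static void demo_release_locked(struct kref *ref)
{
	struct demo_obj *obj = container_of(ref, struct demo_obj, refcount);

	spin_unlock(&obj->lock);	/* drop the lock before freeing */
	kfree(obj);
}

/* Caller holds obj->lock here, so the _locked release callback is required. */
static bool demo_put_locked(struct demo_obj *obj)
{
	if (kref_put(&obj->refcount, demo_release_locked))
		return true;		/* last ref dropped, object freed */

	spin_unlock(&obj->lock);	/* object still alive, unlock as usual */
	return false;
}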

Comments

Hillf Danton June 20, 2021, 12:26 a.m. UTC | #1
On Sat, 19 Jun 2021 17:31:51 +0800 Miaohe Lin wrote:
> We should use release_z3fold_page_locked() to release a z3fold page when it's
> locked, although it looks harmless to use release_z3fold_page() now.
> 
> Fixes: dcf5aedb24f8 ("z3fold: stricter locking and more careful reclaim")
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  mm/z3fold.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/mm/z3fold.c b/mm/z3fold.c
> index 196d886a3436..b3c0577b8095 100644
> --- a/mm/z3fold.c
> +++ b/mm/z3fold.c
> @@ -1372,7 +1372,7 @@ static int z3fold_reclaim_page(struct z3fold_pool *pool, unsigned int retries)
>  			if (zhdr->foreign_handles ||
>  			    test_and_set_bit(PAGE_CLAIMED, &page->private)) {
>  				if (kref_put(&zhdr->refcount,
> -						release_z3fold_page))
> +						release_z3fold_page_locked))
>  					atomic64_dec(&pool->pages_nr);

LGTM. JFYI, another issue in z3fold was reported [1]; if the fix proposed there
makes any sense to you, feel free to pick it up and ask Mike for his tests.

[1] https://lore.kernel.org/linux-mm/20210316061351.1649-1-hdanton@sina.com/
Miaohe Lin June 22, 2021, 1:49 p.m. UTC | #2
On 2021/6/20 8:26, Hillf Danton wrote:
> On Sat, 19 Jun 2021 17:31:51 +0800 Miaohe Lin wrote:
>> We should use release_z3fold_page_locked() to release a z3fold page when it's
>> locked, although it looks harmless to use release_z3fold_page() now.
>>
>> Fixes: dcf5aedb24f8 ("z3fold: stricter locking and more careful reclaim")
>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>> ---
>>  mm/z3fold.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/mm/z3fold.c b/mm/z3fold.c
>> index 196d886a3436..b3c0577b8095 100644
>> --- a/mm/z3fold.c
>> +++ b/mm/z3fold.c
>> @@ -1372,7 +1372,7 @@ static int z3fold_reclaim_page(struct z3fold_pool *pool, unsigned int retries)
>>  			if (zhdr->foreign_handles ||
>>  			    test_and_set_bit(PAGE_CLAIMED, &page->private)) {
>>  				if (kref_put(&zhdr->refcount,
>> -						release_z3fold_page))
>> +						release_z3fold_page_locked))
>>  					atomic64_dec(&pool->pages_nr);
> 
> LGTM. JFYI, another issue in z3fold was reported [1]; if the fix proposed there
> makes any sense to you, feel free to pick it up and ask Mike for his tests.
> 

Thank you for the review and reply.

I browsed [1] several times but failed to figure out the root cause. I did find some
bugs and possible race windows during a previous code inspection. I think we can try
fixing these first and see whether [1] is (hopefully) fixed. :)
Thanks again.

> [1] https://lore.kernel.org/linux-mm/20210316061351.1649-1-hdanton@sina.com/
> .
>

Patch

diff --git a/mm/z3fold.c b/mm/z3fold.c
index 196d886a3436..b3c0577b8095 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -1372,7 +1372,7 @@ static int z3fold_reclaim_page(struct z3fold_pool *pool, unsigned int retries)
 			if (zhdr->foreign_handles ||
 			    test_and_set_bit(PAGE_CLAIMED, &page->private)) {
 				if (kref_put(&zhdr->refcount,
-						release_z3fold_page))
+						release_z3fold_page_locked))
 					atomic64_dec(&pool->pages_nr);
 				else
 					z3fold_page_unlock(zhdr);