
[2/3] mm/zsmalloc.c: combine two atomic ops in zs_pool_dec_isolated()

Message ID 20210624123930.1769093-3-linmiaohe@huawei.com (mailing list archive)
State New
Series Cleanup for zsmalloc

Commit Message

Miaohe Lin June 24, 2021, 12:39 p.m. UTC
atomic_long_dec_and_test() is equivalent to atomic_long_dec() followed by
atomic_long_read() == 0. Use it to make the code more succinct.

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/zsmalloc.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

Comments

Muchun Song June 25, 2021, 5:01 a.m. UTC | #1
On Thu, Jun 24, 2021 at 8:40 PM Miaohe Lin <linmiaohe@huawei.com> wrote:
>
> atomic_long_dec_and_test() is equivalent to atomic_long_dec() and
> atomic_long_read() == 0. Use it to make code more succinct.

Actually, they are not equal. atomic_long_dec_and_test() implies a
full memory barrier around it, but atomic_long_dec() and
atomic_long_read() don't.

An RMW operation that has a return value is equivalent to the following:

smp_mb__before_atomic()
non-RMW operations or RMW operations that have no return value
smp_mb__after_atomic()
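
Sketched against the counter in this patch (an illustration of the
ordering only; the real dec_and_test() also returns whether the result
hit zero):

    smp_mb__before_atomic();                 /* full barrier before */
    atomic_long_dec(&pool->isolated_pages);  /* the unordered RMW */
    smp_mb__after_atomic();                  /* full barrier after */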

Thanks.

>
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  mm/zsmalloc.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 1476289b619f..0b4b23740d78 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -1828,13 +1828,12 @@ static void putback_zspage_deferred(struct zs_pool *pool,
>  static inline void zs_pool_dec_isolated(struct zs_pool *pool)
>  {
>         VM_BUG_ON(atomic_long_read(&pool->isolated_pages) <= 0);
> -       atomic_long_dec(&pool->isolated_pages);
>         /*
>          * There's no possibility of racing, since wait_for_isolated_drain()
>          * checks the isolated count under &class->lock after enqueuing
>          * on migration_wait.
>          */
> -       if (atomic_long_read(&pool->isolated_pages) == 0 && pool->destroying)
> +       if (atomic_long_dec_and_test(&pool->isolated_pages) && pool->destroying)
>                 wake_up_all(&pool->migration_wait);
>  }
>
> --
> 2.23.0
>
Miaohe Lin June 25, 2021, 6:32 a.m. UTC | #2
On 2021/6/25 13:01, Muchun Song wrote:
> On Thu, Jun 24, 2021 at 8:40 PM Miaohe Lin <linmiaohe@huawei.com> wrote:
>>
>> atomic_long_dec_and_test() is equivalent to atomic_long_dec() and
>> atomic_long_read() == 0. Use it to make code more succinct.
> 
> Actually, they are not equal. atomic_long_dec_and_test implies a
> full memory barrier around it but atomic_long_dec and atomic_long_read
> don't.
> 

Many thanks for your comment. They are indeed not completely equal, as you said.
What I mean is that they can do the same thing we want in this specific context.
Thanks again.

> That RMW operations that have a return value is equal to the following.
> 
> smp_mb__before_atomic()
> non-RMW operations or RMW operations that have no return value
> smp_mb__after_atomic()
> 
> Thanks.
> 
>>
>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>> ---
>>  mm/zsmalloc.c | 3 +--
>>  1 file changed, 1 insertion(+), 2 deletions(-)
>>
>> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
>> index 1476289b619f..0b4b23740d78 100644
>> --- a/mm/zsmalloc.c
>> +++ b/mm/zsmalloc.c
>> @@ -1828,13 +1828,12 @@ static void putback_zspage_deferred(struct zs_pool *pool,
>>  static inline void zs_pool_dec_isolated(struct zs_pool *pool)
>>  {
>>         VM_BUG_ON(atomic_long_read(&pool->isolated_pages) <= 0);
>> -       atomic_long_dec(&pool->isolated_pages);
>>         /*
>>          * There's no possibility of racing, since wait_for_isolated_drain()
>>          * checks the isolated count under &class->lock after enqueuing
>>          * on migration_wait.
>>          */
>> -       if (atomic_long_read(&pool->isolated_pages) == 0 && pool->destroying)
>> +       if (atomic_long_dec_and_test(&pool->isolated_pages) && pool->destroying)
>>                 wake_up_all(&pool->migration_wait);
>>  }
>>
>> --
>> 2.23.0
>>
> .
>
Muchun Song June 25, 2021, 7:29 a.m. UTC | #3
On Fri, Jun 25, 2021 at 2:32 PM Miaohe Lin <linmiaohe@huawei.com> wrote:
>
> On 2021/6/25 13:01, Muchun Song wrote:
> > On Thu, Jun 24, 2021 at 8:40 PM Miaohe Lin <linmiaohe@huawei.com> wrote:
> >>
> >> atomic_long_dec_and_test() is equivalent to atomic_long_dec() and
> >> atomic_long_read() == 0. Use it to make code more succinct.
> >
> > Actually, they are not equal. atomic_long_dec_and_test implies a
> > full memory barrier around it but atomic_long_dec and atomic_long_read
> > don't.
> >
>
> Many thanks for comment. They are indeed not completely equal as you said.
> What I mean is they can do the same things we want in this specified context.
> Thanks again.

I don't think so. Using the individual operations eliminates the memory barriers.
We will pay for the barrier if we use atomic_long_dec_and_test() here.

>
> > That RMW operations that have a return value is equal to the following.
> >
> > smp_mb__before_atomic()
> > non-RMW operations or RMW operations that have no return value
> > smp_mb__after_atomic()
> >
> > Thanks.
> >
> >>
> >> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> >> ---
> >>  mm/zsmalloc.c | 3 +--
> >>  1 file changed, 1 insertion(+), 2 deletions(-)
> >>
> >> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> >> index 1476289b619f..0b4b23740d78 100644
> >> --- a/mm/zsmalloc.c
> >> +++ b/mm/zsmalloc.c
> >> @@ -1828,13 +1828,12 @@ static void putback_zspage_deferred(struct zs_pool *pool,
> >>  static inline void zs_pool_dec_isolated(struct zs_pool *pool)
> >>  {
> >>         VM_BUG_ON(atomic_long_read(&pool->isolated_pages) <= 0);
> >> -       atomic_long_dec(&pool->isolated_pages);
> >>         /*
> >>          * There's no possibility of racing, since wait_for_isolated_drain()
> >>          * checks the isolated count under &class->lock after enqueuing
> >>          * on migration_wait.
> >>          */
> >> -       if (atomic_long_read(&pool->isolated_pages) == 0 && pool->destroying)
> >> +       if (atomic_long_dec_and_test(&pool->isolated_pages) && pool->destroying)
> >>                 wake_up_all(&pool->migration_wait);
> >>  }
> >>
> >> --
> >> 2.23.0
> >>
> > .
> >
>
Miaohe Lin June 25, 2021, 8:46 a.m. UTC | #4
On 2021/6/25 15:29, Muchun Song wrote:
> On Fri, Jun 25, 2021 at 2:32 PM Miaohe Lin <linmiaohe@huawei.com> wrote:
>>
>> On 2021/6/25 13:01, Muchun Song wrote:
>>> On Thu, Jun 24, 2021 at 8:40 PM Miaohe Lin <linmiaohe@huawei.com> wrote:
>>>>
>>>> atomic_long_dec_and_test() is equivalent to atomic_long_dec() and
>>>> atomic_long_read() == 0. Use it to make code more succinct.
>>>
>>> Actually, they are not equal. atomic_long_dec_and_test implies a
>>> full memory barrier around it but atomic_long_dec and atomic_long_read
>>> don't.
>>>
>>
>> Many thanks for comment. They are indeed not completely equal as you said.
>> What I mean is they can do the same things we want in this specified context.
>> Thanks again.
> 
> I don't think so. Using individual operations can eliminate memory barriers.
> We will pay for the barrier if we use atomic_long_dec_and_test here.

The combination of atomic_long_dec() and atomic_long_read() is a rare usage and
looks somewhat odd. I think it's worth doing this at the cost of the barrier.

> 
>>
>>> That RMW operations that have a return value is equal to the following.
>>>
>>> smp_mb__before_atomic()
>>> non-RMW operations or RMW operations that have no return value
>>> smp_mb__after_atomic()
>>>
>>> Thanks.
>>>
>>>>
>>>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>>>> ---
>>>>  mm/zsmalloc.c | 3 +--
>>>>  1 file changed, 1 insertion(+), 2 deletions(-)
>>>>
>>>> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
>>>> index 1476289b619f..0b4b23740d78 100644
>>>> --- a/mm/zsmalloc.c
>>>> +++ b/mm/zsmalloc.c
>>>> @@ -1828,13 +1828,12 @@ static void putback_zspage_deferred(struct zs_pool *pool,
>>>>  static inline void zs_pool_dec_isolated(struct zs_pool *pool)
>>>>  {
>>>>         VM_BUG_ON(atomic_long_read(&pool->isolated_pages) <= 0);
>>>> -       atomic_long_dec(&pool->isolated_pages);
>>>>         /*
>>>>          * There's no possibility of racing, since wait_for_isolated_drain()
>>>>          * checks the isolated count under &class->lock after enqueuing
>>>>          * on migration_wait.
>>>>          */
>>>> -       if (atomic_long_read(&pool->isolated_pages) == 0 && pool->destroying)
>>>> +       if (atomic_long_dec_and_test(&pool->isolated_pages) && pool->destroying)
>>>>                 wake_up_all(&pool->migration_wait);
>>>>  }
>>>>
>>>> --
>>>> 2.23.0
>>>>
>>> .
>>>
>>
> .
>
Miaohe Lin June 25, 2021, 9:32 a.m. UTC | #5
On 2021/6/25 16:46, Miaohe Lin wrote:
> On 2021/6/25 15:29, Muchun Song wrote:
>> On Fri, Jun 25, 2021 at 2:32 PM Miaohe Lin <linmiaohe@huawei.com> wrote:
>>>
>>> On 2021/6/25 13:01, Muchun Song wrote:
>>>> On Thu, Jun 24, 2021 at 8:40 PM Miaohe Lin <linmiaohe@huawei.com> wrote:
>>>>>
>>>>> atomic_long_dec_and_test() is equivalent to atomic_long_dec() and
>>>>> atomic_long_read() == 0. Use it to make code more succinct.
>>>>
>>>> Actually, they are not equal. atomic_long_dec_and_test implies a
>>>> full memory barrier around it but atomic_long_dec and atomic_long_read
>>>> don't.
>>>>
>>>
>>> Many thanks for comment. They are indeed not completely equal as you said.
>>> What I mean is they can do the same things we want in this specified context.
>>> Thanks again.
>>
>> I don't think so. Using individual operations can eliminate memory barriers.
>> We will pay for the barrier if we use atomic_long_dec_and_test here.
> 
> The combination of atomic_long_dec and atomic_long_read usecase is rare and looks somehow
> weird. I think it's worth to do this with the cost of barrier.
> 

It seems there is a race between zs_pool_dec_isolated() and zs_unregister_migration()
if the read of pool->destroying is reordered before the atomic_long_dec() and
atomic_long_read() ops, so a memory barrier is necessary:

zs_pool_dec_isolated				zs_unregister_migration
  pool->destroying != true
						  pool->destroying = true;
						  smp_mb();
						  wait_for_isolated_drain
						    wait_event with atomic_long_read(&pool->isolated_pages) != 0
  atomic_long_dec(&pool->isolated_pages);
  atomic_long_read(&pool->isolated_pages) == 0

Thus the wake_up_all() is missed.
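
If that is right, the full barrier implied by atomic_long_dec_and_test()
would close the window by pairing with the smp_mb() in
zs_unregister_migration(). A sketch of the patched helper with the
pairing spelled out (assuming the race above is real):

    static inline void zs_pool_dec_isolated(struct zs_pool *pool)
    {
            VM_BUG_ON(atomic_long_read(&pool->isolated_pages) <= 0);
            /*
             * The implied full barrier keeps the load of
             * pool->destroying from being reordered before the
             * decrement; it pairs with the smp_mb() issued after
             * pool->destroying is set in zs_unregister_migration().
             */
            if (atomic_long_dec_and_test(&pool->isolated_pages) &&
                pool->destroying)
                    wake_up_all(&pool->migration_wait);
    }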
And the comment in zs_pool_dec_isolated() says:
/*
 * There's no possibility of racing, since wait_for_isolated_drain()
 * checks the isolated count under &class->lock after enqueuing
 * on migration_wait.
 */

But I found that &class->lock is in fact not acquired in wait_for_isolated_drain().
So I think the above race is possible. Does this make sense to you?
Thanks.

>>
>>>
>>>> That RMW operations that have a return value is equal to the following.
>>>>
>>>> smp_mb__before_atomic()
>>>> non-RMW operations or RMW operations that have no return value
>>>> smp_mb__after_atomic()
>>>>
>>>> Thanks.
>>>>
>>>>>
>>>>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>>>>> ---
>>>>>  mm/zsmalloc.c | 3 +--
>>>>>  1 file changed, 1 insertion(+), 2 deletions(-)
>>>>>
>>>>> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
>>>>> index 1476289b619f..0b4b23740d78 100644
>>>>> --- a/mm/zsmalloc.c
>>>>> +++ b/mm/zsmalloc.c
>>>>> @@ -1828,13 +1828,12 @@ static void putback_zspage_deferred(struct zs_pool *pool,
>>>>>  static inline void zs_pool_dec_isolated(struct zs_pool *pool)
>>>>>  {
>>>>>         VM_BUG_ON(atomic_long_read(&pool->isolated_pages) <= 0);
>>>>> -       atomic_long_dec(&pool->isolated_pages);
>>>>>         /*
>>>>>          * There's no possibility of racing, since wait_for_isolated_drain()
>>>>>          * checks the isolated count under &class->lock after enqueuing
>>>>>          * on migration_wait.
>>>>>          */
>>>>> -       if (atomic_long_read(&pool->isolated_pages) == 0 && pool->destroying)
>>>>> +       if (atomic_long_dec_and_test(&pool->isolated_pages) && pool->destroying)
>>>>>                 wake_up_all(&pool->migration_wait);
>>>>>  }
>>>>>
>>>>> --
>>>>> 2.23.0
>>>>>
>>>> .
>>>>
>>>
>> .
>>
>
Muchun Song June 25, 2021, 10:40 a.m. UTC | #6
On Fri, Jun 25, 2021 at 5:32 PM Miaohe Lin <linmiaohe@huawei.com> wrote:
>
> On 2021/6/25 16:46, Miaohe Lin wrote:
> > On 2021/6/25 15:29, Muchun Song wrote:
> >> On Fri, Jun 25, 2021 at 2:32 PM Miaohe Lin <linmiaohe@huawei.com> wrote:
> >>>
> >>> On 2021/6/25 13:01, Muchun Song wrote:
> >>>> On Thu, Jun 24, 2021 at 8:40 PM Miaohe Lin <linmiaohe@huawei.com> wrote:
> >>>>>
> >>>>> atomic_long_dec_and_test() is equivalent to atomic_long_dec() and
> >>>>> atomic_long_read() == 0. Use it to make code more succinct.
> >>>>
> >>>> Actually, they are not equal. atomic_long_dec_and_test implies a
> >>>> full memory barrier around it but atomic_long_dec and atomic_long_read
> >>>> don't.
> >>>>
> >>>
> >>> Many thanks for comment. They are indeed not completely equal as you said.
> >>> What I mean is they can do the same things we want in this specified context.
> >>> Thanks again.
> >>
> >> I don't think so. Using individual operations can eliminate memory barriers.
> >> We will pay for the barrier if we use atomic_long_dec_and_test here.
> >
> > The combination of atomic_long_dec and atomic_long_read usecase is rare and looks somehow
> > weird. I think it's worth to do this with the cost of barrier.
> >
>
> It seems there is race between zs_pool_dec_isolated and zs_unregister_migration if pool->destroying
> is reordered before the atomic_long_dec and atomic_long_read ops. So this memory barrier is necessary:
>
> zs_pool_dec_isolated                            zs_unregister_migration
>   pool->destroying != true
>                                                   pool->destroying = true;
>                                                   smp_mb();
>                                                   wait_for_isolated_drain
>                                                     wait_event with atomic_long_read(&pool->isolated_pages) != 0
>   atomic_long_dec(&pool->isolated_pages);
>   atomic_long_read(&pool->isolated_pages) == 0

I am not familiar with zsmalloc, so I do not know whether the race
you mentioned above exists. But if it does, the fix still does not
make sense to me. If an smp_mb() has to be inserted between
atomic_long_dec() and atomic_long_read(), you should insert
smp_mb__after_atomic() instead of using atomic_long_dec_and_test(),
because smp_mb__after_atomic() can be optimized away on certain
architectures (e.g. x86_64).
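
Something like the following (only a sketch; as said, I am not sure
the barrier is needed here at all):

    atomic_long_dec(&pool->isolated_pages);
    /* order the decrement against the reads below */
    smp_mb__after_atomic();
    if (atomic_long_read(&pool->isolated_pages) == 0 && pool->destroying)
            wake_up_all(&pool->migration_wait);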

Thanks.

>
> Thus wake_up_all is missed.
> And the comment in zs_pool_dec_isolated() said:
> /*
>  * There's no possibility of racing, since wait_for_isolated_drain()
>  * checks the isolated count under &class->lock after enqueuing
>  * on migration_wait.
>  */
>
> But I found &class->lock is indeed not acquired for wait_for_isolated_drain(). So I think the above race
> is possible. Does this make senses for you ?
> Thanks.
>
> >>
> >>>
> >>>> That RMW operations that have a return value is equal to the following.
> >>>>
> >>>> smp_mb__before_atomic()
> >>>> non-RMW operations or RMW operations that have no return value
> >>>> smp_mb__after_atomic()
> >>>>
> >>>> Thanks.
> >>>>
> >>>>>
> >>>>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> >>>>> ---
> >>>>>  mm/zsmalloc.c | 3 +--
> >>>>>  1 file changed, 1 insertion(+), 2 deletions(-)
> >>>>>
> >>>>> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> >>>>> index 1476289b619f..0b4b23740d78 100644
> >>>>> --- a/mm/zsmalloc.c
> >>>>> +++ b/mm/zsmalloc.c
> >>>>> @@ -1828,13 +1828,12 @@ static void putback_zspage_deferred(struct zs_pool *pool,
> >>>>>  static inline void zs_pool_dec_isolated(struct zs_pool *pool)
> >>>>>  {
> >>>>>         VM_BUG_ON(atomic_long_read(&pool->isolated_pages) <= 0);
> >>>>> -       atomic_long_dec(&pool->isolated_pages);
> >>>>>         /*
> >>>>>          * There's no possibility of racing, since wait_for_isolated_drain()
> >>>>>          * checks the isolated count under &class->lock after enqueuing
> >>>>>          * on migration_wait.
> >>>>>          */
> >>>>> -       if (atomic_long_read(&pool->isolated_pages) == 0 && pool->destroying)
> >>>>> +       if (atomic_long_dec_and_test(&pool->isolated_pages) && pool->destroying)
> >>>>>                 wake_up_all(&pool->migration_wait);
> >>>>>  }
> >>>>>
> >>>>> --
> >>>>> 2.23.0
> >>>>>
> >>>> .
> >>>>
> >>>
> >> .
> >>
> >
>
Miaohe Lin July 1, 2021, 2:43 a.m. UTC | #7
On 2021/6/25 18:40, Muchun Song wrote:
> On Fri, Jun 25, 2021 at 5:32 PM Miaohe Lin <linmiaohe@huawei.com> wrote:
>>
>> On 2021/6/25 16:46, Miaohe Lin wrote:
>>> On 2021/6/25 15:29, Muchun Song wrote:
>>>> On Fri, Jun 25, 2021 at 2:32 PM Miaohe Lin <linmiaohe@huawei.com> wrote:
>>>>>
>>>>> On 2021/6/25 13:01, Muchun Song wrote:
>>>>>> On Thu, Jun 24, 2021 at 8:40 PM Miaohe Lin <linmiaohe@huawei.com> wrote:
>>>>>>>
>>>>>>> atomic_long_dec_and_test() is equivalent to atomic_long_dec() and
>>>>>>> atomic_long_read() == 0. Use it to make code more succinct.
>>>>>>
>>>>>> Actually, they are not equal. atomic_long_dec_and_test implies a
>>>>>> full memory barrier around it but atomic_long_dec and atomic_long_read
>>>>>> don't.
>>>>>>
>>>>>
>>>>> Many thanks for comment. They are indeed not completely equal as you said.
>>>>> What I mean is they can do the same things we want in this specified context.
>>>>> Thanks again.
>>>>
>>>> I don't think so. Using individual operations can eliminate memory barriers.
>>>> We will pay for the barrier if we use atomic_long_dec_and_test here.
>>>
>>> The combination of atomic_long_dec and atomic_long_read usecase is rare and looks somehow
>>> weird. I think it's worth to do this with the cost of barrier.
>>>
>>
>> It seems there is race between zs_pool_dec_isolated and zs_unregister_migration if pool->destroying
>> is reordered before the atomic_long_dec and atomic_long_read ops. So this memory barrier is necessary:
>>
>> zs_pool_dec_isolated                            zs_unregister_migration
>>   pool->destroying != true
>>                                                   pool->destroying = true;
>>                                                   smp_mb();
>>                                                   wait_for_isolated_drain
>>                                                     wait_event with atomic_long_read(&pool->isolated_pages) != 0
>>   atomic_long_dec(&pool->isolated_pages);
>>   atomic_long_read(&pool->isolated_pages) == 0
> 
> I am not familiar with zsmalloc. So I do not know whether the race
> that you mentioned above exists. But If it exists, the fix also does
> not make sense to me. If there should be inserted a smp_mb between
> atomic_long_dec and atomic_long_read, you should insert
> smp_mb__after_atomic instead of using atomic_long_dec_and_test.
> Because smp_mb__after_atomic can be optimized on certain architecture
> (e.g. x86_64).
> 

Sorry for the delay.

I think there are two options:

atomic_long_dec(&pool->isolated_pages);
smp_mb__after_atomic();
atomic_long_read(&pool->isolated_pages) == 0

Here we have two atomic ops with one smp_mb().

vs

atomic_long_dec_and_test(), which implies smp_mb__before_atomic() + the atomic op + smp_mb__after_atomic().

Here we have one atomic op with two smp_mb()s.

I think either one works, but I prefer the latter. What do you think?

Thanks.

> Thanks.
> 
>>
>> Thus wake_up_all is missed.
>> And the comment in zs_pool_dec_isolated() said:
>> /*
>>  * There's no possibility of racing, since wait_for_isolated_drain()
>>  * checks the isolated count under &class->lock after enqueuing
>>  * on migration_wait.
>>  */
>>
>> But I found &class->lock is indeed not acquired for wait_for_isolated_drain(). So I think the above race
>> is possible. Does this make senses for you ?
>> Thanks.
>>
>>>>
>>>>>
>>>>>> That RMW operations that have a return value is equal to the following.
>>>>>>
>>>>>> smp_mb__before_atomic()
>>>>>> non-RMW operations or RMW operations that have no return value
>>>>>> smp_mb__after_atomic()
>>>>>>
>>>>>> Thanks.
>>>>>>
>>>>>>>
>>>>>>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>>>>>>> ---
>>>>>>>  mm/zsmalloc.c | 3 +--
>>>>>>>  1 file changed, 1 insertion(+), 2 deletions(-)
>>>>>>>
>>>>>>> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
>>>>>>> index 1476289b619f..0b4b23740d78 100644
>>>>>>> --- a/mm/zsmalloc.c
>>>>>>> +++ b/mm/zsmalloc.c
>>>>>>> @@ -1828,13 +1828,12 @@ static void putback_zspage_deferred(struct zs_pool *pool,
>>>>>>>  static inline void zs_pool_dec_isolated(struct zs_pool *pool)
>>>>>>>  {
>>>>>>>         VM_BUG_ON(atomic_long_read(&pool->isolated_pages) <= 0);
>>>>>>> -       atomic_long_dec(&pool->isolated_pages);
>>>>>>>         /*
>>>>>>>          * There's no possibility of racing, since wait_for_isolated_drain()
>>>>>>>          * checks the isolated count under &class->lock after enqueuing
>>>>>>>          * on migration_wait.
>>>>>>>          */
>>>>>>> -       if (atomic_long_read(&pool->isolated_pages) == 0 && pool->destroying)
>>>>>>> +       if (atomic_long_dec_and_test(&pool->isolated_pages) && pool->destroying)
>>>>>>>                 wake_up_all(&pool->migration_wait);
>>>>>>>  }
>>>>>>>
>>>>>>> --
>>>>>>> 2.23.0
>>>>>>>
>>>>>> .
>>>>>>
>>>>>
>>>> .
>>>>
>>>
>>
> .
>

Patch

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 1476289b619f..0b4b23740d78 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1828,13 +1828,12 @@  static void putback_zspage_deferred(struct zs_pool *pool,
 static inline void zs_pool_dec_isolated(struct zs_pool *pool)
 {
 	VM_BUG_ON(atomic_long_read(&pool->isolated_pages) <= 0);
-	atomic_long_dec(&pool->isolated_pages);
 	/*
 	 * There's no possibility of racing, since wait_for_isolated_drain()
 	 * checks the isolated count under &class->lock after enqueuing
 	 * on migration_wait.
 	 */
-	if (atomic_long_read(&pool->isolated_pages) == 0 && pool->destroying)
+	if (atomic_long_dec_and_test(&pool->isolated_pages) && pool->destroying)
 		wake_up_all(&pool->migration_wait);
 }