
zsmalloc: Fix races between modifications of fullness and isolated

Message ID 20230721063705.11455-1-andrew.yang@mediatek.com (mailing list archive)
State New, archived
Series zsmalloc: Fix races between modifications of fullness and isolated

Commit Message

Andrew Yang July 21, 2023, 6:37 a.m. UTC
Since fullness and isolated share the same unsigned int,
modifications of them should be protected by the same lock.

Signed-off-by: Andrew Yang <andrew.yang@mediatek.com>
Fixes: c4549b871102 ("zsmalloc: remove zspage isolation for migration")
---
 mm/zsmalloc.c | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)
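
For illustration, a minimal C sketch of the race the commit message describes, assuming a simplified zspage-like layout; the field names follow mm/zsmalloc.c, but the widths and helpers below are illustrative, not the kernel's actual definitions:

struct zspage_like {
	unsigned int fullness:4;	/* normally modified under pool->lock */
	unsigned int isolated:3;	/* previously modified under the migrate lock */
};

/*
 * Both helpers below are read-modify-write operations on the same
 * underlying word.  Run concurrently under different locks, one store
 * can overwrite the other's update (a lost update):
 *
 *	CPU0 (pool->lock)		CPU1 (migrate lock)
 *	load word			load word
 *	update fullness bits		update isolated bits
 *	store word			store word   <- fullness change lost
 */
static void set_fullness(struct zspage_like *z, unsigned int f)
{
	z->fullness = f;	/* rewrites the whole shared word */
}

static void inc_isolated(struct zspage_like *z)
{
	z->isolated++;		/* rewrites the same shared word */
}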

Comments

Sergey Senozhatsky July 26, 2023, 2:31 a.m. UTC | #1
On (23/07/21 14:37), Andrew Yang wrote:
> 
> Since fullness and isolated share the same unsigned int,
> modifications of them should be protected by the same lock.

Sorry, I don't think I follow. Can you please elaborate?
What is fullness in this context? What is the race condition
exactly? Can I please have something like

	CPU0		CPU1

	foo		bar
Sergey Senozhatsky July 26, 2023, 2:57 a.m. UTC | #2
On (23/07/26 11:31), Sergey Senozhatsky wrote:
> On (23/07/21 14:37), Andrew Yang wrote:
> > 
> > Since fullness and isolated share the same unsigned int,
> > modifications of them should be protected by the same lock.
> 
> Sorry, I don't think I follow. Can you please elaborate?
> What is fullness in this context?

Oh, my bad, so that's zspage's fullness:FULLNESS_BITS and
isolated:ISOLATED_BITS.  I somehow thought about something
very different (page isolated, not zspage isolated).
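
For reference, a paraphrased sketch of the zspage fields in question (not a verbatim copy of mm/zsmalloc.c):

struct zspage {
	struct {
		/* other bit-fields share this unsigned int */
		unsigned int fullness:FULLNESS_BITS;
		unsigned int isolated:ISOLATED_BITS;
	};
	/* ... */
	struct zs_pool *pool;	/* pool->lock is what the patch takes instead */
};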
Sergey Senozhatsky July 26, 2023, 3:18 a.m. UTC | #3
On (23/07/21 14:37), Andrew Yang wrote:
> 
> Since fullness and isolated share the same unsigned int,
> modifications of them should be protected by the same lock.
> 
> Signed-off-by: Andrew Yang <andrew.yang@mediatek.com>
> Fixes: c4549b871102 ("zsmalloc: remove zspage isolation for migration")

Have you observed issues in real life? That commit is more than a year
and a half old, so I wonder.

> @@ -1858,8 +1860,8 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
>  	 * Since we complete the data copy and set up new zspage structure,
>  	 * it's okay to release the pool's lock.
>  	 */

This comment should be moved too, because this is not where we unlock the
pool anymore.

> -	spin_unlock(&pool->lock);
>  	dec_zspage_isolation(zspage);
> +	spin_unlock(&pool->lock);
>  	migrate_write_unlock(zspage);
Andrew Yang July 26, 2023, 6:59 a.m. UTC | #4
On Wed, 2023-07-26 at 12:18 +0900, Sergey Senozhatsky wrote:
>  	 
>  On (23/07/21 14:37), Andrew Yang wrote:
> > 
> > Since fullness and isolated share the same unsigned int,
> > modifications of them should be protected by the same lock.
> > 
> > Signed-off-by: Andrew Yang <andrew.yang@mediatek.com>
> > Fixes: c4549b871102 ("zsmalloc: remove zspage isolation for migration")
> 
> Have you observed issues in real life? That commit is more than a year
> and a half old, so I wonder.
> 
Yes, we encountered many kernel exceptions of
VM_BUG_ON(zspage->isolated == 0) in dec_zspage_isolation() and
BUG_ON(!pages[1]) in zs_unmap_object() lately.
This issue only occurs when migration and reclamation occur at the
same time. With our memory stress test, we can reproduce this issue
several times a day. We have no idea why no one else encountered
this issue. BTW, we switched to the new kernel version with this
defect a few months ago.
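
One plausible way a lost bit-field update trips those checks (dec_zspage_isolation() paraphrased from mm/zsmalloc.c, not quoted verbatim): if an isolated++ done under the migrate lock is overwritten by a concurrent fullness write done under pool->lock, a later decrement sees zero:

	static void dec_zspage_isolation(struct zspage *zspage)
	{
		/* fires if the earlier increment was lost to the race */
		VM_BUG_ON(zspage->isolated == 0);
		zspage->isolated--;
	}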
> > @@ -1858,8 +1860,8 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
> >  	 * Since we complete the data copy and set up new zspage structure,
> >  	 * it's okay to release the pool's lock.
> >  	 */
> 
> This comment should be moved too, because this is not where we unlock the
> pool anymore.
> 
Okay, I will submit a new patch later.
> > -	spin_unlock(&pool->lock);
> >  	dec_zspage_isolation(zspage);
> > +	spin_unlock(&pool->lock);
> >  	migrate_write_unlock(zspage);
Sergey Senozhatsky July 26, 2023, 11:31 a.m. UTC | #5
On (23/07/26 06:59), Andrew Yang (楊智強) wrote:
> On Wed, 2023-07-26 at 12:18 +0900, Sergey Senozhatsky wrote:
> >
> >  On (23/07/21 14:37), Andrew Yang wrote:
> > >
> > > Since fullness and isolated share the same unsigned int,
> > > modifications of them should be protected by the same lock.
> > >
> > > Signed-off-by: Andrew Yang <andrew.yang@mediatek.com>
> > > Fixes: c4549b871102 ("zsmalloc: remove zspage isolation for migration")
> >
> > Have you observed issues in real life? That commit is more than a year
> > and a half old, so I wonder.
> >
> Yes, we encountered many kernel exceptions of
> VM_BUG_ON(zspage->isolated == 0) in dec_zspage_isolation() and
> BUG_ON(!pages[1]) in zs_unmap_object() lately.

Got it.

> This issue only occurs when migration and reclamation occur at the
> same time. With our memory stress test, we can reproduce this issue
> several times a day. We have no idea why no one else encountered
> this issue. BTW, we switched to the new kernel version with this
> defect a few months ago.

Yeah, pretty curious myself.

> > > @@ -1858,8 +1860,8 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
> > >  	 * Since we complete the data copy and set up new zspage structure,
> > >  	 * it's okay to release the pool's lock.
> > >  	 */
> >
> > This comment should be moved too, because this is not where we unlock the
> > pool anymore.
> >
> Okay, I will submit a new patch later.

Thank you!
Andrew Morton July 26, 2023, 8:18 p.m. UTC | #6
On Wed, 26 Jul 2023 06:59:20 +0000 Andrew Yang (楊智強) <Andrew.Yang@mediatek.com> wrote:

> > Have you observed issues in real life? That commit is more than a year
> > and a half old, so I wonder.
> > 
> Yes, we encountered many kernel exceptions of
> VM_BUG_ON(zspage->isolated == 0) in dec_zspage_isolation() and
> BUG_ON(!pages[1]) in zs_unmap_object() lately.
> This issue only occurs when migration and reclamation occur at the
> same time. With our memory stress test, we can reproduce this issue
> several times a day. We have no idea why no one else encountered
> this issue. BTW, we switched to the new kernel version with this
> defect a few months ago.

Ah.  It's important that such information be in the changelog!

I have put this info into my copy of the v1 patch's changelog.

I have moved the v1 patch from the mm-unstable branch into
mm-hotfixes-unstable, so it is staged for merging in this -rc cycle.

I have also added a cc:stable so that the fix gets backported into
kernels which contain c4549b871102.

I have added a note-to-self that a v2 patch is expected.

Patch

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 32f5bc4074df..b96230402a8d 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1777,6 +1777,7 @@  static void replace_sub_page(struct size_class *class, struct zspage *zspage,
 
 static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
 {
+	struct zs_pool *pool;
 	struct zspage *zspage;
 
 	/*
@@ -1786,9 +1787,10 @@  static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
 	VM_BUG_ON_PAGE(PageIsolated(page), page);
 
 	zspage = get_zspage(page);
-	migrate_write_lock(zspage);
+	pool = zspage->pool;
+	spin_lock(&pool->lock);
 	inc_zspage_isolation(zspage);
-	migrate_write_unlock(zspage);
+	spin_unlock(&pool->lock);
 
 	return true;
 }
@@ -1858,8 +1860,8 @@  static int zs_page_migrate(struct page *newpage, struct page *page,
 	 * Since we complete the data copy and set up new zspage structure,
 	 * it's okay to release the pool's lock.
 	 */
-	spin_unlock(&pool->lock);
 	dec_zspage_isolation(zspage);
+	spin_unlock(&pool->lock);
 	migrate_write_unlock(zspage);
 
 	get_page(newpage);
@@ -1876,14 +1878,16 @@  static int zs_page_migrate(struct page *newpage, struct page *page,
 
 static void zs_page_putback(struct page *page)
 {
+	struct zs_pool *pool;
 	struct zspage *zspage;
 
 	VM_BUG_ON_PAGE(!PageIsolated(page), page);
 
 	zspage = get_zspage(page);
-	migrate_write_lock(zspage);
+	pool = zspage->pool;
+	spin_lock(&pool->lock);
 	dec_zspage_isolation(zspage);
-	migrate_write_unlock(zspage);
+	spin_unlock(&pool->lock);
 }
 
 static const struct movable_operations zsmalloc_mops = {