
[PATCHv2,1/6] zsmalloc: remove insert_zspage() ->inuse optimization

Message ID 20230223030451.543162-2-senozhatsky@chromium.org
State New
Series zsmalloc: fine-grained fullness and new compaction algorithm

Commit Message

Sergey Senozhatsky Feb. 23, 2023, 3:04 a.m. UTC
This optimization has no effect. It only ensured that,
when a page was added to its corresponding fullness
list, it was placed before or after the list head
depending on whether its "inuse" counter was higher
or lower than the head's "inuse" counter.
The intention was to keep busy pages at the head, so
they could be filled up and moved to the ZS_FULL
fullness group more quickly. However, this doesn't work
as the "inuse" counter of a page can be modified by
obj_free() but the page may still belong to the same
fullness list. So, fix_fullness_group() won't change
the page's position in relation to the head's "inuse"
counter, leading to a largely random order of pages
within the fullness list.

For instance, consider a printout of the "inuse"
counters of the first few pages in a class that holds
93 objects per zspage:

 ZS_ALMOST_EMPTY:  36  67  68  64  35  54  63  52

As we can see, a page with one of the lowest "inuse"
counters is actually at the head of the fullness list.
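
To illustrate, here is fix_fullness_group() as it looks at
the base of this series (lightly abridged sketch; shown for
context, not modified by this patch). It returns early
whenever the page stays in its current fullness group, so
the page's position within the list is never revisited:

static enum fullness_group fix_fullness_group(struct size_class *class,
					      struct zspage *zspage)
{
	int class_idx;
	enum fullness_group currfg, newfg;

	get_zspage_mapping(zspage, &class_idx, &currfg);
	newfg = get_fullness_group(class, zspage);
	if (newfg == currfg)
		goto out;	/* same group: list position stays as-is */

	remove_zspage(class, zspage, currfg);
	insert_zspage(class, zspage, newfg);
	set_zspage_mapping(zspage, class_idx, newfg);
out:
	return newfg;
}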

Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
---
 mm/zsmalloc.c | 29 ++++++++---------------------
 1 file changed, 8 insertions(+), 21 deletions(-)

Comments

Minchan Kim Feb. 23, 2023, 11:09 p.m. UTC | #1
On Thu, Feb 23, 2023 at 12:04:46PM +0900, Sergey Senozhatsky wrote:
> This optimization has no effect. It only ensured that,
> when a page was added to its corresponding fullness
> list, it was placed before or after the list head
> depending on whether its "inuse" counter was higher
> or lower than the head's "inuse" counter.
> The intention was to keep busy pages at the head, so
> they could be filled up and moved to the ZS_FULL
> fullness group more quickly. However, this doesn't work
> as the "inuse" counter of a page can be modified by

                              zspage

Let's use the term zspage instead of page to avoid confusion.

> obj_free() but the page may still belong to the same
> fullness list. So, fix_fullness_group() won't change

Yes. I didn't expect it to be perfect from the beginning,
but thought it would help as a small optimization.

> the page's position in relation to the head's "inuse"
> counter, leading to a largely random order of pages
> within the fullness list.

Good point.
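
E.g. on the free path the counter is updated first, and only
then is the fullness group looked at (abridged sketch of the
relevant zs_free() steps at the base of this series):

	obj_free(class->size, obj, NULL);
	class_stat_dec(class, OBJ_USED, 1);
	/*
	 * ->inuse was just decremented; if the zspage is still
	 * in the same fullness group, fix_fullness_group() is
	 * a no-op for its list position.
	 */
	fullness = fix_fullness_group(class, zspage);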

> 
> For instance, consider a printout of the "inuse"
> counters of the first few pages in a class that holds
> 93 objects per zspage:
> 
>  ZS_ALMOST_EMPTY:  36  67  68  64  35  54  63  52
> 
> As we can see, a page with one of the lowest "inuse"
> counters is actually at the head of the fullness list.

Let's write clearly what the patch is doing:

"So, let's remove the pointless optimization" or some better wording.

> 
> Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
> ---
>  mm/zsmalloc.c | 29 ++++++++---------------------
>  1 file changed, 8 insertions(+), 21 deletions(-)
> 
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 3aed46ab7e6c..b57a89ed6f30 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -753,37 +753,24 @@ static enum fullness_group get_fullness_group(struct size_class *class,
>  }
>  
>  /*
> - * Each size class maintains various freelists and zspages are assigned
> - * to one of these freelists based on the number of live objects they
> - * have. This functions inserts the given zspage into the freelist
> - * identified by <class, fullness_group>.
> + * This function adds the given zspage to the fullness list identified
> + * by <class, fullness_group>.
>   */
>  static void insert_zspage(struct size_class *class,
> -				struct zspage *zspage,
> -				enum fullness_group fullness)
> +			  struct zspage *zspage,
> +			  enum fullness_group fullness)

Unnecessary changes

>  {
> -	struct zspage *head;
> -
>  	class_stat_inc(class, fullness, 1);
> -	head = list_first_entry_or_null(&class->fullness_list[fullness],
> -					struct zspage, list);
> -	/*
> -	 * We want to see more ZS_FULL pages and less almost empty/full.
> -	 * Put pages with higher ->inuse first.
> -	 */
> -	if (head && get_zspage_inuse(zspage) < get_zspage_inuse(head))
> -		list_add(&zspage->list, &head->list);
> -	else
> -		list_add(&zspage->list, &class->fullness_list[fullness]);
> +	list_add(&zspage->list, &class->fullness_list[fullness]);
>  }
>  
>  /*
> - * This function removes the given zspage from the freelist identified
> + * This function removes the given zspage from the fullness list identified
>   * by <class, fullness_group>.
>   */
>  static void remove_zspage(struct size_class *class,
> -				struct zspage *zspage,
> -				enum fullness_group fullness)
> +			  struct zspage *zspage,
> +			  enum fullness_group fullness)

Ditto.

Other than that, looks good to me.

Sergey Senozhatsky Feb. 26, 2023, 4:40 a.m. UTC | #2
On (23/02/23 15:09), Minchan Kim wrote:
> 
> On Thu, Feb 23, 2023 at 12:04:46PM +0900, Sergey Senozhatsky wrote:
> > This optimization has no effect. It only ensured that,
> > when a page was added to its corresponding fullness
> > list, it was placed before or after the list head
> > depending on whether its "inuse" counter was higher
> > or lower than the head's "inuse" counter.
> > The intention was to keep busy pages at the head, so
> > they could be filled up and moved to the ZS_FULL
> > fullness group more quickly. However, this doesn't work
> > as the "inuse" counter of a page can be modified by
> 
>                               zspage
> 
> Let's use the term zspage instead of page to avoid confusion.
> 
> > obj_free() but the page may still belong to the same
> > fullness list. So, fix_fullness_group() won't change
> 
> Yes. I didn't expect it to be perfect from the beginning,
> but thought it would help as a small optimization.
> 
> > the page's position in relation to the head's "inuse"
> > counter, leading to a largely random order of pages
> > within the fullness list.
> 
> Good point.
> 
> > 
> > For instance, consider a printout of the "inuse"
> > counters of the first few pages in a class that holds
> > 93 objects per zspage:
> > 
> >  ZS_ALMOST_EMPTY:  36  67  68  64  35  54  63  52
> > 
> > As we can see, a page with one of the lowest "inuse"
> > counters is actually at the head of the fullness list.
> 
> Let's write clearly what the patch is doing:
> 
> "So, let's remove the pointless optimization" or some better wording.

ACK to all feedback (for all the patches). Thanks!

Patch

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 3aed46ab7e6c..b57a89ed6f30 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -753,37 +753,24 @@ static enum fullness_group get_fullness_group(struct size_class *class,
 }
 
 /*
- * Each size class maintains various freelists and zspages are assigned
- * to one of these freelists based on the number of live objects they
- * have. This functions inserts the given zspage into the freelist
- * identified by <class, fullness_group>.
+ * This function adds the given zspage to the fullness list identified
+ * by <class, fullness_group>.
  */
 static void insert_zspage(struct size_class *class,
-				struct zspage *zspage,
-				enum fullness_group fullness)
+			  struct zspage *zspage,
+			  enum fullness_group fullness)
 {
-	struct zspage *head;
-
 	class_stat_inc(class, fullness, 1);
-	head = list_first_entry_or_null(&class->fullness_list[fullness],
-					struct zspage, list);
-	/*
-	 * We want to see more ZS_FULL pages and less almost empty/full.
-	 * Put pages with higher ->inuse first.
-	 */
-	if (head && get_zspage_inuse(zspage) < get_zspage_inuse(head))
-		list_add(&zspage->list, &head->list);
-	else
-		list_add(&zspage->list, &class->fullness_list[fullness]);
+	list_add(&zspage->list, &class->fullness_list[fullness]);
 }
 
 /*
- * This function removes the given zspage from the freelist identified
+ * This function removes the given zspage from the fullness list identified
  * by <class, fullness_group>.
  */
 static void remove_zspage(struct size_class *class,
-				struct zspage *zspage,
-				enum fullness_group fullness)
+			  struct zspage *zspage,
+			  enum fullness_group fullness)
 {
 	VM_BUG_ON(list_empty(&class->fullness_list[fullness]));