
[v3,3/5] zsmalloc: Add a LRU to zs_pool to keep track of zspages in LRU order

Message ID: 20221108193207.3297327-4-nphamcs@gmail.com
State: New
Series: Implement writeback for zsmalloc

Commit Message

Nhat Pham Nov. 8, 2022, 7:32 p.m. UTC
This helps determine the coldest zspages as candidates for writeback.

Signed-off-by: Nhat Pham <nphamcs@gmail.com>
---
 mm/zsmalloc.c | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

--
2.30.2
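
The idea is that zspages near the tail of pool->lru are the coldest, so the
writeback path added later in this series can reclaim from the tail. A
minimal sketch of such a consumer, assuming only the fields this patch
introduces (the helper name below is hypothetical, not part of the series):

/* Sketch: find the coldest zspage; pool->lock must be held */
static struct zspage *zs_pool_coldest(struct zs_pool *pool)
{
	assert_spin_locked(&pool->lock);

	if (list_empty(&pool->lru))
		return NULL;

	/* tail of the LRU list is the least recently used zspage */
	return list_last_entry(&pool->lru, struct zspage, lru);
}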

Comments

Minchan Kim Nov. 9, 2022, 9:55 p.m. UTC | #1
On Tue, Nov 08, 2022 at 11:32:05AM -0800, Nhat Pham wrote:
> This helps determine the coldest zspages as candidates for writeback.
> 
> Signed-off-by: Nhat Pham <nphamcs@gmail.com>
> ---
>  mm/zsmalloc.c | 27 +++++++++++++++++++++++++++
>  1 file changed, 27 insertions(+)
> 
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 326faa751f0a..600c40121544 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -239,6 +239,9 @@ struct zs_pool {
>  	/* Compact classes */
>  	struct shrinker shrinker;
> 
> +	/* List tracking the zspages in LRU order by most recently added object */
> +	struct list_head lru;
> +
>  #ifdef CONFIG_ZSMALLOC_STAT
>  	struct dentry *stat_dentry;
>  #endif
> @@ -260,6 +263,10 @@ struct zspage {
>  	unsigned int freeobj;
>  	struct page *first_page;
>  	struct list_head list; /* fullness list */
> +
> +	/* links the zspage to the lru list in the pool */
> +	struct list_head lru;

Please put the LRU logic under config ZSMALLOC_LRU, since other users
don't need the additional logic.
Nhat Pham Nov. 10, 2022, 5:18 p.m. UTC | #2
> Please put the LRU logic under config ZSMALLOC_LRU, since other users
> don't need the additional logic.

I think the existing CONFIG_ZPOOL would be a good option for this purpose. It
would disable the LRU behavior for non-zswap use cases (e.g. zram). The
eviction logic is also currently defined under this config. What do you think,
Minchan?
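
Concretely, guarding the new helper with CONFIG_ZPOOL could look something
like the sketch below (illustrative only, not the posted patch); the
list_head fields in struct zspage and struct zs_pool would get the same
guard:

#ifdef CONFIG_ZPOOL
/* Moves the zspage to the front of the zspool's LRU */
static void move_to_front(struct zs_pool *pool, struct zspage *zspage)
{
	assert_spin_locked(&pool->lock);

	if (!list_empty(&zspage->lru))
		list_del(&zspage->lru);
	list_add(&zspage->lru, &pool->lru);
}
#else
static inline void move_to_front(struct zs_pool *pool, struct zspage *zspage)
{
}
#endif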
Minchan Kim Nov. 10, 2022, 10:48 p.m. UTC | #3
On Thu, Nov 10, 2022 at 09:18:31AM -0800, Nhat Pham wrote:
> > Please put the LRU logic under config ZSMALLOC_LRU, since other users
> > don't need the additional logic.
> 
> I think the existing CONFIG_ZPOOL would be a good option for this purpose. It
> would disable the LRU behavior for non-zswap use cases (e.g. zram). The
> eviction logic is also currently defined under this config. What do you think,
> Minchan?

That sounds good.

Sergey and I are working on changing the zsmalloc zspage size.
https://lore.kernel.org/linux-mm/20221031054108.541190-1-senozhatsky@chromium.org/

Could you send a new version once those changes settle down in
Andrew's tree, to minimize conflicts?
(Feel free to join the review/discussion if you are also interested ;-))

Thanks.
Johannes Weiner Nov. 16, 2022, 10:02 p.m. UTC | #4
On Thu, Nov 10, 2022 at 02:48:32PM -0800, Minchan Kim wrote:
> On Thu, Nov 10, 2022 at 09:18:31AM -0800, Nhat Pham wrote:
> > > Please put the LRU logic under config ZSMALLOC_LRU, since other users
> > > don't need the additional logic.
> > 
> > I think the existing CONFIG_ZPOOL would be a good option for this purpose. It
> > would disable the LRU behavior for non-zswap use cases (e.g. zram). The
> > eviction logic is also currently defined under this config. What do you think,
> > Minchan?
> 
> That sounds good.
> 
> Sergey and I are working on changing the zsmalloc zspage size.
> https://lore.kernel.org/linux-mm/20221031054108.541190-1-senozhatsky@chromium.org/
> 
> Could you send a new version once those changes settle down in
> Andrew's tree, to minimize conflicts?
> (Feel free to join the review/discussion if you are also interested ;-))

I've been reading through that thread, and it doesn't look like it'll
be ready for the upcoming merge window. (I've tried to contribute
something useful to it, but it's a fairly difficult tuning problem,
and I don't know if a sysfs knob is the best answer, either...)

Would you have any objections to putting Nhat's patches here into 6.2?

It doesn't sound like there was any more feedback (except the trivial
ifdef around the LRU), and the patches are otherwise ready to go.
Minchan Kim Nov. 16, 2022, 11:49 p.m. UTC | #5
On Wed, Nov 16, 2022 at 05:02:57PM -0500, Johannes Weiner wrote:
> On Thu, Nov 10, 2022 at 02:48:32PM -0800, Minchan Kim wrote:
> > On Thu, Nov 10, 2022 at 09:18:31AM -0800, Nhat Pham wrote:
> > > > Please put the LRU logic under config ZSMALLOC_LRU, since other users
> > > > don't need the additional logic.
> > > 
> > > I think the existing CONFIG_ZPOOL would be a good option for this purpose. It
> > > would disable the LRU behavior for non-zswap use cases (e.g. zram). The
> > > eviction logic is also currently defined under this config. What do you think,
> > > Minchan?
> > 
> > That sounds good.
> > 
> > Sergey and I are working on changing the zsmalloc zspage size.
> > https://lore.kernel.org/linux-mm/20221031054108.541190-1-senozhatsky@chromium.org/
> > 
> > Could you send a new version once those changes settle down in
> > Andrew's tree, to minimize conflicts?
> > (Feel free to join the review/discussion if you are also interested ;-))
> 
> I've been reading through that thread, and it doesn't look like it'll
> be ready for the upcoming merge window. (I've tried to contribute

That depends on how the discussion goes :)

> something useful to it, but it's a fairly difficult tuning problem,
> and I don't know if a sysfs knob is the best answer, either...)

That's the point.

> 
> Would you have any objections to putting Nhat's patches here into 6.2?

I don't want to block this due to other issues, so no objection from my side.

> 
> It doesn't sound like there was any more feedback (except the trivial
> ifdef around the LRU), and the patches are otherwise ready to go.

In fact, I haven't started the review yet, so please post it unless
Sergey objects.

Thank you.
Sergey Senozhatsky Nov. 17, 2022, 12:37 a.m. UTC | #6
On (22/11/16 15:49), Minchan Kim wrote:
> > It doesn't sound like there was any more feedback (except the trivial
> > ifdef around the LRU), and the patches are otherwise ready to go.
> 
> In fact, I haven't started the review yet, so please post it unless
> Sergey objects.

I haven't started the review yet, but if you have a new version
we can take a look.

Patch

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 326faa751f0a..600c40121544 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -239,6 +239,9 @@ struct zs_pool {
 	/* Compact classes */
 	struct shrinker shrinker;

+	/* List tracking the zspages in LRU order by most recently added object */
+	struct list_head lru;
+
 #ifdef CONFIG_ZSMALLOC_STAT
 	struct dentry *stat_dentry;
 #endif
@@ -260,6 +263,10 @@ struct zspage {
 	unsigned int freeobj;
 	struct page *first_page;
 	struct list_head list; /* fullness list */
+
+	/* links the zspage to the lru list in the pool */
+	struct list_head lru;
+
 	struct zs_pool *pool;
 #ifdef CONFIG_COMPACTION
 	rwlock_t lock;
@@ -352,6 +359,16 @@ static void cache_free_zspage(struct zs_pool *pool, struct zspage *zspage)
 	kmem_cache_free(pool->zspage_cachep, zspage);
 }

+/* Moves the zspage to the front of the zspool's LRU */
+static void move_to_front(struct zs_pool *pool, struct zspage *zspage)
+{
+	assert_spin_locked(&pool->lock);
+
+	if (!list_empty(&zspage->lru))
+		list_del(&zspage->lru);
+	list_add(&zspage->lru, &pool->lru);
+}
+
 /* pool->lock(which owns the handle) synchronizes races */
 static void record_obj(unsigned long handle, unsigned long obj)
 {
@@ -953,6 +970,7 @@ static void free_zspage(struct zs_pool *pool, struct size_class *class,
 	}

 	remove_zspage(class, zspage, ZS_EMPTY);
+	list_del(&zspage->lru);
 	__free_zspage(pool, class, zspage);
 }

@@ -998,6 +1016,8 @@ static void init_zspage(struct size_class *class, struct zspage *zspage)
 		off %= PAGE_SIZE;
 	}

+	INIT_LIST_HEAD(&zspage->lru);
+
 	set_freeobj(zspage, 0);
 }

@@ -1418,6 +1438,8 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
 		fix_fullness_group(class, zspage);
 		record_obj(handle, obj);
 		class_stat_inc(class, OBJ_USED, 1);
+		/* Move the zspage to front of pool's LRU */
+		move_to_front(pool, zspage);
 		spin_unlock(&pool->lock);

 		return handle;
@@ -1444,6 +1466,8 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)

 	/* We completely set up zspage so mark them as movable */
 	SetZsPageMovable(pool, zspage);
+	/* Move the zspage to front of pool's LRU */
+	move_to_front(pool, zspage);
 	spin_unlock(&pool->lock);

 	return handle;
@@ -1967,6 +1991,7 @@ static void async_free_zspage(struct work_struct *work)
 		VM_BUG_ON(fullness != ZS_EMPTY);
 		class = pool->size_class[class_idx];
 		spin_lock(&pool->lock);
+		list_del(&zspage->lru);
 		__free_zspage(pool, class, zspage);
 		spin_unlock(&pool->lock);
 	}
@@ -2278,6 +2303,8 @@ struct zs_pool *zs_create_pool(const char *name)
 	 */
 	zs_register_shrinker(pool);

+	INIT_LIST_HEAD(&pool->lru);
+
 	return pool;

 err:
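
For reference, the lifecycle of the new list from a caller's point of view:
zs_create_pool() initializes pool->lru, each zs_malloc() moves the backing
zspage to the front, and the free paths unlink a zspage once it is emptied.
A hypothetical caller exercising those paths:

	struct zs_pool *pool = zs_create_pool("example");
	unsigned long handle;

	/* sketch only: error handling elided */
	handle = zs_malloc(pool, 64, GFP_KERNEL);
	/* the zspage backing 'handle' is now at the front of pool->lru */

	zs_free(pool, handle);	/* empty zspages get unlinked on free */
	zs_destroy_pool(pool);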