
[v4,09/17] mm: workingset: use xas_set_lru() to pass shadow_nodes

Message ID 20211213165342.74704-10-songmuchun@bytedance.com (mailing list archive)
State New, archived
Series: Optimize list lru memory consumption

Commit Message

Muchun Song Dec. 13, 2021, 4:53 p.m. UTC
The workingset code will add the xa_node to shadow_nodes, so we should use
xas_set_lru() to pass the list_lru that the xa_node will be inserted
into, setting up the xa_node reclaim context correctly.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/swap.h | 5 ++++-
 mm/workingset.c      | 2 +-
 2 files changed, 5 insertions(+), 2 deletions(-)

Comments

Johannes Weiner Dec. 14, 2021, 2:09 p.m. UTC | #1
On Tue, Dec 14, 2021 at 12:53:34AM +0800, Muchun Song wrote:
> The workingset will add the xa_node to shadow_nodes, so we should use
> xas_set_lru() to pass the list_lru which we want to insert xa_node
> into to set up the xa_node reclaim context correctly.
> 
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>

Ah, you can't instantiate the list on-demand in list_lru_add() because
that's happening in an atomic context. So you need the lru available
in the broader xa update context and group the lru setup in with the
other pre-atomic node allocation bits. Fair enough. I think it would
be a bit easier to read if this patch and the previous one were
squashed (workingset is the only user of xa_lru anyway) and you added
that explanation. But other than that, the changes make sense to me;
to a combined patch, please add:

Acked-by: Johannes Weiner <hannes@cmpxchg.org>

Muchun Song Dec. 15, 2021, 12:36 p.m. UTC | #2
On Tue, Dec 14, 2021 at 10:09 PM Johannes Weiner <hannes@cmpxchg.org> wrote:
>
> On Tue, Dec 14, 2021 at 12:53:34AM +0800, Muchun Song wrote:
> > The workingset will add the xa_node to shadow_nodes, so we should use
> > xas_set_lru() to pass the list_lru which we want to insert xa_node
> > into to set up the xa_node reclaim context correctly.
> >
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
>
> Ah, you can't instantiate the list on-demand in list_lru_add() because
> that's happening in an atomic context. So you need the lru available
> in the broader xa update context and group the lru setup in with the
> other pre-atomic node allocation bits. Fair enough. I think it would
> be a bit easier to read if this patch and the previous one were
> squashed (workingset is the only user of xa_lru anyway) and you added
> that explanation. But other than that, the changes make sense to me;
> to a combined patch, please add:
>

Great. I'll do it. Thanks for your review.

> Acked-by: Johannes Weiner <hannes@cmpxchg.org>

Patch

diff --git a/include/linux/swap.h b/include/linux/swap.h
index d1ea44b31f19..1ae9d3473c02 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -334,9 +334,12 @@  void workingset_activation(struct folio *folio);
 
 /* Only track the nodes of mappings with shadow entries */
 void workingset_update_node(struct xa_node *node);
+extern struct list_lru shadow_nodes;
 #define mapping_set_update(xas, mapping) do {				\
-	if (!dax_mapping(mapping) && !shmem_mapping(mapping))		\
+	if (!dax_mapping(mapping) && !shmem_mapping(mapping)) {		\
 		xas_set_update(xas, workingset_update_node);		\
+		xas_set_lru(xas, &shadow_nodes);			\
+	}								\
 } while (0)
 
 /* linux/mm/page_alloc.c */
diff --git a/mm/workingset.c b/mm/workingset.c
index 8c03afe1d67c..979c7130c266 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -429,7 +429,7 @@  void workingset_activation(struct folio *folio)
  * point where they would still be useful.
  */
 
-static struct list_lru shadow_nodes;
+struct list_lru shadow_nodes;
 
 void workingset_update_node(struct xa_node *node)
 {