
[v3,2/3] mm: zswap: add zswap_never_enabled()

Message ID 20240611024516.1375191-2-yosryahmed@google.com (mailing list archive)
State New
Series [v3,1/3] mm: zswap: rename is_zswap_enabled() to zswap_is_enabled()

Commit Message

Yosry Ahmed June 11, 2024, 2:45 a.m. UTC
Add zswap_never_enabled() to skip the xarray lookup in zswap_load() if
zswap was never enabled on the system. It is implemented using static
branches for efficiency, as enabling zswap should be a rare event. This
could shave some cycles off zswap_load() when CONFIG_ZSWAP is used but
zswap is never enabled.

However, the real motivation behind this patch is two-fold:
- Incoming large folio swapin work will need to fall back to order-0
  folios if zswap was ever enabled, because any part of the folio could
  be in zswap, until proper handling of large folios with zswap is
  added.

- A warning and recovery attempt will be added in a following change in
  case the above is not done correctly. Zswap will fail the read if
  the folio is large and zswap was ever enabled.

Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
---
 mm/zswap.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

Comments

Nhat Pham June 11, 2024, 4:32 p.m. UTC | #1
On Mon, Jun 10, 2024 at 7:45 PM Yosry Ahmed <yosryahmed@google.com> wrote:
>
> [...]

This LGTM.
Reviewed-by: Nhat Pham <nphamcs@gmail.com>
Barry Song June 11, 2024, 9:53 p.m. UTC | #2
On Tue, Jun 11, 2024 at 2:45 PM Yosry Ahmed <yosryahmed@google.com> wrote:
>
> [...]
> +static bool zswap_never_enabled(void)
> +{
> +       return !static_branch_maybe(CONFIG_ZSWAP_DEFAULT_ON, &zswap_ever_enabled);
> +}

Will we "extern" this one so that mm-core can use it to fall back
to small folios?
Or do you prefer this to be done within the coming swapin series?

> [...]

Thanks
Barry
Barry Song June 11, 2024, 10:19 p.m. UTC | #3
On Wed, Jun 12, 2024 at 9:55 AM Yosry Ahmed <yosryahmed@google.com> wrote:
>
> On Tue, Jun 11, 2024 at 2:53 PM Barry Song <21cnbao@gmail.com> wrote:
> >
> > On Tue, Jun 11, 2024 at 2:45 PM Yosry Ahmed <yosryahmed@google.com> wrote:
> > >
> > > [...]
> > >
> > > +static bool zswap_never_enabled(void)
> > > +{
> > > +       return !static_branch_maybe(CONFIG_ZSWAP_DEFAULT_ON, &zswap_ever_enabled);
> > > +}
> >
> > Will we "extern" this one so that mm-core can use it to fall back
> > to small folios?
> > Or do you prefer this to be done within the coming swapin series?
>
> My intention was to keep it static for now, and expose it in the
> header when needed (in the swapin series). If others think it's better
> to do this now to avoid the churn I am happy to do it as well.

Personally, I'd vote for exposing it now to avoid one more patch which might
come shortly. And this patchset serves the clear purpose of drawing mm-core's
attention to the need to fall back to small folios.

Thanks
Barry
Yosry Ahmed June 11, 2024, 11:37 p.m. UTC | #4
On Wed, Jun 12, 2024 at 10:19:58AM +1200, Barry Song wrote:
> On Wed, Jun 12, 2024 at 9:55 AM Yosry Ahmed <yosryahmed@google.com> wrote:
> >
> > On Tue, Jun 11, 2024 at 2:53 PM Barry Song <21cnbao@gmail.com> wrote:
> > >
> > > On Tue, Jun 11, 2024 at 2:45 PM Yosry Ahmed <yosryahmed@google.com> wrote:
> > > >
> > > > [...]
> > > >
> > > > +static bool zswap_never_enabled(void)
> > > > +{
> > > > +       return !static_branch_maybe(CONFIG_ZSWAP_DEFAULT_ON, &zswap_ever_enabled);
> > > > +}
> > >
> > > Will we "extern" this one so that mm-core can use it to fall back
> > > to small folios?
> > > Or do you prefer this to be done within the coming swapin series?
> >
> > My intention was to keep it static for now, and expose it in the
> > header when needed (in the swapin series). If others think it's better
> > to do this now to avoid the churn I am happy to do it as well.
> 
> Personally, I'd vote for exposing it now to avoid one more patch which might
> come shortly. And this patchset serves the clear purpose of drawing mm-core's
> attention to the need to fall back to small folios.

Sure. Andrew, unless anyone objects, could you please squash the
following diff and add the following sentence to the commit log:

"Expose zswap_never_enabled() in the header for the swapin work to use
it later."

diff --git a/include/linux/zswap.h b/include/linux/zswap.h
index ce5e7bfe8f1ec..bf83ae5e285d4 100644
--- a/include/linux/zswap.h
+++ b/include/linux/zswap.h
@@ -36,6 +36,7 @@ void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg);
 void zswap_lruvec_state_init(struct lruvec *lruvec);
 void zswap_folio_swapin(struct folio *folio);
 bool zswap_is_enabled(void);
+bool zswap_never_enabled(void);
 #else
 
 struct zswap_lruvec_state {};
@@ -65,6 +66,11 @@ static inline bool zswap_is_enabled(void)
 	return false;
 }
 
+static inline bool zswap_never_enabled(void)
+{
+	return false;
+}
+
 #endif
 
 #endif /* _LINUX_ZSWAP_H */
diff --git a/mm/zswap.c b/mm/zswap.c
index 505f4b9812891..a546c01602aaf 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -137,7 +137,7 @@ bool zswap_is_enabled(void)
 	return zswap_enabled;
 }
 
-static bool zswap_never_enabled(void)
+bool zswap_never_enabled(void)
 {
 	return !static_branch_maybe(CONFIG_ZSWAP_DEFAULT_ON, &zswap_ever_enabled);
 }

> 
> Thanks
> Barry

Patch

diff --git a/mm/zswap.c b/mm/zswap.c
index a8c8dd8cfe6f5..7fcd751e847d6 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -83,6 +83,7 @@ static bool zswap_pool_reached_full;
 static int zswap_setup(void);
 
 /* Enable/disable zswap */
+static DEFINE_STATIC_KEY_MAYBE(CONFIG_ZSWAP_DEFAULT_ON, zswap_ever_enabled);
 static bool zswap_enabled = IS_ENABLED(CONFIG_ZSWAP_DEFAULT_ON);
 static int zswap_enabled_param_set(const char *,
 				   const struct kernel_param *);
@@ -136,6 +137,11 @@ bool zswap_is_enabled(void)
 	return zswap_enabled;
 }
 
+static bool zswap_never_enabled(void)
+{
+	return !static_branch_maybe(CONFIG_ZSWAP_DEFAULT_ON, &zswap_ever_enabled);
+}
+
 /*********************************
 * data structures
 **********************************/
@@ -1557,6 +1563,9 @@ bool zswap_load(struct folio *folio)
 
 	VM_WARN_ON_ONCE(!folio_test_locked(folio));
 
+	if (zswap_never_enabled())
+		return false;
+
 	/*
 	 * When reading into the swapcache, invalidate our entry. The
 	 * swapcache can be the authoritative owner of the page and
@@ -1735,6 +1744,7 @@ static int zswap_setup(void)
 			zpool_get_type(pool->zpools[0]));
 		list_add(&pool->list, &zswap_pools);
 		zswap_has_pool = true;
+		static_branch_enable(&zswap_ever_enabled);
 	} else {
 		pr_err("pool creation failed\n");
 		zswap_enabled = false;