Message ID | 20241106192105.6731-13-kanchana.p.sridhar@intel.com |
---|---|
State | New |
Series | zswap IAA compress batching |
On Wed, 6 Nov 2024 11:21:04 -0800 Kanchana P Sridhar <kanchana.p.sridhar@intel.com> wrote:

> extern int sysctl_legacy_va_layout;
> +extern unsigned int compress_batching;

nit: I suggest calling this "sysctl_compress_batching". See how we
treated sysctl_legacy_va_layout.

> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -47,6 +47,9 @@
>  int page_cluster;
>  const int page_cluster_max = 31;
>
> +/* Enable/disable compress batching during swapout. */
> +unsigned int compress_batching;
> +
>  struct cpu_fbatches {
>  	/*
>  	 * The following folio batches are grouped together because they are protected
> @@ -1074,4 +1077,7 @@ void __init swap_setup(void)
>  	 * Right now other parts of the system means that we
>  	 * _really_ don't want to cluster much more
>  	 */
> +
> +	/* Disable compress batching during swapout by default. */
> +	compress_batching = 0;

Not really needed? The compiler already did that.

> }
Hi Andrew,

> -----Original Message-----
> From: Andrew Morton <akpm@linux-foundation.org>
> Sent: Wednesday, November 6, 2024 12:18 PM
> To: Sridhar, Kanchana P <kanchana.p.sridhar@intel.com>
> Cc: linux-kernel@vger.kernel.org; linux-mm@kvack.org;
> hannes@cmpxchg.org; yosryahmed@google.com; nphamcs@gmail.com;
> chengming.zhou@linux.dev; usamaarif642@gmail.com;
> ryan.roberts@arm.com; Huang, Ying <ying.huang@intel.com>;
> 21cnbao@gmail.com; linux-crypto@vger.kernel.org;
> herbert@gondor.apana.org.au; davem@davemloft.net;
> clabbe@baylibre.com; ardb@kernel.org; ebiggers@google.com;
> surenb@google.com; Accardi, Kristen C <kristen.c.accardi@intel.com>;
> zanussi@kernel.org; Feghali, Wajdi K <wajdi.k.feghali@intel.com>; Gopal,
> Vinodh <vinodh.gopal@intel.com>
> Subject: Re: [PATCH v3 12/13] mm: Add sysctl vm.compress-batching switch
> for compress batching during swapout.
>
> On Wed, 6 Nov 2024 11:21:04 -0800 Kanchana P Sridhar
> <kanchana.p.sridhar@intel.com> wrote:
>
> > extern int sysctl_legacy_va_layout;
> > +extern unsigned int compress_batching;
>
> nit: I suggest calling this "sysctl_compress_batching". See how we
> treated sysctl_legacy_va_layout.

Thanks for the code review comments. Sure, I will incorporate this in v4.

> > --- a/mm/swap.c
> > +++ b/mm/swap.c
> > @@ -47,6 +47,9 @@
> >  int page_cluster;
> >  const int page_cluster_max = 31;
> >
> > +/* Enable/disable compress batching during swapout. */
> > +unsigned int compress_batching;
> > +
> >  struct cpu_fbatches {
> >  	/*
> >  	 * The following folio batches are grouped together because they are protected
> > @@ -1074,4 +1077,7 @@ void __init swap_setup(void)
> >  	 * Right now other parts of the system means that we
> >  	 * _really_ don't want to cluster much more
> >  	 */
> > +
> > +	/* Disable compress batching during swapout by default. */
> > +	compress_batching = 0;
>
> Not really needed? The compiler already did that.

Sure, will address this in v4.

Thanks,
Kanchana

> > }
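For readers skimming the thread, a minimal sketch of what the rename agreed above might look like in include/linux/mm.h for v4, assuming it simply mirrors the existing sysctl_legacy_va_layout pattern; this is an illustration of the review comment, not the actual v4 patch:

/* Hypothetical v4 declaration, following the sysctl_legacy_va_layout pattern. */
#ifdef CONFIG_SYSCTL
extern unsigned int sysctl_compress_batching;
#else
#define sysctl_compress_batching 0
#endif

The ctl_table entry's .data field in kernel/sysctl.c would then presumably point at &sysctl_compress_batching instead of &compress_batching, with everything else unchanged.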
diff --git a/include/linux/mm.h b/include/linux/mm.h
index fecd47239fa9..f61915aa2f37 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -82,8 +82,10 @@ extern const int page_cluster_max;
 
 #ifdef CONFIG_SYSCTL
 extern int sysctl_legacy_va_layout;
+extern unsigned int compress_batching;
 #else
 #define sysctl_legacy_va_layout 0
+#define compress_batching 0
 #endif
 
 #ifdef CONFIG_HAVE_ARCH_MMAP_RND_BITS
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 79e6cb1d5c48..e298857595b4 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -2064,6 +2064,15 @@ static struct ctl_table vm_table[] = {
 		.extra1		= SYSCTL_ZERO,
 		.extra2		= (void *)&page_cluster_max,
 	},
+	{
+		.procname	= "compress-batching",
+		.data		= &compress_batching,
+		.maxlen		= sizeof(unsigned int),
+		.mode		= 0644,
+		.proc_handler	= proc_douintvec_minmax,
+		.extra1		= SYSCTL_ZERO,
+		.extra2		= SYSCTL_ONE,
+	},
 	{
 		.procname	= "dirtytime_expire_seconds",
 		.data		= &dirtytime_expire_interval,
diff --git a/mm/swap.c b/mm/swap.c
index 638a3f001676..bc4c9079769e 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -47,6 +47,9 @@
 int page_cluster;
 const int page_cluster_max = 31;
 
+/* Enable/disable compress batching during swapout. */
+unsigned int compress_batching;
+
 struct cpu_fbatches {
 	/*
 	 * The following folio batches are grouped together because they are protected
@@ -1074,4 +1077,7 @@ void __init swap_setup(void)
 	 * Right now other parts of the system means that we
 	 * _really_ don't want to cluster much more
 	 */
+
+	/* Disable compress batching during swapout by default. */
+	compress_batching = 0;
 }
The sysctl vm.compress-batching parameter is 0 by default. If the
platform has Intel IAA, the user can run experiments with IAA compress
batching of large folios in zswap_store() as follows:

  sysctl vm.compress-batching=1
  echo deflate-iaa > /sys/module/zswap/parameters/compressor

This is expected to significantly improve zswap_store() latency when
swapping out large folios, due to parallel hardware compression of the
large folio's pages, 8 at a time.

Setting vm.compress-batching to "1" takes effect only if the zswap
compression algorithm's crypto_acomp registers implementations for the
batch_compress() and batch_decompress() APIs. In other words, compress
batching works only with the iaa_crypto driver, which registers these
new batching APIs. It is a no-op for compressors that do not register
the batching APIs.

The sysctl vm.compress-batching acts as a switch: it takes effect on
future zswap_store() calls on any given core. If the switch is "1",
large folios will use parallel batched compression of the folio's
pages. If the switch is "0", zswap_store() will use sequential
compression for storing every page in a large folio.

Signed-off-by: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
---
 include/linux/mm.h | 2 ++
 kernel/sysctl.c    | 9 +++++++++
 mm/swap.c          | 6 ++++++
 3 files changed, 17 insertions(+)
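To make the switch semantics above concrete, here is a hedged sketch (not code from this series) of how zswap_store() could gate its batching path on the sysctl plus the compressor's capability. zswap_use_compress_batching() and acomp_has_batching() are hypothetical names used only for illustration, standing in for "the crypto_acomp registered the batch_compress()/batch_decompress() APIs"; the actual patches may structure this differently:

/*
 * Illustration only: consult the vm.compress-batching switch on each
 * zswap_store() call, and fall back to sequential per-page compression
 * when the compressor does not provide the batching APIs.
 */
static bool zswap_use_compress_batching(struct crypto_acomp *acomp)
{
	if (!READ_ONCE(compress_batching))
		return false;

	/* Hypothetical capability check for batch_compress()/batch_decompress(). */
	return acomp_has_batching(acomp);
}

Because the flag would be re-read on each store, flipping vm.compress-batching between 0 and 1 only affects future zswap_store() calls, which matches the commit message's description of the switch.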