
btrfs: wait between incomplete batch allocations

Message ID 07d6dbf34243b562287e953c44a70cbb6fca15a1.1649268923.git.sweettea-kernel@dorminy.me (mailing list archive)
State New, archived
Series: btrfs: wait between incomplete batch allocations

Commit Message

Sweet Tea Dorminy April 6, 2022, 6:24 p.m. UTC
When allocating memory in a loop, each iteration should call
memalloc_retry_wait() in order to prevent starving memory-freeing
processes (and to mark where allocation loops are). ext4, f2fs, and xfs
all currently use this function in their allocation loops; btrfs ought
to as well.

The bulk page allocation is the only place in btrfs with an allocation
retry loop, so add an appropriate call to it.

Suggested-by: David Sterba <dsterba@suse.cz>
Signed-off-by: Sweet Tea Dorminy <sweettea-kernel@dorminy.me>
---
 fs/btrfs/extent_io.c | 3 +++
 1 file changed, 3 insertions(+)
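
For reference, the pattern the commit message describes looks roughly like
the sketch below. This is an illustration only, not the btrfs code: the
function name example_bulk_alloc() is made up, and the loop is loosely
modeled on how the other filesystems combine the two (real) kernel helpers
alloc_pages_bulk_array() and memalloc_retry_wait().

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/sched/mm.h>

static int example_bulk_alloc(unsigned int nr_pages, struct page **pages)
{
	unsigned int allocated = 0;

	while (allocated < nr_pages) {
		unsigned int last = allocated;

		/* Fills any NULL slots; returns the total populated count. */
		allocated = alloc_pages_bulk_array(GFP_NOFS, nr_pages, pages);

		/* Done: exit before waiting. */
		if (allocated == nr_pages)
			break;

		/* No progress at all: give up instead of looping forever. */
		if (allocated == last)
			return -ENOMEM;

		/* Partial progress: back off so reclaim can run, then retry. */
		memalloc_retry_wait(GFP_NOFS);
	}
	return 0;
}

Note that in this sketch the completion check comes before the wait; the
discussion below turns on exactly that point.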

Comments

David Sterba April 7, 2022, 2:52 p.m. UTC | #1
On Wed, Apr 06, 2022 at 02:24:18PM -0400, Sweet Tea Dorminy wrote:
> When allocating memory in a loop, each iteration should call
> memalloc_retry_wait() in order to prevent starving memory-freeing
> processes (and to mark where allocation loops are). ext4, f2fs, and xfs
> all currently use this function in their allocation loops; btrfs ought
> to as well.
> 
> The bulk page allocation is the only place in btrfs with an allocation
> retry loop, so add an appropriate call to it.
> 
> Suggested-by: David Sterba <dsterba@suse.cz>
> Signed-off-by: Sweet Tea Dorminy <sweettea-kernel@dorminy.me>

Added to misc-next, thanks.
Naohiro Aota April 11, 2022, 7:11 a.m. UTC | #2
On Wed, Apr 06, 2022 at 02:24:18PM -0400, Sweet Tea Dorminy wrote:
> When allocating memory in a loop, each iteration should call
> memalloc_retry_wait() in order to prevent starving memory-freeing
> processes (and to mark where allocation loops are). ext4, f2fs, and xfs
> all currently use this function in their allocation loops; btrfs ought
> to as well.
> 
> The bulk page allocation is the only place in btrfs with an allocation
> retry loop, so add an appropriate call to it.
> 
> Suggested-by: David Sterba <dsterba@suse.cz>
> Signed-off-by: Sweet Tea Dorminy <sweettea-kernel@dorminy.me>

The fstests btrfs/187 becomes incredibly slow with this patch applied.

For example, on a nvme ZNS SSD (zoned) device, it takes over 10 hours to
finish the test case. It only takes 765 seconds if I revert this commit
from the misc-next branch.

I also confirmed the same slowdown occurs on regular btrfs. For the
baseline, with this commit reverted, the test takes 335 seconds on an 8GB
ZRAM device running on QEMU (8GB RAM), and 768 seconds on a (non-zoned)
HDD running on a real machine (128GB RAM). The tests on misc-next with the
same setups are still running, but they have already taken 2 hours.

The test case runs a full btrfs send 5 times and an incremental btrfs send
10 times at the same time. A dedupe loop and a balance loop also run
simultaneously until all the send commands finish.

The slowdown of the test case basically comes from the slow "btrfs send"
command. On the HDD run, it takes 25 minutes to run a full btrfs send and
1 hour 18 minutes to run an incremental btrfs send. Thus, we need
78 minutes x 5 = 6.5 hours to finish all the send commands, making the
test case incredibly slow.

> ---
>  fs/btrfs/extent_io.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
> index 9f2ada809dea..4bcc182744e4 100644
> --- a/fs/btrfs/extent_io.c
> +++ b/fs/btrfs/extent_io.c
> @@ -6,6 +6,7 @@
>  #include <linux/mm.h>
>  #include <linux/pagemap.h>
>  #include <linux/page-flags.h>
> +#include <linux/sched/mm.h>
>  #include <linux/spinlock.h>
>  #include <linux/blkdev.h>
>  #include <linux/swap.h>
> @@ -3159,6 +3160,8 @@ int btrfs_alloc_page_array(unsigned int nr_pages, struct page **page_array)
>  		 */
>  		if (allocated == last)
>  			return -ENOMEM;
> +
> +		memalloc_retry_wait(GFP_NOFS);

And I just noticed this is because we wait for the retry even when we have
successfully allocated all the pages. We should exit the loop if
(allocated == nr_pages).

>  	}
>  	return 0;
>  }
> -- 
> 2.35.1
>
David Sterba April 11, 2022, 1:33 p.m. UTC | #3
On Mon, Apr 11, 2022 at 07:11:24AM +0000, Naohiro Aota wrote:
> On Wed, Apr 06, 2022 at 02:24:18PM -0400, Sweet Tea Dorminy wrote:
> > When allocating memory in a loop, each iteration should call
> > memalloc_retry_wait() in order to prevent starving memory-freeing
> > processes (and to mark where allocation loops are). ext4, f2fs, and xfs
> > all currently use this function in their allocation loops; btrfs ought
> > to as well.
> > 
> > The bulk page allocation is the only place in btrfs with an allocation
> > retry loop, so add an appropriate call to it.
> > 
> > Suggested-by: David Sterba <dsterba@suse.cz>
> > Signed-off-by: Sweet Tea Dorminy <sweettea-kernel@dorminy.me>
> 
> The fstests btrfs/187 becomes incredibly slow with this patch applied.
> 
> For example, on a nvme ZNS SSD (zoned) device, it takes over 10 hours to
> finish the test case. It only takes 765 seconds if I revert this commit
> from the misc-next branch.
> 
> I also confirmed the same slowdown occurs on regular btrfs. For the
> baseline, with this commit reverted, the test takes 335 seconds on an 8GB
> ZRAM device running on QEMU (8GB RAM), and 768 seconds on a (non-zoned)
> HDD running on a real machine (128GB RAM). The tests on misc-next with the
> same setups are still running, but they have already taken 2 hours.
> 
> The test case runs a full btrfs send 5 times and an incremental btrfs send
> 10 times at the same time. A dedupe loop and a balance loop also run
> simultaneously until all the send commands finish.
> 
> The slowdown of the test case basically comes from the slow "btrfs send"
> command. On the HDD run, it takes 25 minutes to run a full btrfs send and
> 1 hour 18 minutes to run an incremental btrfs send. Thus, we need
> 78 minutes x 5 = 6.5 hours to finish all the send commands, making the
> test case incredibly slow.
> 
> > ---
> >  fs/btrfs/extent_io.c | 3 +++
> >  1 file changed, 3 insertions(+)
> > 
> > diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
> > index 9f2ada809dea..4bcc182744e4 100644
> > --- a/fs/btrfs/extent_io.c
> > +++ b/fs/btrfs/extent_io.c
> > @@ -6,6 +6,7 @@
> >  #include <linux/mm.h>
> >  #include <linux/pagemap.h>
> >  #include <linux/page-flags.h>
> > +#include <linux/sched/mm.h>
> >  #include <linux/spinlock.h>
> >  #include <linux/blkdev.h>
> >  #include <linux/swap.h>
> > @@ -3159,6 +3160,8 @@ int btrfs_alloc_page_array(unsigned int nr_pages, struct page **page_array)
> >  		 */
> >  		if (allocated == last)
> >  			return -ENOMEM;
> > +
> > +		memalloc_retry_wait(GFP_NOFS);
> 
> And I just noticed this is because we wait for the retry even when we have
> successfully allocated all the pages. We should exit the loop if
> (allocated == nr_pages).

Can you please test if the fixup restores the run time? This looks like
a mistake and the delays are not something we'd observe otherwise. If it
does not fix the problem then the last option is to revert the patch.
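
The fixup itself is not shown in this thread. Presumably it adds the early
exit Naohiro describes, along these lines (a sketch on top of the applied
patch, not the actual fixup commit; hunk line numbers elided):

--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ ... @@ int btrfs_alloc_page_array(unsigned int nr_pages, struct page **page_array)
 		if (allocated == last)
 			return -ENOMEM;
 
+		/* All pages allocated: return instead of waiting. */
+		if (allocated == nr_pages)
+			return 0;
+
 		memalloc_retry_wait(GFP_NOFS);
 	}
 	return 0;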

Patch

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 9f2ada809dea..4bcc182744e4 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -6,6 +6,7 @@ 
 #include <linux/mm.h>
 #include <linux/pagemap.h>
 #include <linux/page-flags.h>
+#include <linux/sched/mm.h>
 #include <linux/spinlock.h>
 #include <linux/blkdev.h>
 #include <linux/swap.h>
@@ -3159,6 +3160,8 @@  int btrfs_alloc_page_array(unsigned int nr_pages, struct page **page_array)
 		 */
 		if (allocated == last)
 			return -ENOMEM;
+
+		memalloc_retry_wait(GFP_NOFS);
 	}
 	return 0;
 }