[10/21] btrfs: return bool from lock_extent_buffer_for_io

Message ID 20230503152441.1141019-11-hch@lst.de (mailing list archive)
State New, archived
Series [01/21] btrfs: mark extent_buffer_under_io static

Commit Message

Christoph Hellwig May 3, 2023, 3:24 p.m. UTC
lock_extent_buffer_for_io never returns a negative error value, so switch
the return value to a simple bool.  Also remove the noinline_for_stack
annotation given that nothing in lock_extent_buffer_for_io or its callers
is particularly stack hungry.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
---
 fs/btrfs/extent_io.c | 39 ++++++++++++---------------------------
 1 file changed, 12 insertions(+), 27 deletions(-)

Comments

David Sterba May 23, 2023, 6:43 p.m. UTC | #1
On Wed, May 03, 2023 at 05:24:30PM +0200, Christoph Hellwig wrote:
> lock_extent_buffer_for_io never returns a negative error value, so switch
> the return value to a simple bool.  Also remove the noinline_for_stack
> annotation given that nothing in lock_extent_buffer_for_io or its callers
> is particularly stack hungry.

AFAIK the reason for noinline_for_stack is not that the function is
stack hungry but that we want to prevent inlining so we can see the
function on the stack in case there's implied waiting. This makes it
easier to debug when IO is stuck or there's some deadlock.

This is not consistent in btrfs code though; a quick search shows lots
of historical noinline_for_stack uses without an obvious reason.
Christoph Hellwig May 24, 2023, 5:44 a.m. UTC | #2
On Tue, May 23, 2023 at 08:43:17PM +0200, David Sterba wrote:
> On Wed, May 03, 2023 at 05:24:30PM +0200, Christoph Hellwig wrote:
> > lock_extent_buffer_for_io never returns a negative error value, so switch
> > the return value to a simple bool.  Also remove the noinline_for_stack
> > annotation given that nothing in lock_extent_buffer_for_io or its callers
> > is particularly stack hungry.
> 
> AFAIK the reason for noinline_for_stack is not that the function is
> stack hungry but that we want to prevent inlining so we can see the
> function on the stack in case there's implied waiting. This makes it
> easier to debug when IO is stuck or there's some deadlock.
> 
> This is not consistent in btrfs code though; a quick search shows lots
> of historical noinline_for_stack uses without an obvious reason.

Hmm.  noinline_for_stack is explicitly documented to only exist as an
annotation that noinline is used for stack usage.  So this is very odd.

If you want a normal noinline here I can add one, but to be honest
I don't really see the point even for stack traces.
David Sterba May 26, 2023, 1:34 p.m. UTC | #3
On Wed, May 24, 2023 at 07:44:49AM +0200, Christoph Hellwig wrote:
> On Tue, May 23, 2023 at 08:43:17PM +0200, David Sterba wrote:
> > On Wed, May 03, 2023 at 05:24:30PM +0200, Christoph Hellwig wrote:
> > > lock_extent_buffer_for_io never returns a negative error value, so switch
> > > the return value to a simple bool.  Also remove the noinline_for_stack
> > > annotation given that nothing in lock_extent_buffer_for_io or its callers
> > > is particularly stack hungry.
> > 
> > AFAIK the reason for noinline_for_stack is not that the function is
> > stack hungry but that we want to prevent inlining so we can see the
> > function on the stack in case there's implied waiting. This makes it
> > easier to debug when IO is stuck or there's some deadlock.
> > 
> > This is not consistent in btrfs code though; a quick search shows lots
> > of historical noinline_for_stack uses without an obvious reason.
> 
> Hmm.  noinline_for_stack is explicitly documented to only exist as an
> annotation that noinline is used for stack usage.  So this is very odd.

Yes, that's the documented way; I found one commit fixing a stack
problem, 8ddc319706e5 ("btrfs: reduce stack usage for
btrfsic_process_written_block").

> If you want a normal noinline here I can add one, but to be honest
> I don't really see the point even for stack traces.

What I had in mind was based on 6939f667247e ("Btrfs: fix confusing
worker helper info in stacktrace"), but digging in the mail archives the
patch was sent with noinline
(https://lore.kernel.org/linux-btrfs/20170908213445.1601-1-bo.li.liu@oracle.com/).
I don't remember where I read about the noinline_for_stack use for the
stack trace.

We can audit and remove some of the attributes but this tends to break
only on non-x86 builds so verification would need to go via linux-next
and let build bots report.

Patch

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 4692aca5d42c15..74bf3715fe0c50 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -1629,18 +1629,17 @@  static void end_extent_buffer_writeback(struct extent_buffer *eb)
  *
  * May try to flush write bio if we can't get the lock.
  *
- * Return  0 if the extent buffer doesn't need to be submitted.
- *           (E.g. the extent buffer is not dirty)
- * Return >0 is the extent buffer is submitted to bio.
- * Return <0 if something went wrong, no page is locked.
+ * Return %false if the extent buffer doesn't need to be submitted (e.g. the
+ * extent buffer is not dirty).
+ * Return %true if the extent buffer is submitted to bio.
  */
-static noinline_for_stack int lock_extent_buffer_for_io(struct extent_buffer *eb,
-			  struct btrfs_bio_ctrl *bio_ctrl)
+static bool lock_extent_buffer_for_io(struct extent_buffer *eb,
+				      struct btrfs_bio_ctrl *bio_ctrl)
 {
 	struct btrfs_fs_info *fs_info = eb->fs_info;
 	int i, num_pages;
 	int flush = 0;
-	int ret = 0;
+	bool ret = false;
 
 	if (!btrfs_try_tree_write_lock(eb)) {
 		submit_write_bio(bio_ctrl, 0);
@@ -1651,7 +1650,7 @@  static noinline_for_stack int lock_extent_buffer_for_io(struct extent_buffer *eb
 	if (test_bit(EXTENT_BUFFER_WRITEBACK, &eb->bflags)) {
 		btrfs_tree_unlock(eb);
 		if (bio_ctrl->wbc->sync_mode != WB_SYNC_ALL)
-			return 0;
+			return false;
 		if (!flush) {
 			submit_write_bio(bio_ctrl, 0);
 			flush = 1;
@@ -1678,7 +1677,7 @@  static noinline_for_stack int lock_extent_buffer_for_io(struct extent_buffer *eb
 		percpu_counter_add_batch(&fs_info->dirty_metadata_bytes,
 					 -eb->len,
 					 fs_info->dirty_metadata_batch);
-		ret = 1;
+		ret = true;
 	} else {
 		spin_unlock(&eb->refs_lock);
 	}
@@ -2012,7 +2011,6 @@  static int submit_eb_subpage(struct page *page, struct btrfs_bio_ctrl *bio_ctrl)
 	u64 page_start = page_offset(page);
 	int bit_start = 0;
 	int sectors_per_node = fs_info->nodesize >> fs_info->sectorsize_bits;
-	int ret;
 
 	/* Lock and write each dirty extent buffers in the range */
 	while (bit_start < fs_info->subpage_info->bitmap_nr_bits) {
@@ -2058,25 +2056,13 @@  static int submit_eb_subpage(struct page *page, struct btrfs_bio_ctrl *bio_ctrl)
 		if (!eb)
 			continue;
 
-		ret = lock_extent_buffer_for_io(eb, bio_ctrl);
-		if (ret == 0) {
-			free_extent_buffer(eb);
-			continue;
+		if (lock_extent_buffer_for_io(eb, bio_ctrl)) {
+			write_one_subpage_eb(eb, bio_ctrl);
+			submitted++;
 		}
-		if (ret < 0) {
-			free_extent_buffer(eb);
-			goto cleanup;
-		}
-		write_one_subpage_eb(eb, bio_ctrl);
 		free_extent_buffer(eb);
-		submitted++;
 	}
 	return submitted;
-
-cleanup:
-	/* We hit error, end bio for the submitted extent buffers */
-	submit_write_bio(bio_ctrl, ret);
-	return ret;
 }
 
 /*
@@ -2155,8 +2141,7 @@  static int submit_eb_page(struct page *page, struct btrfs_bio_ctrl *bio_ctrl,
 
 	*eb_context = eb;
 
-	ret = lock_extent_buffer_for_io(eb, bio_ctrl);
-	if (ret <= 0) {
+	if (!lock_extent_buffer_for_io(eb, bio_ctrl)) {
 		btrfs_revert_meta_write_pointer(cache, eb);
 		if (cache)
 			btrfs_put_block_group(cache);