
[3/5] fs/buffer: add a for_each_bh() for block_read_full_folio()

Message ID 20241218022626.3668119-4-mcgrof@kernel.org (mailing list archive)
State New
Series fs/buffer: stack reduction on async read

Commit Message

Luis Chamberlain Dec. 18, 2024, 2:26 a.m. UTC
We want to be able to work through all buffer heads on a folio
for an async read, but in the future we want the option to stop
before all linked buffer heads have been processed. To make the
code easier to read and follow, and easier to expand in subsequent
patches, adopt a for_each_bh(tmp, head) loop instead of the
do { ... } while () construct.

This introduces no functional changes.

Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
---
 fs/buffer.c | 26 +++++++++++++++++++++-----
 1 file changed, 21 insertions(+), 5 deletions(-)
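
For readers who have not worked with the buffer-head ring: b_this_page
links all buffer heads of a folio into a circular list, which is why the
old loop advances and terminates inside the while condition. Below is a
minimal, self-contained userspace sketch of the two traversal styles.
The struct here is a stand-in, not the kernel's struct buffer_head; the
macros mirror the ones added by this patch:

  /* Userspace model of the circular b_this_page ring (a sketch only). */
  #include <stdio.h>

  struct bh {
  	struct bh *b_this_page;	/* next buffer head on the folio, circular */
  	int index;
  };

  #define bh_is_last(__bh, __head) ((__bh)->b_this_page == (__head))
  #define bh_next(__bh, __head) \
  	(bh_is_last(__bh, __head) ? NULL : (__bh)->b_this_page)
  #define for_each_bh(__tmp, __head) \
  	for ((__tmp) = (__head); (__tmp); (__tmp) = bh_next(__tmp, __head))

  int main(void)
  {
  	struct bh a = { .index = 0 }, b = { .index = 1 }, c = { .index = 2 };
  	struct bh *head = &a, *bh;

  	a.b_this_page = &b;
  	b.b_this_page = &c;
  	c.b_this_page = &a;	/* ring closes back on the head */

  	/* Old style: advance and test in the while condition */
  	bh = head;
  	do {
  		printf("do/while visits %d\n", bh->index);
  	} while ((bh = bh->b_this_page) != head);

  	/* New style: same traversal, but a plain bounded for loop */
  	for_each_bh(bh, head)
  		printf("for_each_bh visits %d\n", bh->index);

  	return 0;
  }

Both loops visit every buffer head exactly once and stop when the walk
wraps back to the head.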

Comments

Matthew Wilcox (Oracle) Dec. 18, 2024, 7:20 p.m. UTC | #1
On Tue, Dec 17, 2024 at 06:26:24PM -0800, Luis Chamberlain wrote:
>  	/* Stage one - collect buffer heads we need issue a read for */
> -	do {
> -		if (buffer_uptodate(bh))
> +	for_each_bh(bh, head) {
> +		if (buffer_uptodate(bh)) {
> +			iblock++;
>  			continue;
> +		}

I'm not loving this.  It's fragile to have to put 'iblock++' before each
continue.
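
One way to address this (a sketch only, not part of the posted series)
is to advance a counter in the for statement itself and derive iblock
from it, so no continue path can forget the increment. The
for_each_bh_idx name and the userspace harness below are made up for
illustration:

  #include <stdio.h>

  struct bh { struct bh *b_this_page; int uptodate; };

  #define bh_is_last(__bh, __head) ((__bh)->b_this_page == (__head))
  #define bh_next(__bh, __head) \
  	(bh_is_last(__bh, __head) ? NULL : (__bh)->b_this_page)

  /* Carry an index alongside the buffer head; both advance together,
   * so a 'continue' in the body can never skip the increment. */
  #define for_each_bh_idx(__tmp, __head, __i)			\
  	for ((__tmp) = (__head), (__i) = 0;			\
  	     (__tmp);						\
  	     (__tmp) = bh_next(__tmp, __head), (__i)++)

  int main(void)
  {
  	struct bh a = { .uptodate = 1 }, b = { .uptodate = 0 }, c = { .uptodate = 1 };
  	struct bh *head = &a, *bh;
  	unsigned long first_block = 100;	/* stand-in for the folio's first block */
  	unsigned long i;

  	a.b_this_page = &b;
  	b.b_this_page = &c;
  	c.b_this_page = &a;

  	for_each_bh_idx(bh, head, i) {
  		unsigned long iblock = first_block + i;

  		if (bh->uptodate)
  			continue;	/* no per-branch iblock++ needed */
  		printf("would read block %lu\n", iblock);
  	}
  	return 0;
  }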

Patch

diff --git a/fs/buffer.c b/fs/buffer.c
index 8baf87db110d..1aeef7dd2281 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -2397,6 +2397,17 @@  static void bh_read_batch_async(struct folio *folio,
 	}
 }
 
+#define bh_is_last(__bh, __head) ((__bh)->b_this_page == (__head))
+
+#define bh_next(__bh, __head) \
+    (bh_is_last(__bh, __head) ? NULL : (__bh)->b_this_page)
+
+/* Starts from the provided head */
+#define for_each_bh(__tmp, __head)			\
+    for ((__tmp) = (__head);				\
+         (__tmp);					\
+         (__tmp) = bh_next(__tmp, __head))
+
 /*
  * Generic "read_folio" function for block devices that have the normal
  * get_block functionality. This is most of the block device filesystems.
@@ -2426,13 +2437,14 @@  int block_read_full_folio(struct folio *folio, get_block_t *get_block)
 
 	iblock = div_u64(folio_pos(folio), blocksize);
 	lblock = div_u64(limit + blocksize - 1, blocksize);
-	bh = head;
 	nr = 0;
 
 	/* Stage one - collect buffer heads we need issue a read for */
-	do {
-		if (buffer_uptodate(bh))
+	for_each_bh(bh, head) {
+		if (buffer_uptodate(bh)) {
+			iblock++;
 			continue;
+		}
 
 		if (!buffer_mapped(bh)) {
 			int err = 0;
@@ -2449,17 +2461,21 @@  int block_read_full_folio(struct folio *folio, get_block_t *get_block)
 						blocksize);
 				if (!err)
 					set_buffer_uptodate(bh);
+				iblock++;
 				continue;
 			}
 			/*
 			 * get_block() might have updated the buffer
 			 * synchronously
 			 */
-			if (buffer_uptodate(bh))
+			if (buffer_uptodate(bh)) {
+				iblock++;
 				continue;
+			}
 		}
 		arr[nr++] = bh;
-	} while (iblock++, (bh = bh->b_this_page) != head);
+		iblock++;
+	}
 
 	bh_read_batch_async(folio, nr, arr, fully_mapped, nr == 0, page_error);
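
The commit message notes that later patches want the option to stop
before all linked buffer heads have been processed. With the
for_each_bh form a plain break ends the walk cleanly, since there is no
trailing advance in a while condition to skip. A sketch, reusing the
userspace model from the first example; the limit threshold is invented
purely for illustration:

  	/* Early-stop sketch: 'limit' is a made-up threshold.  The point
  	 * is that break cleanly ends the walk, with no trailing
  	 * "bh = bh->b_this_page" advance left unexecuted. */
  	int nr = 0, limit = 2;

  	for_each_bh(bh, head) {
  		if (nr >= limit)
  			break;
  		if (!bh->uptodate)
  			nr++;	/* stand-in for arr[nr++] = bh in the kernel */
  	}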