[RFC,v4,07/17] iomap: Use balance_dirty_pages_ratelimited_flags in iomap_write_iter

Message ID 20220520183646.2002023-8-shr@fb.com (mailing list archive)
State New
Series io-uring/xfs: support async buffered writes

Commit Message

Stefan Roesch May 20, 2022, 6:36 p.m. UTC
This replaces the call to balance_dirty_pages_ratelimited() with a call to
balance_dirty_pages_ratelimited_flags(), which makes it possible to specify
whether the write request is async.

In addition, this moves the call to the beginning of the function. If the
call sits at the end of the function and the decision is made to throttle
writes, there is no request that io-uring can wait on. With the call at the
beginning of the function, the write request is not issued and -EAGAIN is
returned instead; io-uring will punt the request and process it in an
io-worker.

A side effect of moving the call is that write throttling now happens one
page later.

Signed-off-by: Stefan Roesch <shr@fb.com>
---
 fs/iomap/buffered-io.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

Comments

Christoph Hellwig May 22, 2022, 7:19 a.m. UTC | #1
On Fri, May 20, 2022 at 11:36:36AM -0700, Stefan Roesch wrote:
> @@ -765,14 +765,22 @@ static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i)
>  	do {
>  		struct folio *folio;
>  		struct page *page;
> +		struct address_space *mapping = iter->inode->i_mapping;
>  		unsigned long offset;	/* Offset into pagecache page */
>  		unsigned long bytes;	/* Bytes to write to page */
>  		size_t copied;		/* Bytes copied from user */
> +		unsigned int bdp_flags =
> +			(iter->flags & IOMAP_NOWAIT) ? BDP_ASYNC : 0;

Both the mapping and bdp_flags don't change over the loop iterations,
so we can initialize them once at the start of the function.

Otherwise this looks good, but I think this should go into the
previous patch as it is a central part of supporting async buffered
writes.
Stefan Roesch May 25, 2022, 9:32 p.m. UTC | #2
On 5/22/22 12:19 AM, Christoph Hellwig wrote:
> On Fri, May 20, 2022 at 11:36:36AM -0700, Stefan Roesch wrote:
>> @@ -765,14 +765,22 @@ static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i)
>>  	do {
>>  		struct folio *folio;
>>  		struct page *page;
>> +		struct address_space *mapping = iter->inode->i_mapping;
>>  		unsigned long offset;	/* Offset into pagecache page */
>>  		unsigned long bytes;	/* Bytes to write to page */
>>  		size_t copied;		/* Bytes copied from user */
>> +		unsigned int bdp_flags =
>> +			(iter->flags & IOMAP_NOWAIT) ? BDP_ASYNC : 0;
> 
> Both the mapping and bdp_flags don't change over the loop iterations,
> so we can initialize them once at the start of the function.
> 

Moved the variable definitions outside of the loop.

> Otherwise this looks good, but I think this should go into the
> previous patch as it is a central part of supporting async buffered
> writes.
>

I merged it with the previous patch.

Patch

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 187f4ddd7ba7..020452467ca8 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -765,14 +765,22 @@  static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i)
 	do {
 		struct folio *folio;
 		struct page *page;
+		struct address_space *mapping = iter->inode->i_mapping;
 		unsigned long offset;	/* Offset into pagecache page */
 		unsigned long bytes;	/* Bytes to write to page */
 		size_t copied;		/* Bytes copied from user */
+		unsigned int bdp_flags =
+			(iter->flags & IOMAP_NOWAIT) ? BDP_ASYNC : 0;
 
 		offset = offset_in_page(pos);
 		bytes = min_t(unsigned long, PAGE_SIZE - offset,
 						iov_iter_count(i));
 again:
+		status = balance_dirty_pages_ratelimited_flags(mapping,
+							       bdp_flags);
+		if (unlikely(status))
+			break;
+
 		if (bytes > length)
 			bytes = length;
 
@@ -796,7 +804,7 @@  static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i)
 			break;
 
 		page = folio_file_page(folio, pos >> PAGE_SHIFT);
-		if (mapping_writably_mapped(iter->inode->i_mapping))
+		if (mapping_writably_mapped(mapping))
 			flush_dcache_page(page);
 
 		copied = copy_page_from_iter_atomic(page, offset, bytes, i);
@@ -821,8 +829,6 @@  static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i)
 		pos += status;
 		written += status;
 		length -= status;
-
-		balance_dirty_pages_ratelimited(iter->inode->i_mapping);
 	} while (iov_iter_count(i) && length);
 
 	return written ? written : status;