
[PATCHv2,1/2] iomap: Fix iomap_adjust_read_range for plen calculation

Message ID a32e5f9a4fcfdb99077300c4020ed7ae61d6e0f9.1715067055.git.ritesh.list@gmail.com (mailing list archive)
State Accepted, archived
Series iomap: Optimize read_folio

Commit Message

Ritesh Harjani (IBM) May 7, 2024, 8:55 a.m. UTC
If the extent spans the block that contains i_size, we need to handle
both halves separately so that we properly zero data in the page cache
for blocks that are entirely outside of i_size. But this is needed only
when i_size falls within the current folio being processed.
"orig_pos + length > isize" can be true for every folio if the mapped
extent length is greater than the folio size, which causes plen to be
truncated for every folio instead of only the last one.

So use orig_plen for checking if "orig_pos + orig_plen > isize".

Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
cc: Ojaswin Mujoo <ojaswin@linux.ibm.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
---
 fs/iomap/buffered-io.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

--
2.44.0
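
To make the difference concrete, here is a small user-space sketch (not
kernel code; all values are assumed for illustration): a 32 KiB mapped
extent starting at file offset 0, 4 KiB folios, and i_size = 10000, so
i_size falls inside the third folio. "length" stands for the remaining
extent length seen at each folio, "orig_plen" for the length clamped to
the current folio.

/* Illustrative user-space sketch, not kernel code; all numbers are assumed. */
#include <stdio.h>

int main(void)
{
	const long long isize = 10000;            /* i_size, inside the 3rd folio */
	const long long extent_len = 32 * 1024;   /* mapped extent length */
	const long long folio_size = 4 * 1024;    /* folio size */

	for (long long orig_pos = 0; orig_pos < extent_len; orig_pos += folio_size) {
		long long length = extent_len - orig_pos;  /* extent left from this folio on */
		long long orig_plen = folio_size;          /* plen clamped to this folio */

		/* old check: fires for every folio that starts before i_size */
		int old_check = orig_pos <= isize && orig_pos + length > isize;
		/* new check: fires only for the folio that actually contains i_size */
		int new_check = orig_pos <= isize && orig_pos + orig_plen > isize;

		printf("folio at %6lld: old=%d new=%d\n", orig_pos, old_check, new_check);
	}
	return 0;
}

With these numbers the old condition fires for the folios at offsets 0,
4096 and 8192, while the new one fires only for the folio at 8192, the
one that actually contains i_size.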

Comments

Jan Kara May 9, 2024, 10:33 a.m. UTC | #1
On Tue 07-05-24 14:25:42, Ritesh Harjani (IBM) wrote:
> If the extent spans the block that contains i_size, we need to handle
> both halves separately so that we properly zero data in the page cache
> for blocks that are entirely outside of i_size. But this is needed only
> when i_size falls within the current folio being processed.
> "orig_pos + length > isize" can be true for every folio if the mapped
> extent length is greater than the folio size, which causes plen to be
> truncated for every folio instead of only the last one.
> 
> So use orig_plen for checking if "orig_pos + orig_plen > isize".
> 
> Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
> cc: Ojaswin Mujoo <ojaswin@linux.ibm.com>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Darrick J. Wong <djwong@kernel.org>

Looks good. Feel free to add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza


> ---
>  fs/iomap/buffered-io.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index 4e8e41c8b3c0..9f79c82d1f73 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -241,6 +241,7 @@ static void iomap_adjust_read_range(struct inode *inode, struct folio *folio,
>  	unsigned block_size = (1 << block_bits);
>  	size_t poff = offset_in_folio(folio, *pos);
>  	size_t plen = min_t(loff_t, folio_size(folio) - poff, length);
> +	size_t orig_plen = plen;
>  	unsigned first = poff >> block_bits;
>  	unsigned last = (poff + plen - 1) >> block_bits;
> 
> @@ -277,7 +278,7 @@ static void iomap_adjust_read_range(struct inode *inode, struct folio *folio,
>  	 * handle both halves separately so that we properly zero data in the
>  	 * page cache for blocks that are entirely outside of i_size.
>  	 */
> -	if (orig_pos <= isize && orig_pos + length > isize) {
> +	if (orig_pos <= isize && orig_pos + orig_plen > isize) {
>  		unsigned end = offset_in_folio(folio, isize - 1) >> block_bits;
> 
>  		if (first <= end && last > end)
> --
> 2.44.0
>
Christian Brauner June 5, 2024, 3:29 p.m. UTC | #2
On Tue, 07 May 2024 14:25:42 +0530, Ritesh Harjani (IBM) wrote:
> If the extent spans the block that contains i_size, we need to handle
> both halves separately so that we properly zero data in the page cache
> for blocks that are entirely outside of i_size. But this is needed only
> when i_size falls within the current folio being processed.
> "orig_pos + length > isize" can be true for every folio if the mapped
> extent length is greater than the folio size, which causes plen to be
> truncated for every folio instead of only the last one.
> 
> [...]

Applied to the vfs.fixes branch of the vfs/vfs.git tree.
Patches in the vfs.fixes branch should appear in linux-next soon.

Please report any outstanding bugs that were missed during review in a
new review to the original patch series allowing us to drop it.

It's encouraged to provide Acked-bys and Reviewed-bys even though the
patch has now been applied. If possible patch trailers will be updated.

Note that commit hashes shown below are subject to change due to rebase,
trailer updates or similar. If in doubt, please check the listed branch.

tree:   https://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs.git
branch: vfs.fixes

[1/2] iomap: Fix iomap_adjust_read_range for plen calculation
      https://git.kernel.org/vfs/vfs/c/0fbe97059215

Patch

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 4e8e41c8b3c0..9f79c82d1f73 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -241,6 +241,7 @@ static void iomap_adjust_read_range(struct inode *inode, struct folio *folio,
 	unsigned block_size = (1 << block_bits);
 	size_t poff = offset_in_folio(folio, *pos);
 	size_t plen = min_t(loff_t, folio_size(folio) - poff, length);
+	size_t orig_plen = plen;
 	unsigned first = poff >> block_bits;
 	unsigned last = (poff + plen - 1) >> block_bits;

@@ -277,7 +278,7 @@ static void iomap_adjust_read_range(struct inode *inode, struct folio *folio,
 	 * handle both halves separately so that we properly zero data in the
 	 * page cache for blocks that are entirely outside of i_size.
 	 */
-	if (orig_pos <= isize && orig_pos + length > isize) {
+	if (orig_pos <= isize && orig_pos + orig_plen > isize) {
 		unsigned end = offset_in_folio(folio, isize - 1) >> block_bits;

 		if (first <= end && last > end)
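
For the folio that does contain i_size, the block-level arithmetic is
what actually trims plen. Below is a minimal user-space sketch of that
arithmetic, assuming 1 KiB blocks, a 4 KiB folio starting at file offset
8192 and i_size = 10000; the trim statement illustrates the effect and is
not quoted from the kernel source.

/* Illustrative sketch of the block math in the i_size folio; values assumed. */
#include <stdio.h>

int main(void)
{
	const unsigned block_bits = 10;                 /* 1 KiB blocks */
	const unsigned block_size = 1u << block_bits;
	const long long folio_pos = 8192;               /* start of the folio holding i_size */
	const long long isize = 10000;
	size_t poff = 0;                                 /* read starts at folio start */
	size_t plen = 4096;                              /* whole folio initially */

	unsigned first = poff >> block_bits;                               /* 0 */
	unsigned last = (poff + plen - 1) >> block_bits;                   /* 3 */
	unsigned end = (unsigned)((isize - 1 - folio_pos) >> block_bits);  /* 1 */

	if (first <= end && last > end)
		plen -= (last - end) * block_size;   /* drop blocks entirely past i_size */

	printf("first=%u last=%u end=%u plen=%zu\n", first, last, end, plen);
	return 0;
}

Here end = 1, first = 0 and last = 3, so the two blocks entirely past
i_size are dropped from the read range; they are zeroed in the page
cache instead of being read from disk.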