Message ID: deb8991ce9aa1850b82471cf3e76cd8fdc1a9e92.1714046808.git.ritesh.list@gmail.com (mailing list archive)
State: New, archived
Series: ext2 iomap changes and iomap improvements
On Thu, Apr 25, 2024 at 06:58:49PM +0530, Ritesh Harjani (IBM) wrote:
> If the extent spans the block that contains the i_size, we need to

s/the i_size/i_size/.

> handle both halves separately

.. so that we properly zero data in the page cache for blocks that are
entirely outside of i_size.

Otherwise looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>
Christoph Hellwig <hch@infradead.org> writes:

> On Thu, Apr 25, 2024 at 06:58:49PM +0530, Ritesh Harjani (IBM) wrote:
>> If the extent spans the block that contains the i_size, we need to
>
> s/the i_size/i_size/.
>
>> handle both halves separately
>
> .. so that we properly zero data in the page cache for blocks that are
> entirely outside of i_size.

Sure.

> Otherwise looks good:
>
> Reviewed-by: Christoph Hellwig <hch@lst.de>

Thanks for the review.

-ritesh
On Fri, Apr 26, 2024 at 02:22:25PM +0530, Ritesh Harjani wrote:
> Christoph Hellwig <hch@infradead.org> writes:
>
> > On Thu, Apr 25, 2024 at 06:58:49PM +0530, Ritesh Harjani (IBM) wrote:
> >> If the extent spans the block that contains the i_size, we need to
> >
> > s/the i_size/i_size/.
> >
> >> handle both halves separately
> >
> > .. so that we properly zero data in the page cache for blocks that are
> > entirely outside of i_size.
>
> Sure.
>
> >
> > Otherwise looks good:
> >
> > Reviewed-by: Christoph Hellwig <hch@lst.de>
>
> Thanks for the review.

With Christoph's comments addressed, you can also add

Reviewed-by: Darrick J. Wong <djwong@kernel.org>

--D

>
> -ritesh
>
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 4e8e41c8b3c0..9f79c82d1f73 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -241,6 +241,7 @@ static void iomap_adjust_read_range(struct inode *inode, struct folio *folio,
 	unsigned block_size = (1 << block_bits);
 	size_t poff = offset_in_folio(folio, *pos);
 	size_t plen = min_t(loff_t, folio_size(folio) - poff, length);
+	size_t orig_plen = plen;
 	unsigned first = poff >> block_bits;
 	unsigned last = (poff + plen - 1) >> block_bits;

@@ -277,7 +278,7 @@ static void iomap_adjust_read_range(struct inode *inode, struct folio *folio,
 	 * handle both halves separately so that we properly zero data in the
 	 * page cache for blocks that are entirely outside of i_size.
 	 */
-	if (orig_pos <= isize && orig_pos + length > isize) {
+	if (orig_pos <= isize && orig_pos + orig_plen > isize) {
 		unsigned end = offset_in_folio(folio, isize - 1) >> block_bits;

 		if (first <= end && last > end)
If the extent spans the block that contains i_size, we need to handle
both halves separately, but only when i_size falls within the folio
currently being processed. "orig_pos + length > isize" can be true for
every folio if the mapped extent length is greater than the folio size,
which truncates plen for every folio instead of only the last one. So
save the original per-folio length and check
"orig_pos + orig_plen > isize" instead.

Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
cc: Ojaswin Mujoo <ojaswin@linux.ibm.com>
---
 fs/iomap/buffered-io.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)