
[v2] block: Discard page cache of zone reset target range

Message ID 20210310065003.573474-1-shinichiro.kawasaki@wdc.com (mailing list archive)
State New, archived

Commit Message

Shin'ichiro Kawasaki March 10, 2021, 6:50 a.m. UTC
When a zone reset ioctl and a data read race on the same zone of a zoned
block device, the read leaves stale pages in the page cache even though
the zone reset ioctl zero-clears all the zone data on the device. To
avoid reading non-zero data from the stale page cache after a zone
reset, discard the page cache of the reset target zones. In the same
manner as fallocate, call the function truncate_bdev_range() in
blkdev_zone_mgmt_ioctl() before and after the zone reset to ensure the
page cache is discarded.

This patch can be applied as-is back to the stable kernel version
v5.10.y. Older stable kernels need a rework.

Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
Fixes: 3ed05a987e0f ("blk-zoned: implement ioctls")
Cc: <stable@vger.kernel.org> # 5.10+
Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com>
Reviewed-by: Kanchan Joshi <joshi.k@samsung.com>
---
Changes from v1:
* Addressed comments on the list

 block/blk-zoned.c | 33 +++++++++++++++++++++++++++++++--
 1 file changed, 31 insertions(+), 2 deletions(-)
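
To illustrate the race being fixed, the sequence below sketches how a
buffered read can leave stale data visible after a zone reset on an
unpatched kernel. The device path and zone size are assumptions made for
the example; BLKRESETZONE and struct blk_zone_range come from the kernel
UAPI header linux/blkzoned.h.

/* Sketch only: exposes the stale page cache after BLKRESETZONE on an
 * unpatched kernel. Assumes /dev/nullb0 is a zoned block device whose
 * first zone is sequential, 524288 sectors long, and holds non-zero
 * data; run as root.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/blkzoned.h>

int main(void)
{
	struct blk_zone_range zrange = { .sector = 0, .nr_sectors = 524288 };
	char buf[4096];
	int fd;

	fd = open("/dev/nullb0", O_RDWR); /* buffered access, no O_DIRECT */
	if (fd < 0)
		return 1;

	/* Buffered read: fills the page cache with the zone's current data. */
	pread(fd, buf, sizeof(buf), 0);

	/* Zero-clear the zone on the device. Without this patch, the pages
	 * cached by the read above are left in place. */
	if (ioctl(fd, BLKRESETZONE, &zrange) < 0)
		perror("BLKRESETZONE");

	/* On an unpatched kernel this read can be served from the stale
	 * page cache and return the old, non-zero data instead of zeros. */
	pread(fd, buf, sizeof(buf), 0);
	printf("first byte after reset: 0x%02x\n", (unsigned char)buf[0]);

	close(fd);
	return 0;
}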

Comments

Christoph Hellwig March 10, 2021, 8:45 a.m. UTC | #1
>  	switch (cmd) {
>  	case BLKRESETZONE:
>  		op = REQ_OP_ZONE_RESET;
> +
> +		capacity = get_capacity(bdev->bd_disk);
> +		if (zrange.sector + zrange.nr_sectors <= zrange.sector ||
> +		    zrange.sector + zrange.nr_sectors > capacity)
> +			/* Out of range */
> +			return -EINVAL;
> +
> +		start = zrange.sector << SECTOR_SHIFT;
> +		end = ((zrange.sector + zrange.nr_sectors) << SECTOR_SHIFT) - 1;
> +
> +		/* Invalidate the page cache, including dirty pages. */
> +		ret = truncate_bdev_range(bdev, mode, start, end);
> +		if (ret)
> +			return ret;

Can we factor this out into a truncate_zone_range() helper?
Shin'ichiro Kawasaki March 11, 2021, 2:15 a.m. UTC | #2
On Mar 10, 2021 / 08:45, Christoph Hellwig wrote:
> >  	switch (cmd) {
> >  	case BLKRESETZONE:
> >  		op = REQ_OP_ZONE_RESET;
> > +
> > +		capacity = get_capacity(bdev->bd_disk);
> > +		if (zrange.sector + zrange.nr_sectors <= zrange.sector ||
> > +		    zrange.sector + zrange.nr_sectors > capacity)
> > +			/* Out of range */
> > +			return -EINVAL;
> > +
> > +		start = zrange.sector << SECTOR_SHIFT;
> > +		end = ((zrange.sector + zrange.nr_sectors) << SECTOR_SHIFT) - 1;
> > +
> > +		/* Invalidate the page cache, including dirty pages. */
> > +		ret = truncate_bdev_range(bdev, mode, start, end);
> > +		if (ret)
> > +			return ret;
> 
> Can we factor this out into a truncate_zone_range() helper?

Yes, we can. The helper will be as follows. I will rework the patch and send v3.

static int blkdev_truncate_zone_range(struct block_device *bdev, fmode_t mode,
				      const struct blk_zone_range *zrange)
{
	loff_t start, end;

	if (zrange->sector + zrange->nr_sectors <= zrange->sector ||
	    zrange->sector + zrange->nr_sectors > get_capacity(bdev->bd_disk))
		/* Out of range */
		return -EINVAL;

	start = zrange->sector << SECTOR_SHIFT;
	end = ((zrange->sector + zrange->nr_sectors) << SECTOR_SHIFT) - 1;

	return truncate_bdev_range(bdev, mode, start, end);
}
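
With that helper in place, the BLKRESETZONE handling in
blkdev_zone_mgmt_ioctl() reduces to something like the following. This is
a sketch of the intended v3 shape based on the v2 diff below, not the
final patch:

	switch (cmd) {
	case BLKRESETZONE:
		op = REQ_OP_ZONE_RESET;

		/* Invalidate the page cache, including dirty pages. */
		ret = blkdev_truncate_zone_range(bdev, mode, &zrange);
		if (ret)
			return ret;
		break;
	/* BLKOPENZONE, BLKCLOSEZONE and BLKFINISHZONE cases unchanged */
	}

	ret = blkdev_zone_mgmt(bdev, op, zrange.sector, zrange.nr_sectors,
			       GFP_KERNEL);

	/* Invalidate again: concurrent buffered reads may have re-filled
	 * the page cache while the reset was in flight. */
	if (!ret && cmd == BLKRESETZONE)
		ret = blkdev_truncate_zone_range(bdev, mode, &zrange);

	return ret;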

Patch

diff --git a/block/blk-zoned.c b/block/blk-zoned.c
index 833978c02e60..c2357e1eda18 100644
--- a/block/blk-zoned.c
+++ b/block/blk-zoned.c
@@ -329,6 +329,9 @@  int blkdev_zone_mgmt_ioctl(struct block_device *bdev, fmode_t mode,
 	struct request_queue *q;
 	struct blk_zone_range zrange;
 	enum req_opf op;
+	sector_t capacity;
+	loff_t start, end;
+	int ret;
 
 	if (!argp)
 		return -EINVAL;
@@ -352,6 +355,20 @@  int blkdev_zone_mgmt_ioctl(struct block_device *bdev, fmode_t mode,
 	switch (cmd) {
 	case BLKRESETZONE:
 		op = REQ_OP_ZONE_RESET;
+
+		capacity = get_capacity(bdev->bd_disk);
+		if (zrange.sector + zrange.nr_sectors <= zrange.sector ||
+		    zrange.sector + zrange.nr_sectors > capacity)
+			/* Out of range */
+			return -EINVAL;
+
+		start = zrange.sector << SECTOR_SHIFT;
+		end = ((zrange.sector + zrange.nr_sectors) << SECTOR_SHIFT) - 1;
+
+		/* Invalidate the page cache, including dirty pages. */
+		ret = truncate_bdev_range(bdev, mode, start, end);
+		if (ret)
+			return ret;
 		break;
 	case BLKOPENZONE:
 		op = REQ_OP_ZONE_OPEN;
@@ -366,8 +383,20 @@  int blkdev_zone_mgmt_ioctl(struct block_device *bdev, fmode_t mode,
 		return -ENOTTY;
 	}
 
-	return blkdev_zone_mgmt(bdev, op, zrange.sector, zrange.nr_sectors,
-				GFP_KERNEL);
+	ret = blkdev_zone_mgmt(bdev, op, zrange.sector, zrange.nr_sectors,
+			       GFP_KERNEL);
+
+	/*
+	 * Invalidate the page cache again for zone reset: writes can only be
+	 * direct for zoned devices so concurrent writes would not add any page
+	 * to the page cache after/during reset. The page cache may be filled
+	 * again due to concurrent reads though and dropping the pages for
+	 * these is fine.
+	 */
+	if (!ret && cmd == BLKRESETZONE)
+		ret = truncate_bdev_range(bdev, mode, start, end);
+
+	return ret;
 }
 
 static inline unsigned long *blk_alloc_zone_bitmap(int node,