
block: Discard page cache of zone reset target range

Message ID 20210308033232.448200-1-shinichiro.kawasaki@wdc.com (mailing list archive)
State New, archived

Commit Message

Shin'ichiro Kawasaki March 8, 2021, 3:32 a.m. UTC
When a zone reset ioctl races with a data read on the same zone of a zoned
block device, the read leaves stale data in the page cache even though the
zone reset zero-clears all of the zone data on the device. To avoid reading
non-zero data from the stale page cache after a zone reset, discard the page
cache of the reset target zones. In the same manner as fallocate, call
truncate_bdev_range() in blkdev_zone_mgmt_ioctl() before and after the zone
reset to ensure the page cache is discarded.

This patch can be applied back to the stable kernel version v5.10.y.
Rework is needed for older stable kernels.

Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
Fixes: 3ed05a987e0f ("blk-zoned: implement ioctls")
Cc: <stable@vger.kernel.org> # 5.10+
---
 block/blk-zoned.c | 30 ++++++++++++++++++++++++++++--
 1 file changed, 28 insertions(+), 2 deletions(-)

Comments

Keith Busch March 8, 2021, 6:30 p.m. UTC | #1
On Mon, Mar 08, 2021 at 12:32:32PM +0900, Shin'ichiro Kawasaki wrote:
> When zone reset ioctl and data read race for a same zone on zoned block
> devices, the data read leaves stale page cache even though the zone
> reset ioctl zero clears all the zone data on the device. To avoid
> non-zero data read from the stale page cache after zone reset, discard
> page cache of reset target zones. In same manner as fallocate, call the
> function truncate_bdev_range() in blkdev_zone_mgmt_ioctl() before and
> after zone reset to ensure the page cache discarded.
> 
> This patch can be applied back to the stable kernel version v5.10.y.
> Rework is needed for older stable kernels.
> 
> Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
> Fixes: 3ed05a987e0f ("blk-zoned: implement ioctls")
> Cc: <stable@vger.kernel.org> # 5.10+

This looks good to me.

Reviewed-by: Keith Busch <kbusch@kernel.org>
Kanchan Joshi March 9, 2021, 5:49 a.m. UTC | #2
On Mon, Mar 8, 2021 at 2:11 PM Shin'ichiro Kawasaki
<shinichiro.kawasaki@wdc.com> wrote:
>
> When zone reset ioctl and data read race for a same zone on zoned block
> devices, the data read leaves stale page cache even though the zone
> reset ioctl zero clears all the zone data on the device. To avoid
> non-zero data read from the stale page cache after zone reset, discard
> page cache of reset target zones. In same manner as fallocate, call the
> function truncate_bdev_range() in blkdev_zone_mgmt_ioctl() before and
> after zone reset to ensure the page cache discarded.
>
> This patch can be applied back to the stable kernel version v5.10.y.
> Rework is needed for older stable kernels.
>
> Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
> Fixes: 3ed05a987e0f ("blk-zoned: implement ioctls")
> Cc: <stable@vger.kernel.org> # 5.10+
> ---
>  block/blk-zoned.c | 30 ++++++++++++++++++++++++++++--
>  1 file changed, 28 insertions(+), 2 deletions(-)
>
> diff --git a/block/blk-zoned.c b/block/blk-zoned.c
> index 833978c02e60..990a36be2927 100644
> --- a/block/blk-zoned.c
> +++ b/block/blk-zoned.c
> @@ -329,6 +329,9 @@ int blkdev_zone_mgmt_ioctl(struct block_device *bdev, fmode_t mode,
>         struct request_queue *q;
>         struct blk_zone_range zrange;
>         enum req_opf op;
> +       sector_t capacity;
> +       loff_t start, end;
> +       int ret;
>
>         if (!argp)
>                 return -EINVAL;
> @@ -349,9 +352,22 @@ int blkdev_zone_mgmt_ioctl(struct block_device *bdev, fmode_t mode,
>         if (copy_from_user(&zrange, argp, sizeof(struct blk_zone_range)))
>                 return -EFAULT;
>
> +       capacity = get_capacity(bdev->bd_disk);
> +       if (zrange.sector + zrange.nr_sectors <= zrange.sector ||
> +           zrange.sector + zrange.nr_sectors > capacity)
> +               /* Out of range */
> +               return -EINVAL;
> +
> +       start = zrange.sector << SECTOR_SHIFT;
> +       end = ((zrange.sector + zrange.nr_sectors) << SECTOR_SHIFT) - 1;

How about doing all this calculation only when it is applicable, i.e.
only for the zone-reset case, and not for the other cases (open/close/finish
zone)?

Also, apart from "out of range" (which is covered here), there are a few
more cases where blkdev_zone_mgmt() may fail (not covered here).
Perhaps the whole pre- and post-truncate part would fit better inside
blkdev_zone_mgmt() itself.
Damien Le Moal March 9, 2021, 11:06 a.m. UTC | #3
On 2021/03/09 14:49, Kanchan Joshi wrote:
> On Mon, Mar 8, 2021 at 2:11 PM Shin'ichiro Kawasaki
> <shinichiro.kawasaki@wdc.com> wrote:
>>
>> When zone reset ioctl and data read race for a same zone on zoned block
>> devices, the data read leaves stale page cache even though the zone
>> reset ioctl zero clears all the zone data on the device. To avoid
>> non-zero data read from the stale page cache after zone reset, discard
>> page cache of reset target zones. In same manner as fallocate, call the
>> function truncate_bdev_range() in blkdev_zone_mgmt_ioctl() before and
>> after zone reset to ensure the page cache discarded.
>>
>> This patch can be applied back to the stable kernel version v5.10.y.
>> Rework is needed for older stable kernels.
>>
>> Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
>> Fixes: 3ed05a987e0f ("blk-zoned: implement ioctls")
>> Cc: <stable@vger.kernel.org> # 5.10+
>> ---
>>  block/blk-zoned.c | 30 ++++++++++++++++++++++++++++--
>>  1 file changed, 28 insertions(+), 2 deletions(-)
>>
>> diff --git a/block/blk-zoned.c b/block/blk-zoned.c
>> index 833978c02e60..990a36be2927 100644
>> --- a/block/blk-zoned.c
>> +++ b/block/blk-zoned.c
>> @@ -329,6 +329,9 @@ int blkdev_zone_mgmt_ioctl(struct block_device *bdev, fmode_t mode,
>>         struct request_queue *q;
>>         struct blk_zone_range zrange;
>>         enum req_opf op;
>> +       sector_t capacity;
>> +       loff_t start, end;
>> +       int ret;
>>
>>         if (!argp)
>>                 return -EINVAL;
>> @@ -349,9 +352,22 @@ int blkdev_zone_mgmt_ioctl(struct block_device *bdev, fmode_t mode,
>>         if (copy_from_user(&zrange, argp, sizeof(struct blk_zone_range)))
>>                 return -EFAULT;
>>
>> +       capacity = get_capacity(bdev->bd_disk);
>> +       if (zrange.sector + zrange.nr_sectors <= zrange.sector ||
>> +           zrange.sector + zrange.nr_sectors > capacity)
>> +               /* Out of range */
>> +               return -EINVAL;
>> +
>> +       start = zrange.sector << SECTOR_SHIFT;
>> +       end = ((zrange.sector + zrange.nr_sectors) << SECTOR_SHIFT) - 1;
> 
> How about doing all this calculation only when it is applicable i.e.
> only for reset-zone case, and not for other cases (open/close/finish
> zone).
> 
> Also apart from "out of range" (which is covered here), there are few
> more cases when blkdev_zone_mgmt() may fail it (not covered here).
> Perhaps the whole pre and post truncate part can fit better inside
> blkdev_zone_mgmt itself.

No, I do not think so. That would add overhead for in-kernel users of zone reset
for no good reason, since these typically take care of cached pages
themselves (e.g. filesystems) and would not trigger page caching through the bdev
inode anyway.
Damien Le Moal March 9, 2021, 11:16 a.m. UTC | #4
On 2021/03/08 12:32, Shin'ichiro Kawasaki wrote:
> When zone reset ioctl and data read race for a same zone on zoned block
> devices, the data read leaves stale page cache even though the zone
> reset ioctl zero clears all the zone data on the device. To avoid
> non-zero data read from the stale page cache after zone reset, discard
> page cache of reset target zones. In same manner as fallocate, call the
> function truncate_bdev_range() in blkdev_zone_mgmt_ioctl() before and
> after zone reset to ensure the page cache discarded.
> 
> This patch can be applied back to the stable kernel version v5.10.y.
> Rework is needed for older stable kernels.
> 
> Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
> Fixes: 3ed05a987e0f ("blk-zoned: implement ioctls")
> Cc: <stable@vger.kernel.org> # 5.10+
> ---
>  block/blk-zoned.c | 30 ++++++++++++++++++++++++++++--
>  1 file changed, 28 insertions(+), 2 deletions(-)
> 
> diff --git a/block/blk-zoned.c b/block/blk-zoned.c
> index 833978c02e60..990a36be2927 100644
> --- a/block/blk-zoned.c
> +++ b/block/blk-zoned.c
> @@ -329,6 +329,9 @@ int blkdev_zone_mgmt_ioctl(struct block_device *bdev, fmode_t mode,
>  	struct request_queue *q;
>  	struct blk_zone_range zrange;
>  	enum req_opf op;
> +	sector_t capacity;
> +	loff_t start, end;
> +	int ret;
>  
>  	if (!argp)
>  		return -EINVAL;
> @@ -349,9 +352,22 @@ int blkdev_zone_mgmt_ioctl(struct block_device *bdev, fmode_t mode,
>  	if (copy_from_user(&zrange, argp, sizeof(struct blk_zone_range)))
>  		return -EFAULT;
>  
> +	capacity = get_capacity(bdev->bd_disk);
> +	if (zrange.sector + zrange.nr_sectors <= zrange.sector ||
> +	    zrange.sector + zrange.nr_sectors > capacity)
> +		/* Out of range */
> +		return -EINVAL;
> +
> +	start = zrange.sector << SECTOR_SHIFT;
> +	end = ((zrange.sector + zrange.nr_sectors) << SECTOR_SHIFT) - 1;

Move these under the BLKRESETZONE case as Kanchan suggested.

> +
>  	switch (cmd) {
>  	case BLKRESETZONE:
>  		op = REQ_OP_ZONE_RESET;
> +		/* Invalidate the page cache, including dirty pages. */
> +		ret = truncate_bdev_range(bdev, mode, start, end);
> +		if (ret)
> +			return ret;
>  		break;
>  	case BLKOPENZONE:
>  		op = REQ_OP_ZONE_OPEN;
> @@ -366,8 +382,18 @@ int blkdev_zone_mgmt_ioctl(struct block_device *bdev, fmode_t mode,
>  		return -ENOTTY;
>  	}
>  
> -	return blkdev_zone_mgmt(bdev, op, zrange.sector, zrange.nr_sectors,
> -				GFP_KERNEL);
> +	ret = blkdev_zone_mgmt(bdev, op, zrange.sector, zrange.nr_sectors,
> +			       GFP_KERNEL);
> +
> +	/*
> +	 * Invalidate the page cache again for zone reset; if someone wandered
> +	 * in and dirtied a page, we just discard it - userspace has no way of
> +	 * knowing whether the write happened before or after reset completing.

I think you can simplify this comment: writes can only be direct for zoned
devices, so concurrent writes would not add any pages to the page cache
during or after the reset. The page cache may be filled again by concurrent
reads, though, and dropping the pages for these is fine.

> +	 */
> +	if (!ret && cmd == BLKRESETZONE)
> +		ret = truncate_bdev_range(bdev, mode, start, end);
> +
> +	return ret;
>  }
>  
>  static inline unsigned long *blk_alloc_zone_bitmap(int node,
> 

With these fixed, looks good to me.

Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com>
Kanchan Joshi March 9, 2021, 11:58 a.m. UTC | #5
On Tue, Mar 9, 2021 at 4:36 PM Damien Le Moal <Damien.LeMoal@wdc.com> wrote:
>
> On 2021/03/09 14:49, Kanchan Joshi wrote:
> > On Mon, Mar 8, 2021 at 2:11 PM Shin'ichiro Kawasaki
> > <shinichiro.kawasaki@wdc.com> wrote:
> >>
> >> When zone reset ioctl and data read race for a same zone on zoned block
> >> devices, the data read leaves stale page cache even though the zone
> >> reset ioctl zero clears all the zone data on the device. To avoid
> >> non-zero data read from the stale page cache after zone reset, discard
> >> page cache of reset target zones. In same manner as fallocate, call the
> >> function truncate_bdev_range() in blkdev_zone_mgmt_ioctl() before and
> >> after zone reset to ensure the page cache discarded.
> >>
> >> This patch can be applied back to the stable kernel version v5.10.y.
> >> Rework is needed for older stable kernels.
> >>
> >> Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
> >> Fixes: 3ed05a987e0f ("blk-zoned: implement ioctls")
> >> Cc: <stable@vger.kernel.org> # 5.10+
> >> ---
> >>  block/blk-zoned.c | 30 ++++++++++++++++++++++++++++--
> >>  1 file changed, 28 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/block/blk-zoned.c b/block/blk-zoned.c
> >> index 833978c02e60..990a36be2927 100644
> >> --- a/block/blk-zoned.c
> >> +++ b/block/blk-zoned.c
> >> @@ -329,6 +329,9 @@ int blkdev_zone_mgmt_ioctl(struct block_device *bdev, fmode_t mode,
> >>         struct request_queue *q;
> >>         struct blk_zone_range zrange;
> >>         enum req_opf op;
> >> +       sector_t capacity;
> >> +       loff_t start, end;
> >> +       int ret;
> >>
> >>         if (!argp)
> >>                 return -EINVAL;
> >> @@ -349,9 +352,22 @@ int blkdev_zone_mgmt_ioctl(struct block_device *bdev, fmode_t mode,
> >>         if (copy_from_user(&zrange, argp, sizeof(struct blk_zone_range)))
> >>                 return -EFAULT;
> >>
> >> +       capacity = get_capacity(bdev->bd_disk);
> >> +       if (zrange.sector + zrange.nr_sectors <= zrange.sector ||
> >> +           zrange.sector + zrange.nr_sectors > capacity)
> >> +               /* Out of range */
> >> +               return -EINVAL;
> >> +
> >> +       start = zrange.sector << SECTOR_SHIFT;
> >> +       end = ((zrange.sector + zrange.nr_sectors) << SECTOR_SHIFT) - 1;
> >
> > How about doing all this calculation only when it is applicable i.e.
> > only for reset-zone case, and not for other cases (open/close/finish
> > zone).
> >
> > Also apart from "out of range" (which is covered here), there are few
> > more cases when blkdev_zone_mgmt() may fail it (not covered here).
> > Perhaps the whole pre and post truncate part can fit better inside
> > blkdev_zone_mgmt itself.
>
> No, I do not think so. That would add overhead for in-kernel users of zone reset
> for no good reason since these would typically take care of cached pages
> themselves (e.g. FS) and would not trigger page caching using the bdev inode anyway.

Agreed. In that case, moving the pre-truncate processing from the
common path to under the BLKRESETZONE case will suffice.
With that refactoring in place, it looks good.

Reviewed-by: Kanchan Joshi <joshi.k@samsung.com>
Shin'ichiro Kawasaki March 10, 2021, 5:35 a.m. UTC | #6
On Mar 09, 2021 / 17:28, Kanchan Joshi wrote:
> On Tue, Mar 9, 2021 at 4:36 PM Damien Le Moal <Damien.LeMoal@wdc.com> wrote:
> >
> > On 2021/03/09 14:49, Kanchan Joshi wrote:
> > > On Mon, Mar 8, 2021 at 2:11 PM Shin'ichiro Kawasaki
> > > <shinichiro.kawasaki@wdc.com> wrote:
> > >>
> > >> When zone reset ioctl and data read race for a same zone on zoned block
> > >> devices, the data read leaves stale page cache even though the zone
> > >> reset ioctl zero clears all the zone data on the device. To avoid
> > >> non-zero data read from the stale page cache after zone reset, discard
> > >> page cache of reset target zones. In same manner as fallocate, call the
> > >> function truncate_bdev_range() in blkdev_zone_mgmt_ioctl() before and
> > >> after zone reset to ensure the page cache discarded.
> > >>
> > >> This patch can be applied back to the stable kernel version v5.10.y.
> > >> Rework is needed for older stable kernels.
> > >>
> > >> Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
> > >> Fixes: 3ed05a987e0f ("blk-zoned: implement ioctls")
> > >> Cc: <stable@vger.kernel.org> # 5.10+
> > >> ---
> > >>  block/blk-zoned.c | 30 ++++++++++++++++++++++++++++--
> > >>  1 file changed, 28 insertions(+), 2 deletions(-)
> > >>
> > >> diff --git a/block/blk-zoned.c b/block/blk-zoned.c
> > >> index 833978c02e60..990a36be2927 100644
> > >> --- a/block/blk-zoned.c
> > >> +++ b/block/blk-zoned.c
> > >> @@ -329,6 +329,9 @@ int blkdev_zone_mgmt_ioctl(struct block_device *bdev, fmode_t mode,
> > >>         struct request_queue *q;
> > >>         struct blk_zone_range zrange;
> > >>         enum req_opf op;
> > >> +       sector_t capacity;
> > >> +       loff_t start, end;
> > >> +       int ret;
> > >>
> > >>         if (!argp)
> > >>                 return -EINVAL;
> > >> @@ -349,9 +352,22 @@ int blkdev_zone_mgmt_ioctl(struct block_device *bdev, fmode_t mode,
> > >>         if (copy_from_user(&zrange, argp, sizeof(struct blk_zone_range)))
> > >>                 return -EFAULT;
> > >>
> > >> +       capacity = get_capacity(bdev->bd_disk);
> > >> +       if (zrange.sector + zrange.nr_sectors <= zrange.sector ||
> > >> +           zrange.sector + zrange.nr_sectors > capacity)
> > >> +               /* Out of range */
> > >> +               return -EINVAL;
> > >> +
> > >> +       start = zrange.sector << SECTOR_SHIFT;
> > >> +       end = ((zrange.sector + zrange.nr_sectors) << SECTOR_SHIFT) - 1;
> > >
> > > How about doing all this calculation only when it is applicable i.e.
> > > only for reset-zone case, and not for other cases (open/close/finish
> > > zone).
> > >
> > > Also apart from "out of range" (which is covered here), there are few
> > > more cases when blkdev_zone_mgmt() may fail it (not covered here).
> > > Perhaps the whole pre and post truncate part can fit better inside
> > > blkdev_zone_mgmt itself.
> >
> > No, I do not think so. That would add overhead for in-kernel users of zone reset
> > for no good reason since these would typically take care of cached pages
> > themselves (e.g. FS) and would not trigger page caching using the bdev inode anyway.
> 
> Agreed. In that case moving the pre-truncate processing from
> common-path to under BLKRESETZONE will suffice.
> With that refactoring in place, it looks good.
> 
> Reviewed-by: Kanchan Joshi <joshi.k@samsung.com>

Thanks. I will reflect your comment and the other comments by Damien in v2.

Patch

diff --git a/block/blk-zoned.c b/block/blk-zoned.c
index 833978c02e60..990a36be2927 100644
--- a/block/blk-zoned.c
+++ b/block/blk-zoned.c
@@ -329,6 +329,9 @@  int blkdev_zone_mgmt_ioctl(struct block_device *bdev, fmode_t mode,
 	struct request_queue *q;
 	struct blk_zone_range zrange;
 	enum req_opf op;
+	sector_t capacity;
+	loff_t start, end;
+	int ret;
 
 	if (!argp)
 		return -EINVAL;
@@ -349,9 +352,22 @@  int blkdev_zone_mgmt_ioctl(struct block_device *bdev, fmode_t mode,
 	if (copy_from_user(&zrange, argp, sizeof(struct blk_zone_range)))
 		return -EFAULT;
 
+	capacity = get_capacity(bdev->bd_disk);
+	if (zrange.sector + zrange.nr_sectors <= zrange.sector ||
+	    zrange.sector + zrange.nr_sectors > capacity)
+		/* Out of range */
+		return -EINVAL;
+
+	start = zrange.sector << SECTOR_SHIFT;
+	end = ((zrange.sector + zrange.nr_sectors) << SECTOR_SHIFT) - 1;
+
 	switch (cmd) {
 	case BLKRESETZONE:
 		op = REQ_OP_ZONE_RESET;
+		/* Invalidate the page cache, including dirty pages. */
+		ret = truncate_bdev_range(bdev, mode, start, end);
+		if (ret)
+			return ret;
 		break;
 	case BLKOPENZONE:
 		op = REQ_OP_ZONE_OPEN;
@@ -366,8 +382,18 @@  int blkdev_zone_mgmt_ioctl(struct block_device *bdev, fmode_t mode,
 		return -ENOTTY;
 	}
 
-	return blkdev_zone_mgmt(bdev, op, zrange.sector, zrange.nr_sectors,
-				GFP_KERNEL);
+	ret = blkdev_zone_mgmt(bdev, op, zrange.sector, zrange.nr_sectors,
+			       GFP_KERNEL);
+
+	/*
+	 * Invalidate the page cache again for zone reset; if someone wandered
+	 * in and dirtied a page, we just discard it - userspace has no way of
+	 * knowing whether the write happened before or after reset completing.
+	 */
+	if (!ret && cmd == BLKRESETZONE)
+		ret = truncate_bdev_range(bdev, mode, start, end);
+
+	return ret;
 }
 
 static inline unsigned long *blk_alloc_zone_bitmap(int node,