[RFC,2/2] dm: don't try to split REQ_OP_ZONE_APPEND bios

Message ID 20200619065905.22228-3-johannes.thumshirn@wdc.com
State New
Series
  • Two fixes for device-mapper with REQ_OP_ZONE_APPEND

Commit Message

Johannes Thumshirn June 19, 2020, 6:59 a.m. UTC
REQ_OP_ZONE_APPEND bios cannot be split, so return -EIO if the bio does not
fit into a single I/O.

Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
---
 drivers/md/dm.c | 3 +++
 1 file changed, 3 insertions(+)

Comments

Damien Le Moal June 19, 2020, 7:54 a.m. UTC | #1
On 2020/06/19 15:59, Johannes Thumshirn wrote:
> REQ_OP_ZONE_APPEND bios cannot be split, so return -EIO if the bio does not
> fit into a single I/O.
> 
> Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
> ---
>  drivers/md/dm.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/drivers/md/dm.c b/drivers/md/dm.c
> index 058c34abe9d1..c720a7e3269a 100644
> --- a/drivers/md/dm.c
> +++ b/drivers/md/dm.c
> @@ -1609,6 +1609,9 @@ static int __split_and_process_non_flush(struct clone_info *ci)
>  
>  	len = min_t(sector_t, max_io_len(ci->sector, ti), ci->sector_count);
>  
> +	if (bio_op(ci->bio) == REQ_OP_ZONE_APPEND && len < ci->sector_count)
> +		return -EIO;
> +
>  	r = __clone_and_map_data_bio(ci, ti, ci->sector, &len);
>  	if (r < 0)
>  		return r;
> 

I think this is OK. The stacked max_zone_append_sectors limit should have
prevented this from happening in the first place, but better safe than sorry.

Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com>
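
For context, the "stacked max_zone_append_sectors limit" here comes from the way
the block layer combines queue limits when a device-mapper table is built on top
of zoned devices: the stacked value can never exceed what any underlying device
accepts, so a zone append bio sized against the DM device also fits every member
device. The stand-alone program below is only a toy model of that min() rule;
the function and variable names are invented for illustration and it ignores the
fact that the real limit also uses 0 to mean "zone append not supported".

#include <limits.h>
#include <stdio.h>

/*
 * Toy model of deriving a stacked (device-mapper) zone-append limit from the
 * limits of the underlying zoned devices. Illustrative names only; this is
 * not the kernel's queue_limits stacking code.
 */
static unsigned int stack_zone_append_limit(unsigned int top,
					    unsigned int bottom)
{
	/* The stacked value can never exceed what any bottom device accepts. */
	return bottom < top ? bottom : top;
}

int main(void)
{
	unsigned int dm_limit = UINT_MAX;		/* "unlimited" until stacked */
	unsigned int dev_limits[] = { 1024, 512 };	/* per-device limits, in sectors */
	unsigned int i;

	for (i = 0; i < sizeof(dev_limits) / sizeof(dev_limits[0]); i++)
		dm_limit = stack_zone_append_limit(dm_limit, dev_limits[i]);

	/*
	 * The DM device ends up advertising 512 sectors, so any zone append
	 * bio built against it already fits both underlying devices and
	 * never needs to be split.
	 */
	printf("stacked max_zone_append_sectors = %u\n", dm_limit);
	return 0;
}
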
Mike Snitzer June 19, 2020, 4:26 p.m. UTC | #2
On Fri, Jun 19 2020 at  3:54am -0400,
Damien Le Moal <Damien.LeMoal@wdc.com> wrote:

> On 2020/06/19 15:59, Johannes Thumshirn wrote:
> > REQ_OP_ZONE_APPEND bios cannot be split, so return -EIO if the bio does not
> > fit into a single I/O.
> > 
> > Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
> > ---
> >  drivers/md/dm.c | 3 +++
> >  1 file changed, 3 insertions(+)
> > 
> > diff --git a/drivers/md/dm.c b/drivers/md/dm.c
> > index 058c34abe9d1..c720a7e3269a 100644
> > --- a/drivers/md/dm.c
> > +++ b/drivers/md/dm.c
> > @@ -1609,6 +1609,9 @@ static int __split_and_process_non_flush(struct clone_info *ci)
> >  
> >  	len = min_t(sector_t, max_io_len(ci->sector, ti), ci->sector_count);
> >  
> > +	if (bio_op(ci->bio) == REQ_OP_ZONE_APPEND && len < ci->sector_count)
> > +		return -EIO;
> > +
> >  	r = __clone_and_map_data_bio(ci, ti, ci->sector, &len);
> >  	if (r < 0)
> >  		return r;
> > 
> 
> I think this is OK. The stacked max_zone_append_sectors limit should have
> prevented this from happening in the first place, but better safe than sorry.
> 
> Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com>

If the stacked max_zone_append_sectors limit should prevent it, then I'd
rather not sprinkle more zoned-specific checks in DM core.

Thanks,
Mike
Damien Le Moal June 22, 2020, 12:27 a.m. UTC | #3
On 2020/06/20 1:27, Mike Snitzer wrote:
> On Fri, Jun 19 2020 at  3:54am -0400,
> Damien Le Moal <Damien.LeMoal@wdc.com> wrote:
> 
>> On 2020/06/19 15:59, Johannes Thumshirn wrote:
>>> REQ_OP_ZONE_APPEND bios cannot be split, so return -EIO if the bio does not
>>> fit into a single I/O.
>>>
>>> Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
>>> ---
>>>  drivers/md/dm.c | 3 +++
>>>  1 file changed, 3 insertions(+)
>>>
>>> diff --git a/drivers/md/dm.c b/drivers/md/dm.c
>>> index 058c34abe9d1..c720a7e3269a 100644
>>> --- a/drivers/md/dm.c
>>> +++ b/drivers/md/dm.c
>>> @@ -1609,6 +1609,9 @@ static int __split_and_process_non_flush(struct clone_info *ci)
>>>  
>>>  	len = min_t(sector_t, max_io_len(ci->sector, ti), ci->sector_count);
>>>  
>>> +	if (bio_op(ci->bio) == REQ_OP_ZONE_APPEND && len < ci->sector_count)
>>> +		return -EIO;
>>> +
>>>  	r = __clone_and_map_data_bio(ci, ti, ci->sector, &len);
>>>  	if (r < 0)
>>>  		return r;
>>>
>>
>> I think this is OK. The stacked max_zone_append_sectors limit should have
>> prevented this from happening in the first place, but better safe than sorry.
>>
>> Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com>
> 
> If the stacked max_zone_append_sectors limit should prevent it, then I'd
> rather not sprinkle more zoned-specific checks in DM core.

Mike,

Just to be extra sure, I checked this again. Since for zoned block devices the
mapping of a target must be zone aligned and a zone append command is always
fully contained within a zone, we do not need this check. The stacked limits and
submit_bio() code will fail a zone append command that is too large or that
spans zones before we get here.
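
A rough picture of that submission-time validation: a REQ_OP_ZONE_APPEND bio is
addressed to the start of the zone it targets, and the block layer fails it
before it ever reaches the target's map function if it is larger than the zone
or larger than the queue's max_zone_append_sectors. The snippet below is a
simplified, self-contained model of those checks; the struct and function names
are made up for illustration, it is not the actual block layer code.

#include <stdbool.h>
#include <stdio.h>

typedef unsigned long long sector_t;

/* Invented, simplified view of a zoned queue; not the kernel's structures. */
struct zoned_queue {
	sector_t zone_sectors;			/* zone size, power of two */
	unsigned int max_zone_append_sectors;	/* per-command size limit */
};

/*
 * A zone append is acceptable only when it starts at a zone boundary, stays
 * inside that zone, and does not exceed the advertised per-command limit.
 * With those conditions guaranteed at submission, no later split is needed.
 */
static bool zone_append_is_valid(const struct zoned_queue *q,
				 sector_t sector, unsigned int nr_sectors)
{
	if (sector & (q->zone_sectors - 1))		/* not a zone start */
		return false;
	if (nr_sectors > q->zone_sectors)		/* would cross a zone */
		return false;
	if (nr_sectors > q->max_zone_append_sectors)	/* over the limit */
		return false;
	return true;
}

int main(void)
{
	/* 256 MiB zones (in 512-byte sectors), 512 KiB max append. */
	struct zoned_queue q = { .zone_sectors = 524288,
				 .max_zone_append_sectors = 1024 };

	printf("%d\n", zone_append_is_valid(&q, 0, 1024));	/* 1: fits */
	printf("%d\n", zone_append_is_valid(&q, 0, 2048));	/* 0: too big */
	printf("%d\n", zone_append_is_valid(&q, 100, 8));	/* 0: unaligned */
	return 0;
}
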

That is of course assuming that the target does not expose zones that are mapped
using multiple chunks from different devices. There is currently no target doing
that, so this is OK. We can safely drop this patch.
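
For the targets that exist today, the zone-aligned mapping requirement is what
makes this assumption hold: because a mapped region must start and end on zone
boundaries, a zone (and therefore a zone append) never straddles two underlying
devices. A minimal sketch of what such an alignment check looks like, with
made-up names rather than the actual dm-table helpers:

#include <stdbool.h>
#include <stdio.h>

typedef unsigned long long sector_t;

/*
 * Illustrative only: a mapping [start, start + len) onto a zoned device is
 * acceptable when both edges fall on zone boundaries, which guarantees that
 * no zone is split across two underlying devices.
 */
static bool zoned_mapping_is_aligned(sector_t start, sector_t len,
				     sector_t zone_sectors)
{
	return !(start & (zone_sectors - 1)) && !(len & (zone_sectors - 1));
}

int main(void)
{
	const sector_t zone_sectors = 524288;	/* 256 MiB zones */

	printf("%d\n", zoned_mapping_is_aligned(0, 4 * zone_sectors,
						zone_sectors));		/* 1 */
	printf("%d\n", zoned_mapping_is_aligned(zone_sectors / 2, zone_sectors,
						zone_sectors));		/* 0 */
	return 0;
}
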

Thanks.

> 
> Thanks,
> Mike
> 
>
Johannes Thumshirn June 22, 2020, 7:51 a.m. UTC | #4
On 22/06/2020 02:27, Damien Le Moal wrote:
[...]
> Just to be extra sure, I checked this again. Since for zoned block devices the
> mapping of a target must be zone aligned and a zone append command is always
> fully contained within a zone, we do not need this check. The stacked limits and
> submit_bio() code will fail a zone append command that is too large or that
> spans zones before we get here.
> 
> That is of course assuming that the target does not expose zones that are mapped
> using multiple chunks from different devices. There is currently no target doing
> that, so this is OK. We can safely drop this patch.

Yeah, I think we can drop that one.

Patch

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 058c34abe9d1..c720a7e3269a 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1609,6 +1609,9 @@ static int __split_and_process_non_flush(struct clone_info *ci)
 
 	len = min_t(sector_t, max_io_len(ci->sector, ti), ci->sector_count);
 
+	if (bio_op(ci->bio) == REQ_OP_ZONE_APPEND && len < ci->sector_count)
+		return -EIO;
+
 	r = __clone_and_map_data_bio(ci, ti, ci->sector, &len);
 	if (r < 0)
 		return r;