
block: fix 32 bit overflow in __blkdev_issue_discard()

Message ID 20181113214337.20581-1-david@fromorbit.com (mailing list archive)
State Not Applicable
Series block: fix 32 bit overflow in __blkdev_issue_discard()

Commit Message

Dave Chinner Nov. 13, 2018, 9:43 p.m. UTC
From: Dave Chinner <dchinner@redhat.com>

A discard cleanup merged into 4.20-rc2 causes fstests xfs/259 to
fall into an endless loop in the discard code. The test is creating
a device that is exactly 2^32 sectors in size to test mkfs boundary
conditions around the 32 bit sector overflow region.

mkfs issues a discard for the entire device size by default, and
hence this throws a sector count of 2^32 into
blkdev_issue_discard(). It takes the number of sectors to discard as
a sector_t - a 64 bit value.

The commit ba5d73851e71 ("block: cleanup __blkdev_issue_discard")
takes this sector count and casts it to a 32 bit value before
comparing it against the maximum allowed discard size the device
has. This truncates away the upper 32 bits, and so if the lower 32
bits of the sector count are zero, it starts issuing discards of
length 0. This causes the code to fall into an endless loop, issuing
zero-length discards over and over again on the same sector.

Fixes: ba5d73851e71 ("block: cleanup __blkdev_issue_discard")
Signed-off-by: Dave Chinner <dchinner@redhat.com>
---
 block/blk-lib.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

Comments

Darrick J. Wong Nov. 14, 2018, 2:36 a.m. UTC | #1
On Wed, Nov 14, 2018 at 08:43:37AM +1100, Dave Chinner wrote:
> From: Dave Chinner <dchinner@redhat.com>
> 
> A discard cleanup merged into 4.20-rc2 causes fstests xfs/259 to
> fall into an endless loop in the discard code. The test is creating
> a device that is exactly 2^32 sectors in size to test mkfs boundary
> conditions around the 32 bit sector overflow region.
> 
> mkfs issues a discard for the entire device size by default, and
> hence this throws a sector count of 2^32 into
> blkdev_issue_discard(). It takes the number of sectors to discard as
> a sector_t - a 64 bit value.
> 
> The commit ba5d73851e71 ("block: cleanup __blkdev_issue_discard")
> takes this sector count and casts it to a 32 bit value before
> comparing it against the maximum allowed discard size the device
> has. This truncates away the upper 32 bits, and so if the lower 32
> bits of the sector count are zero, it starts issuing discards of
> length 0. This causes the code to fall into an endless loop, issuing
> zero-length discards over and over again on the same sector.
> 
> Fixes: ba5d73851e71 ("block: cleanup __blkdev_issue_discard")
> Signed-off-by: Dave Chinner <dchinner@redhat.com>

Fixes the regression for me too, so...

Tested-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>

--D

> ---
>  block/blk-lib.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/block/blk-lib.c b/block/blk-lib.c
> index e8b3bb9bf375..144e156ed341 100644
> --- a/block/blk-lib.c
> +++ b/block/blk-lib.c
> @@ -55,9 +55,12 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
>  		return -EINVAL;
>  
>  	while (nr_sects) {
> -		unsigned int req_sects = min_t(unsigned int, nr_sects,
> +		sector_t req_sects = min_t(sector_t, nr_sects,
>  				bio_allowed_max_sectors(q));
>  
> +		WARN_ON_ONCE(req_sects == 0);
> +		WARN_ON_ONCE((req_sects << 9) > UINT_MAX);
> +
>  		bio = blk_next_bio(bio, 0, gfp_mask);
>  		bio->bi_iter.bi_sector = sector;
>  		bio_set_dev(bio, bdev);
> -- 
> 2.19.1
>
Ming Lei Nov. 14, 2018, 2:53 a.m. UTC | #2
On Wed, Nov 14, 2018 at 5:44 AM Dave Chinner <david@fromorbit.com> wrote:
>
> From: Dave Chinner <dchinner@redhat.com>
>
> A discard cleanup merged into 4.20-rc2 causes fstests xfs/259 to
> fall into an endless loop in the discard code. The test is creating
> a device that is exactly 2^32 sectors in size to test mkfs boundary
> conditions around the 32 bit sector overflow region.
>
> mkfs issues a discard for the entire device size by default, and
> hence this throws a sector count of 2^32 into
> blkdev_issue_discard(). It takes the number of sectors to discard as
> a sector_t - a 64 bit value.
>
> The commit ba5d73851e71 ("block: cleanup __blkdev_issue_discard")
> takes this sector count and casts it to a 32 bit value before
> comparing it against the maximum allowed discard size the device
> has. This truncates away the upper 32 bits, and so if the lower 32
> bits of the sector count are zero, it starts issuing discards of
> length 0. This causes the code to fall into an endless loop, issuing
> zero-length discards over and over again on the same sector.
>
> Fixes: ba5d73851e71 ("block: cleanup __blkdev_issue_discard")
> Signed-off-by: Dave Chinner <dchinner@redhat.com>
> ---
>  block/blk-lib.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/block/blk-lib.c b/block/blk-lib.c
> index e8b3bb9bf375..144e156ed341 100644
> --- a/block/blk-lib.c
> +++ b/block/blk-lib.c
> @@ -55,9 +55,12 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
>                 return -EINVAL;
>
>         while (nr_sects) {
> -               unsigned int req_sects = min_t(unsigned int, nr_sects,
> +               sector_t req_sects = min_t(sector_t, nr_sects,
>                                 bio_allowed_max_sectors(q));

bio_allowed_max_sectors(q) is always < UINT_MAX, and 'sector_t' is only
required during the comparison, so another, simpler fix might be the
following; could you test whether it works?

diff --git a/block/blk-lib.c b/block/blk-lib.c
index e8b3bb9bf375..6ef44f99e83f 100644
--- a/block/blk-lib.c
+++ b/block/blk-lib.c
@@ -55,7 +55,7 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
                return -EINVAL;

        while (nr_sects) {
-               unsigned int req_sects = min_t(unsigned int, nr_sects,
+               unsigned int req_sects = min_t(sector_t, nr_sects,
                                bio_allowed_max_sectors(q));
>
> +               WARN_ON_ONCE(req_sects == 0);

The above line isn't necessary given 'nr_sects' can't be zero.


Thanks,
Ming Lei
Dave Chinner Nov. 14, 2018, 8:08 a.m. UTC | #3
On Wed, Nov 14, 2018 at 10:53:11AM +0800, Ming Lei wrote:
> On Wed, Nov 14, 2018 at 5:44 AM Dave Chinner <david@fromorbit.com> wrote:
> >
> > From: Dave Chinner <dchinner@redhat.com>
> >
> > A discard cleanup merged into 4.20-rc2 causes fstests xfs/259 to
> > fall into an endless loop in the discard code. The test is creating
> > a device that is exactly 2^32 sectors in size to test mkfs boundary
> > conditions around the 32 bit sector overflow region.
> >
> > mkfs issues a discard for the entire device size by default, and
> > hence this throws a sector count of 2^32 into
> > blkdev_issue_discard(). It takes the number of sectors to discard as
> > a sector_t - a 64 bit value.
> >
> > The commit ba5d73851e71 ("block: cleanup __blkdev_issue_discard")
> > takes this sector count and casts it to a 32 bit value before
> > comparing it against the maximum allowed discard size the device
> > has. This truncates away the upper 32 bits, and so if the lower 32
> > bits of the sector count are zero, it starts issuing discards of
> > length 0. This causes the code to fall into an endless loop, issuing
> > zero-length discards over and over again on the same sector.
> >
> > Fixes: ba5d73851e71 ("block: cleanup __blkdev_issue_discard")
> > Signed-off-by: Dave Chinner <dchinner@redhat.com>
> > ---
> >  block/blk-lib.c | 5 ++++-
> >  1 file changed, 4 insertions(+), 1 deletion(-)
> >
> > diff --git a/block/blk-lib.c b/block/blk-lib.c
> > index e8b3bb9bf375..144e156ed341 100644
> > --- a/block/blk-lib.c
> > +++ b/block/blk-lib.c
> > @@ -55,9 +55,12 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
> >                 return -EINVAL;
> >
> >         while (nr_sects) {
> > -               unsigned int req_sects = min_t(unsigned int, nr_sects,
> > +               sector_t req_sects = min_t(sector_t, nr_sects,
> >                                 bio_allowed_max_sectors(q));
> 
> bio_allowed_max_sectors(q) is always < UINT_MAX, and 'sector_t' is only
> required during the comparison, so another simpler fix might be the following,
> could you test if it is workable?
>
> diff --git a/block/blk-lib.c b/block/blk-lib.c
> index e8b3bb9bf375..6ef44f99e83f 100644
> --- a/block/blk-lib.c
> +++ b/block/blk-lib.c
> @@ -55,7 +55,7 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
>                 return -EINVAL;
> 
>         while (nr_sects) {
> -               unsigned int req_sects = min_t(unsigned int, nr_sects,
> +               unsigned int req_sects = min_t(sector_t, nr_sects,
>                                 bio_allowed_max_sectors(q));

Rearrange the deck chairs all you like, just make sure you fix your
regression test suite to exercise obvious boundary conditions like
this so the next cleanup doesn't break the code again.

> >
> > +               WARN_ON_ONCE(req_sects == 0);
> 
> The above line isn't necessary given 'nr_sects' can't be zero.

Except it was 0 and it caused the bug I had to fix. So it should
have a warning on it.

Cheers,

Dave.
Ming Lei Nov. 14, 2018, 8:15 a.m. UTC | #4
On Wed, Nov 14, 2018 at 4:09 PM Dave Chinner <david@fromorbit.com> wrote:
>
> On Wed, Nov 14, 2018 at 10:53:11AM +0800, Ming Lei wrote:
> > On Wed, Nov 14, 2018 at 5:44 AM Dave Chinner <david@fromorbit.com> wrote:
> > >
> > > From: Dave Chinner <dchinner@redhat.com>
> > >
> > > A discard cleanup merged into 4.20-rc2 causes fstests xfs/259 to
> > > fall into an endless loop in the discard code. The test is creating
> > > a device that is exactly 2^32 sectors in size to test mkfs boundary
> > > conditions around the 32 bit sector overflow region.
> > >
> > > mkfs issues a discard for the entire device size by default, and
> > > hence this throws a sector count of 2^32 into
> > > blkdev_issue_discard(). It takes the number of sectors to discard as
> > > a sector_t - a 64 bit value.
> > >
> > > The commit ba5d73851e71 ("block: cleanup __blkdev_issue_discard")
> > > takes this sector count and casts it to a 32 bit value before
> > > comparing it against the maximum allowed discard size the device
> > > has. This truncates away the upper 32 bits, and so if the lower 32
> > > bits of the sector count are zero, it starts issuing discards of
> > > length 0. This causes the code to fall into an endless loop, issuing
> > > zero-length discards over and over again on the same sector.
> > >
> > > Fixes: ba5d73851e71 ("block: cleanup __blkdev_issue_discard")
> > > Signed-off-by: Dave Chinner <dchinner@redhat.com>
> > > ---
> > >  block/blk-lib.c | 5 ++++-
> > >  1 file changed, 4 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/block/blk-lib.c b/block/blk-lib.c
> > > index e8b3bb9bf375..144e156ed341 100644
> > > --- a/block/blk-lib.c
> > > +++ b/block/blk-lib.c
> > > @@ -55,9 +55,12 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
> > >                 return -EINVAL;
> > >
> > >         while (nr_sects) {
> > > -               unsigned int req_sects = min_t(unsigned int, nr_sects,
> > > +               sector_t req_sects = min_t(sector_t, nr_sects,
> > >                                 bio_allowed_max_sectors(q));
> >
> > bio_allowed_max_sectors(q) is always < UINT_MAX, and 'sector_t' is only
> > required during the comparison, so another simpler fix might be the following,
> > could you test if it is workable?
> >
> > diff --git a/block/blk-lib.c b/block/blk-lib.c
> > index e8b3bb9bf375..6ef44f99e83f 100644
> > --- a/block/blk-lib.c
> > +++ b/block/blk-lib.c
> > @@ -55,7 +55,7 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
> >                 return -EINVAL;
> >
> >         while (nr_sects) {
> > -               unsigned int req_sects = min_t(unsigned int, nr_sects,
> > +               unsigned int req_sects = min_t(sector_t, nr_sects,
> >                                 bio_allowed_max_sectors(q));
>
> Rearrange the deck chairs all you like, just make sure you fix your
> regression test suite to exercise obvious boundary conditions like
> this so the next cleanup doesn't break the code again.

Good point, we may add a comment about the overflow story.

>
> > >
> > > +               WARN_ON_ONCE(req_sects == 0);
> >
> > The above line isn't necessary given 'nr_sects' can't be zero.
>
> Except it was 0 and it caused the bug I had to fix. So it should
> have a warning on it.

Obviously, it can't be zero unless the CPU is broken, :-)

Thanks,
Ming Lei
Jens Axboe Nov. 14, 2018, 3:18 p.m. UTC | #5
On 11/13/18 2:43 PM, Dave Chinner wrote:
> From: Dave Chinner <dchinner@redhat.com>
> 
> A discard cleanup merged into 4.20-rc2 causes fstests xfs/259 to
> fall into an endless loop in the discard code. The test is creating
> a device that is exactly 2^32 sectors in size to test mkfs boundary
> conditions around the 32 bit sector overflow region.
> 
> mkfs issues a discard for the entire device size by default, and
> hence this throws a sector count of 2^32 into
> blkdev_issue_discard(). It takes the number of sectors to discard as
> a sector_t - a 64 bit value.
> 
> The commit ba5d73851e71 ("block: cleanup __blkdev_issue_discard")
> takes this sector count and casts it to a 32 bit value before
> comparing it against the maximum allowed discard size the device
> has. This truncates away the upper 32 bits, and so if the lower 32
> bits of the sector count are zero, it starts issuing discards of
> length 0. This causes the code to fall into an endless loop, issuing
> zero-length discards over and over again on the same sector.

Applied, thanks. Ming, can you please add a blktests test for
this case? This is the 2nd time it's been broken.
Ming Lei Nov. 15, 2018, 1:06 a.m. UTC | #6
On Wed, Nov 14, 2018 at 08:18:24AM -0700, Jens Axboe wrote:
> On 11/13/18 2:43 PM, Dave Chinner wrote:
> > From: Dave Chinner <dchinner@redhat.com>
> > 
> > A discard cleanup merged into 4.20-rc2 causes fstests xfs/259 to
> > fall into an endless loop in the discard code. The test is creating
> > a device that is exactly 2^32 sectors in size to test mkfs boundary
> > conditions around the 32 bit sector overflow region.
> > 
> > mkfs issues a discard for the entire device size by default, and
> > hence this throws a sector count of 2^32 into
> > blkdev_issue_discard(). It takes the number of sectors to discard as
> > a sector_t - a 64 bit value.
> > 
> > The commit ba5d73851e71 ("block: cleanup __blkdev_issue_discard")
> > takes this sector count and casts it to a 32 bit value before
> > comapring it against the maximum allowed discard size the device
> > has. This truncates away the upper 32 bits, and so if the lower 32
> > bits of the sector count is zero, it starts issuing discards of
> > length 0. This causes the code to fall into an endless loop, issuing
> > a zero length discards over and over again on the same sector.
> 
> Applied, thanks. Ming, can you please add a blktests test for
> this case? This is the 2nd time it's been broken.

OK, I will add zram discard test in blktests, which should cover the
1st report. For the xfs/259, I need to investigate if it is easy to
do in blktests.

Thanks,
Ming
Dave Chinner Nov. 15, 2018, 1:22 a.m. UTC | #7
On Thu, Nov 15, 2018 at 09:06:52AM +0800, Ming Lei wrote:
> On Wed, Nov 14, 2018 at 08:18:24AM -0700, Jens Axboe wrote:
> > On 11/13/18 2:43 PM, Dave Chinner wrote:
> > > From: Dave Chinner <dchinner@redhat.com>
> > > 
> > > A discard cleanup merged into 4.20-rc2 causes fstests xfs/259 to
> > > fall into an endless loop in the discard code. The test is creating
> > > a device that is exactly 2^32 sectors in size to test mkfs boundary
> > > conditions around the 32 bit sector overflow region.
> > > 
> > > mkfs issues a discard for the entire device size by default, and
> > > hence this throws a sector count of 2^32 into
> > > blkdev_issue_discard(). It takes the number of sectors to discard as
> > > a sector_t - a 64 bit value.
> > > 
> > > The commit ba5d73851e71 ("block: cleanup __blkdev_issue_discard")
> > > takes this sector count and casts it to a 32 bit value before
> > > > comparing it against the maximum allowed discard size the device
> > > > has. This truncates away the upper 32 bits, and so if the lower 32
> > > > bits of the sector count are zero, it starts issuing discards of
> > > > length 0. This causes the code to fall into an endless loop, issuing
> > > > zero-length discards over and over again on the same sector.
> > 
> > Applied, thanks. Ming, can you please add a blktests test for
> > this case? This is the 2nd time it's been broken.
> 
> OK, I will add zram discard test in blktests, which should cover the
> 1st report. For the xfs/259, I need to investigate if it is easy to
> do in blktests.

Just write a test that creates block devices of 2^32 + (-1,0,1)
sectors and runs a discard across the entire device. That's all that
xfs/259 is doing - exercising mkfs on 2TB, 4TB and 16TB boundaries.
i.e. the boundaries where sectors and page cache indexes (on 4k page
size systems) overflow 32 bit int and unsigned int sizes. mkfs
issues a discard for the entire device, so it's testing that as
well...

You need to write tests that exercise write_same, write_zeros and
discard operations around these boundaries, because they all take
a 64 bit sector count and stuff them into 32 bit size fields in
the bio that is being submitted.

Cheers,

Dave.
Jens Axboe Nov. 15, 2018, 1:51 a.m. UTC | #8
On 11/14/18 6:06 PM, Ming Lei wrote:
> On Wed, Nov 14, 2018 at 08:18:24AM -0700, Jens Axboe wrote:
>> On 11/13/18 2:43 PM, Dave Chinner wrote:
>>> From: Dave Chinner <dchinner@redhat.com>
>>>
>>> A discard cleanup merged into 4.20-rc2 causes fstests xfs/259 to
>>> fall into an endless loop in the discard code. The test is creating
>>> a device that is exactly 2^32 sectors in size to test mkfs boundary
>>> conditions around the 32 bit sector overflow region.
>>>
>>> mkfs issues a discard for the entire device size by default, and
>>> hence this throws a sector count of 2^32 into
>>> blkdev_issue_discard(). It takes the number of sectors to discard as
>>> a sector_t - a 64 bit value.
>>>
>>> The commit ba5d73851e71 ("block: cleanup __blkdev_issue_discard")
>>> takes this sector count and casts it to a 32 bit value before
>>> comparing it against the maximum allowed discard size the device
>>> has. This truncates away the upper 32 bits, and so if the lower 32
>>> bits of the sector count are zero, it starts issuing discards of
>>> length 0. This causes the code to fall into an endless loop, issuing
>>> zero-length discards over and over again on the same sector.
>>
>> Applied, thanks. Ming, can you please add a blktests test for
>> this case? This is the 2nd time it's been broken.
> 
> OK, I will add zram discard test in blktests, which should cover the
> 1st report. For the xfs/259, I need to investigate if it is easy to
> do in blktests.

null_blk has discard support, might be an easier target in terms
of blktests.
Ming Lei Nov. 15, 2018, 3:10 a.m. UTC | #9
On Thu, Nov 15, 2018 at 12:22:01PM +1100, Dave Chinner wrote:
> On Thu, Nov 15, 2018 at 09:06:52AM +0800, Ming Lei wrote:
> > On Wed, Nov 14, 2018 at 08:18:24AM -0700, Jens Axboe wrote:
> > > On 11/13/18 2:43 PM, Dave Chinner wrote:
> > > > From: Dave Chinner <dchinner@redhat.com>
> > > > 
> > > > A discard cleanup merged into 4.20-rc2 causes fstests xfs/259 to
> > > > fall into an endless loop in the discard code. The test is creating
> > > > a device that is exactly 2^32 sectors in size to test mkfs boundary
> > > > conditions around the 32 bit sector overflow region.
> > > > 
> > > > mkfs issues a discard for the entire device size by default, and
> > > > hence this throws a sector count of 2^32 into
> > > > blkdev_issue_discard(). It takes the number of sectors to discard as
> > > > a sector_t - a 64 bit value.
> > > > 
> > > > The commit ba5d73851e71 ("block: cleanup __blkdev_issue_discard")
> > > > takes this sector count and casts it to a 32 bit value before
> > > > comparing it against the maximum allowed discard size the device
> > > > has. This truncates away the upper 32 bits, and so if the lower 32
> > > > bits of the sector count are zero, it starts issuing discards of
> > > > length 0. This causes the code to fall into an endless loop, issuing
> > > > zero-length discards over and over again on the same sector.
> > > 
> > > Applied, thanks. Ming, can you please add a blktests test for
> > > this case? This is the 2nd time it's been broken.
> > 
> > OK, I will add zram discard test in blktests, which should cover the
> > 1st report. For the xfs/259, I need to investigate if it is easy to
> > do in blktests.
> 
> Just write a test that creates block devices of 2^32 + (-1,0,1)
> sectors and runs a discard across the entire device. That's all that
> > xfs/259 is doing - exercising mkfs on 2TB, 4TB and 16TB boundaries.
> i.e. the boundaries where sectors and page cache indexes (on 4k page
> size systems) overflow 32 bit int and unsigned int sizes. mkfs
> issues a discard for the entire device, so it's testing that as
> well...

Indeed, I can reproduce this issue via the following commands:

modprobe scsi_debug virtual_gb=2049 sector_size=512 lbpws10=1 dev_size_mb=512
blkdiscard /dev/sde

> 
> You need to write tests that exercise write_same, write_zeros and
> discard operations around these boundaries, because they all take
> a 64 bit sector count and stuff them into 32 bit size fields in
> > the bio that is being submitted.

write_same/write_zeros are usually used by the driver directly, so we
may need to make the test case on some specific device.

Thanks,
Ming
Dave Chinner Nov. 15, 2018, 10:13 p.m. UTC | #10
On Thu, Nov 15, 2018 at 11:10:36AM +0800, Ming Lei wrote:
> On Thu, Nov 15, 2018 at 12:22:01PM +1100, Dave Chinner wrote:
> > On Thu, Nov 15, 2018 at 09:06:52AM +0800, Ming Lei wrote:
> > > On Wed, Nov 14, 2018 at 08:18:24AM -0700, Jens Axboe wrote:
> > > > On 11/13/18 2:43 PM, Dave Chinner wrote:
> > > > > From: Dave Chinner <dchinner@redhat.com>
> > > > > 
> > > > > A discard cleanup merged into 4.20-rc2 causes fstests xfs/259 to
> > > > > fall into an endless loop in the discard code. The test is creating
> > > > > a device that is exactly 2^32 sectors in size to test mkfs boundary
> > > > > conditions around the 32 bit sector overflow region.
> > > > > 
> > > > > mkfs issues a discard for the entire device size by default, and
> > > > > hence this throws a sector count of 2^32 into
> > > > > blkdev_issue_discard(). It takes the number of sectors to discard as
> > > > > a sector_t - a 64 bit value.
> > > > > 
> > > > > The commit ba5d73851e71 ("block: cleanup __blkdev_issue_discard")
> > > > > takes this sector count and casts it to a 32 bit value before
> > > > > comparing it against the maximum allowed discard size the device
> > > > > has. This truncates away the upper 32 bits, and so if the lower 32
> > > > > bits of the sector count are zero, it starts issuing discards of
> > > > > length 0. This causes the code to fall into an endless loop, issuing
> > > > > zero-length discards over and over again on the same sector.
> > > > 
> > > > Applied, thanks. Ming, can you please add a blktests test for
> > > > this case? This is the 2nd time it's been broken.
> > > 
> > > OK, I will add zram discard test in blktests, which should cover the
> > > 1st report. For the xfs/259, I need to investigate if it is easy to
> > > do in blktests.
> > 
> > Just write a test that creates block devices of 2^32 + (-1,0,1)
> > sectors and runs a discard across the entire device. That's all that
> > > xfs/259 is doing - exercising mkfs on 2TB, 4TB and 16TB boundaries.
> > i.e. the boundaries where sectors and page cache indexes (on 4k page
> > size systems) overflow 32 bit int and unsigned int sizes. mkfs
> > issues a discard for the entire device, so it's testing that as
> > well...
> 
> Indeed, I can reproduce this issue via the following commands:
> 
> modprobe scsi_debug virtual_gb=2049 sector_size=512 lbpws10=1 dev_size_mb=512
> blkdiscard /dev/sde
> 
> > 
> > You need to write tests that exercise write_same, write_zeros and
> > discard operations around these boundaries, because they all take
> > a 64 bit sector count and stuff them into 32 bit size fields in
> > > the bio that is being submitted.
> 
> write_same/write_zeros are usually used by the driver directly, so we
> may need to make the test case on some specific device.

My local linux iscsi server and client advertise support for them.
It definitely does not ship zeros across the wire(*) when I use
things like FALLOC_FL_ZERO_RANGE, but fstests does not have block
device fallocate() tests for zeroing or punching...

Cheers,

Dave.

(*) but the back end storage is a sparse file on an XFS filesystem,
and the iscsi server fails to translate write_zeroes or
WRITE_SAME(0) to FALLOC_FL_ZERO_RANGE on the storage side and hence
is really slow because it physically writes zeros to the XFS file.
i.e. the client offloads the operation to the server to minimise
wire traffic, but then the server doesn't offload the operation to
the storage....

Cheers,

Dave.
Darrick J. Wong Nov. 15, 2018, 10:24 p.m. UTC | #11
On Fri, Nov 16, 2018 at 09:13:37AM +1100, Dave Chinner wrote:
> On Thu, Nov 15, 2018 at 11:10:36AM +0800, Ming Lei wrote:
> > On Thu, Nov 15, 2018 at 12:22:01PM +1100, Dave Chinner wrote:
> > > On Thu, Nov 15, 2018 at 09:06:52AM +0800, Ming Lei wrote:
> > > > On Wed, Nov 14, 2018 at 08:18:24AM -0700, Jens Axboe wrote:
> > > > > On 11/13/18 2:43 PM, Dave Chinner wrote:
> > > > > > From: Dave Chinner <dchinner@redhat.com>
> > > > > > 
> > > > > > A discard cleanup merged into 4.20-rc2 causes fstests xfs/259 to
> > > > > > fall into an endless loop in the discard code. The test is creating
> > > > > > a device that is exactly 2^32 sectors in size to test mkfs boundary
> > > > > > conditions around the 32 bit sector overflow region.
> > > > > > 
> > > > > > mkfs issues a discard for the entire device size by default, and
> > > > > > hence this throws a sector count of 2^32 into
> > > > > > blkdev_issue_discard(). It takes the number of sectors to discard as
> > > > > > a sector_t - a 64 bit value.
> > > > > > 
> > > > > > The commit ba5d73851e71 ("block: cleanup __blkdev_issue_discard")
> > > > > > takes this sector count and casts it to a 32 bit value before
> > > > > > comparing it against the maximum allowed discard size the device
> > > > > > has. This truncates away the upper 32 bits, and so if the lower 32
> > > > > > bits of the sector count are zero, it starts issuing discards of
> > > > > > length 0. This causes the code to fall into an endless loop, issuing
> > > > > > zero-length discards over and over again on the same sector.
> > > > > 
> > > > > Applied, thanks. Ming, can you please add a blktests test for
> > > > > this case? This is the 2nd time it's been broken.
> > > > 
> > > > OK, I will add zram discard test in blktests, which should cover the
> > > > 1st report. For the xfs/259, I need to investigate if it is easy to
> > > > do in blktests.
> > > 
> > > Just write a test that creates block devices of 2^32 + (-1,0,1)
> > > sectors and runs a discard across the entire device. That's all that
> > > > xfs/259 is doing - exercising mkfs on 2TB, 4TB and 16TB boundaries.
> > > i.e. the boundaries where sectors and page cache indexes (on 4k page
> > > size systems) overflow 32 bit int and unsigned int sizes. mkfs
> > > issues a discard for the entire device, so it's testing that as
> > > well...
> > 
> > Indeed, I can reproduce this issue via the following commands:
> > 
> > modprobe scsi_debug virtual_gb=2049 sector_size=512 lbpws10=1 dev_size_mb=512
> > blkdiscard /dev/sde
> > 
> > > 
> > > You need to write tests that exercise write_same, write_zeros and
> > > discard operations around these boundaries, because they all take
> > > a 64 bit sector count and stuff them into 32 bit size fields in
> > > > the bio that is being submitted.
> > 
> > write_same/write_zeros are usually used by the driver directly, so we
> > may need to make the test case on some specific device.
> 
> My local linux iscsi server and client advertise support for them.
> It definitely does not ship zeros across the wire(*) when I use
> things like FALLOC_FL_ZERO_RANGE, but fstests does not have block
> device fallocate() tests for zeroing or punching...

fstests does (generic/{349,350,351}) but those basic functionality tests
don't include creating a 2^32 block device and seeing if overflows
happen... :/

...I also see that Eryu succeeded in kicking those tests out of the
quick group, so they probably don't run that often either.

--D

> 
> Cheers,
> 
> Dave.
> 
> (*) but the back end storage is a sparse file on an XFS filesystem,
> and the iscsi server fails to translate write_zeroes or
> WRITE_SAME(0) to FALLOC_FL_ZERO_RANGE on the storage side and hence
> is really slow because it physically writes zeros to the XFS file.
> i.e. the client offloads the operation to the server to minimise
> wire traffic, but then the server doesn't offload the operation to
> the storage....
> 
> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com
Dave Chinner Nov. 16, 2018, 4:04 a.m. UTC | #12
On Thu, Nov 15, 2018 at 02:24:19PM -0800, Darrick J. Wong wrote:
> On Fri, Nov 16, 2018 at 09:13:37AM +1100, Dave Chinner wrote:
> > On Thu, Nov 15, 2018 at 11:10:36AM +0800, Ming Lei wrote:
> > > On Thu, Nov 15, 2018 at 12:22:01PM +1100, Dave Chinner wrote:
> > > > On Thu, Nov 15, 2018 at 09:06:52AM +0800, Ming Lei wrote:
> > > > > On Wed, Nov 14, 2018 at 08:18:24AM -0700, Jens Axboe wrote:
> > > > > > On 11/13/18 2:43 PM, Dave Chinner wrote:
> > > > > > > From: Dave Chinner <dchinner@redhat.com>
> > > > > > > 
> > > > > > > A discard cleanup merged into 4.20-rc2 causes fstests xfs/259 to
> > > > > > > fall into an endless loop in the discard code. The test is creating
> > > > > > > a device that is exactly 2^32 sectors in size to test mkfs boundary
> > > > > > > conditions around the 32 bit sector overflow region.
> > > > > > > 
> > > > > > > mkfs issues a discard for the entire device size by default, and
> > > > > > > hence this throws a sector count of 2^32 into
> > > > > > > blkdev_issue_discard(). It takes the number of sectors to discard as
> > > > > > > a sector_t - a 64 bit value.
> > > > > > > 
> > > > > > > The commit ba5d73851e71 ("block: cleanup __blkdev_issue_discard")
> > > > > > > takes this sector count and casts it to a 32 bit value before
> > > > > > > comparing it against the maximum allowed discard size the device
> > > > > > > has. This truncates away the upper 32 bits, and so if the lower 32
> > > > > > > bits of the sector count are zero, it starts issuing discards of
> > > > > > > length 0. This causes the code to fall into an endless loop, issuing
> > > > > > > zero-length discards over and over again on the same sector.
> > > > > > 
> > > > > > Applied, thanks. Ming, can you please add a blktests test for
> > > > > > this case? This is the 2nd time it's been broken.
> > > > > 
> > > > > OK, I will add zram discard test in blktests, which should cover the
> > > > > 1st report. For the xfs/259, I need to investigate if it is easy to
> > > > > do in blktests.
> > > > 
> > > > Just write a test that creates block devices of 2^32 + (-1,0,1)
> > > > sectors and runs a discard across the entire device. That's all that
> > > > > xfs/259 is doing - exercising mkfs on 2TB, 4TB and 16TB boundaries.
> > > > i.e. the boundaries where sectors and page cache indexes (on 4k page
> > > > size systems) overflow 32 bit int and unsigned int sizes. mkfs
> > > > issues a discard for the entire device, so it's testing that as
> > > > well...
> > > 
> > > Indeed, I can reproduce this issue via the following commands:
> > > 
> > > modprobe scsi_debug virtual_gb=2049 sector_size=512 lbpws10=1 dev_size_mb=512
> > > blkdiscard /dev/sde
> > > 
> > > > 
> > > > You need to write tests that exercise write_same, write_zeros and
> > > > discard operations around these boundaries, because they all take
> > > > a 64 bit sector count and stuff it into 32 bit size fields in
> > > > the bio that is being submitted.
> > > 
> > > write_same/write_zeros are usually used by drivers directly, so we
> > > may need to make the test case depend on some specific device.
> > 
> > My local linux iscsi server and client advertise support for them.
> > It definitely does not ship zeros across the wire(*) when I use
> > things like FALLOC_FL_ZERO_RANGE, but fstests does not have block
> > device fallocate() tests for zeroing or punching...
> 
> fstests does (generic/{349,350,351}) but those basic functionality tests
> don't include creating a 2^32 block device and seeing if overflows
> happen... :/

They don't run on my test machines because they require a modular
kernel and I run a monolithic kernel specified externally by the
qemu command line on all my test VMs.

generic/349     [not run] scsi_debug module not found
generic/350     [not run] scsi_debug module not found
generic/351     [not run] scsi_debug module not found

Cheers,

Dave.
Christoph Hellwig Nov. 16, 2018, 8:32 a.m. UTC | #13
On Fri, Nov 16, 2018 at 03:04:57PM +1100, Dave Chinner wrote:
> They don't run on my test machines because they require a modular
> kernel and I run a monolithic kernel specified externally by the
> qemu command line on all my test VMs.
> 
> generic/349     [not run] scsi_debug module not found
> generic/350     [not run] scsi_debug module not found
> generic/351     [not run] scsi_debug module not found

Same here, btw.  Any test that requires modules is a rather bad idea.
Omar Sandoval Nov. 16, 2018, 8:46 a.m. UTC | #14
On Fri, Nov 16, 2018 at 12:32:33AM -0800, Christoph Hellwig wrote:
> On Fri, Nov 16, 2018 at 03:04:57PM +1100, Dave Chinner wrote:
> > They don't run on my test machines because they require a modular
> > kernel and I run a monolithic kernel specified externally by the
> > qemu command line on all my test VMs.
> > 
> > generic/349     [not run] scsi_debug module not found
> > generic/350     [not run] scsi_debug module not found
> > generic/351     [not run] scsi_debug module not found
> 
> Same here, btw.  Any test that requires modules is a rather bad idea.

I'll plug my vm.py script that supports running a kernel build with
modules without installing them into the VM (by mounting the modules
over 9p): https://github.com/osandov/osandov-linux#vm-setup
Christoph Hellwig Nov. 16, 2018, 8:53 a.m. UTC | #15
On Fri, Nov 16, 2018 at 12:46:58AM -0800, Omar Sandoval wrote:
> > > generic/349     [not run] scsi_debug module not found
> > > generic/350     [not run] scsi_debug module not found
> > > generic/351     [not run] scsi_debug module not found
> > 
> > Same here, btw.  Any test that requires modules is a rather bad idea.
> 
> I'll plug my vm.py script that supports running a kernel build with
> modules without installing them into the VM (by mounting the modules
> over 9p): https://github.com/osandov/osandov-linux#vm-setup

All nice and good, but avoiding complexity is even better :)

Especially as there should be no reason to only allow configuring
something at module load time in this century.
Ming Lei Nov. 16, 2018, 12:06 p.m. UTC | #16
On Fri, Nov 16, 2018 at 03:04:57PM +1100, Dave Chinner wrote:
> On Thu, Nov 15, 2018 at 02:24:19PM -0800, Darrick J. Wong wrote:
> > On Fri, Nov 16, 2018 at 09:13:37AM +1100, Dave Chinner wrote:
> > > On Thu, Nov 15, 2018 at 11:10:36AM +0800, Ming Lei wrote:
> > > > On Thu, Nov 15, 2018 at 12:22:01PM +1100, Dave Chinner wrote:
> > > > > On Thu, Nov 15, 2018 at 09:06:52AM +0800, Ming Lei wrote:
> > > > > > On Wed, Nov 14, 2018 at 08:18:24AM -0700, Jens Axboe wrote:
> > > > > > > On 11/13/18 2:43 PM, Dave Chinner wrote:
> > > > > > > > From: Dave Chinner <dchinner@redhat.com>
> > > > > > > > 
> > > > > > > > A discard cleanup merged into 4.20-rc2 causes fstests xfs/259 to
> > > > > > > > fall into an endless loop in the discard code. The test is creating
> > > > > > > > a device that is exactly 2^32 sectors in size to test mkfs boundary
> > > > > > > > conditions around the 32 bit sector overflow region.
> > > > > > > > 
> > > > > > > > mkfs issues a discard for the entire device size by default, and
> > > > > > > > hence this throws a sector count of 2^32 into
> > > > > > > > blkdev_issue_discard(). It takes the number of sectors to discard as
> > > > > > > > a sector_t - a 64 bit value.
> > > > > > > > 
> > > > > > > > The commit ba5d73851e71 ("block: cleanup __blkdev_issue_discard")
> > > > > > > > takes this sector count and casts it to a 32 bit value before
> > > > > > > > comparing it against the maximum allowed discard size the device
> > > > > > > > has. This truncates away the upper 32 bits, and so if the lower 32
> > > > > > > > bits of the sector count are zero, it starts issuing discards of
> > > > > > > > length 0. This causes the code to fall into an endless loop, issuing
> > > > > > > > zero length discards over and over again on the same sector.
> > > > > > > 
> > > > > > > Applied, thanks. Ming, can you please add a blktests test for
> > > > > > > this case? This is the 2nd time it's been broken.
> > > > > > 
> > > > > > OK, I will add a zram discard test to blktests, which should cover
> > > > > > the 1st report. For xfs/259, I need to investigate whether it is
> > > > > > easy to do in blktests.
> > > > > 
> > > > > Just write a test that creates block devices of 2^32 + (-1,0,1)
> > > > > sectors and runs a discard across the entire device. That's all that
> > > > > xfs/259 is doing - exercising mkfs on 2TB, 4TB and 16TB boundaries.
> > > > > i.e. the boundaries where sectors and page cache indexes (on 4k page
> > > > > size systems) overflow 32 bit int and unsigned int sizes. mkfs
> > > > > issues a discard for the entire device, so it's testing that as
> > > > > well...
> > > > 
> > > > Indeed, I can reproduce this issue via the following commands:
> > > > 
> > > > modprobe scsi_debug virtual_gb=2049 sector_size=512 lbpws10=1 dev_size_mb=512
> > > > blkdiscard /dev/sde
> > > > 
> > > > > 
> > > > > You need to write tests that exercise write_same, write_zeros and
> > > > > discard operations around these boundaries, because they all take
> > > > > a 64 bit sector count and stuff it into 32 bit size fields in
> > > > > the bio that is being submitted.
> > > > 
> > > > write_same/write_zeros are usually used by drivers directly, so we
> > > > may need to make the test case depend on some specific device.
> > > 
> > > My local linux iscsi server and client advertise support for them.
> > > It definitely does not ship zeros across the wire(*) when I use
> > > things like FALLOC_FL_ZERO_RANGE, but fstests does not have block
> > > device fallocate() tests for zeroing or punching...
> > 
> > fstests does (generic/{349,350,351}) but those basic functionality tests
> > don't include creating a 2^32 block device and seeing if overflows
> > happen... :/
> 
> They don't run on my test machines because they require a modular
> kernel and I run a monolithic kernel specified externally by the
> qemu command line on all my test VMs.
> 
> generic/349     [not run] scsi_debug module not found
> generic/350     [not run] scsi_debug module not found
> generic/351     [not run] scsi_debug module not found

Given that both null_blk and scsi_debug have tons of module parameters,
it is almost inevitable to build them as modules; then you can set
them up any way you want.

Most of my tests are done in a VM too. I start the VM with one official
Fedora kernel, install modules built on the host via 'scp & tar zxf'
inside the VM, then boot the VM with the newly built kernel and run the
tests. The whole script is quite simple, and needs no root privilege on
the host.

Thanks,
Ming
diff mbox series

Patch

diff --git a/block/blk-lib.c b/block/blk-lib.c
index e8b3bb9bf375..144e156ed341 100644
--- a/block/blk-lib.c
+++ b/block/blk-lib.c
@@ -55,9 +55,12 @@  int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
 		return -EINVAL;
 
 	while (nr_sects) {
-		unsigned int req_sects = min_t(unsigned int, nr_sects,
+		sector_t req_sects = min_t(sector_t, nr_sects,
 				bio_allowed_max_sectors(q));
 
+		WARN_ON_ONCE(req_sects == 0);
+		WARN_ON_ONCE((req_sects << 9) > UINT_MAX);
+
 		bio = blk_next_bio(bio, 0, gfp_mask);
 		bio->bi_iter.bi_sector = sector;
 		bio_set_dev(bio, bdev);