
[v12,1/3] bio: control bio max size

Message ID: 20210604050324.28670-2-nanich.lee@samsung.com (mailing list archive)
State: New, archived
Series: bio: control bio max size

Commit Message

Changheun Lee June 4, 2021, 5:03 a.m. UTC
bio size can grow up to 4GB after multi-page bvec has been enabled,
but a large bio can sometimes lead to inefficient behavior.
Controlling the bio max size helps to avoid this inefficiency.

Below is an example of the inefficient behavior.
In case of a large chunk direct I/O - a 32MB chunk read in user space -
all pages for the 32MB are merged into a single bio structure if the
pages' physical addresses are contiguous. This delays the submit until
the merge completes, so the bio max size should be limited to a proper
size.

When a 32MB chunk read with the direct I/O option comes from userspace,
the current kernel behavior in the do_direct_IO() loop is shown in the
timeline below.

 | bio merge for 32MB. total 8,192 pages are merged.
 | total elapsed time is over 2ms.
 |------------------ ... ----------------------->|
                                                 | 8,192 pages merged a bio.
                                                 | at this time, first bio submit is done.
                                                 | 1 bio is split to 32 read request and issue.
                                                 |--------------->
                                                  |--------------->
                                                   |--------------->
                                                              ......
                                                                   |--------------->
                                                                    |--------------->|
                          total 19ms elapsed to complete 32MB read done from device. |

If bio max size is limited with 1MB, behavior is changed below.

 | bio merge for 1MB. 256 pages are merged for each bio.
 | total 32 bios will be made.
 | total elapsed time is over 2ms; it's the same.
 | but the first bio submit timing is fast: about 100us.
 |--->|--->|--->|---> ... -->|--->|--->|--->|--->|
      | 256 pages merged a bio.
      | at this time, first bio submit is done.
      | and 1 read request is issued for 1 bio.
      |--------------->
           |--------------->
                |--------------->
                                      ......
                                                 |--------------->
                                                  |--------------->|
        total 17ms elapsed to complete 32MB read done from device. |

As a result, the read request issue timing is faster if the bio max size
is limited. Under the current kernel behavior with multi-page bvec, a
super large bio can be created, and it delays the first I/O request
issue.
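
For reference, a driver opts in with a single call during queue setup.
A minimal sketch - the queue pointer and the 1MB value here are only
illustrative, not part of this patch:

	/* Hypothetical call site in a driver's probe/setup path. */
	blk_queue_max_bio_bytes(q, 1024 * 1024);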

Signed-off-by: Changheun Lee <nanich.lee@samsung.com>
---
 block/bio.c            | 17 ++++++++++++++---
 block/blk-settings.c   | 19 +++++++++++++++++++
 include/linux/bio.h    |  4 +++-
 include/linux/blkdev.h |  3 +++
 4 files changed, 39 insertions(+), 4 deletions(-)

Comments

Damien Le Moal June 4, 2021, 7:24 a.m. UTC | #1
On 2021/06/04 14:22, Changheun Lee wrote:
> bio size can grow up to 4GB after multi-page bvec has been enabled,
> but a large bio can sometimes lead to inefficient behavior.
> Controlling the bio max size helps to avoid this inefficiency.
> 
> Below is an example of the inefficient behavior.
> In case of a large chunk direct I/O - a 32MB chunk read in user space -
> all pages for the 32MB are merged into a single bio structure if the
> pages' physical addresses are contiguous. This delays the submit until
> the merge completes, so the bio max size should be limited to a proper
> size.
> 
> When a 32MB chunk read with the direct I/O option comes from userspace,
> the current kernel behavior in the do_direct_IO() loop is shown in the
> timeline below.
> 
>  | bio merge for 32MB. total 8,192 pages are merged.
>  | total elapsed time is over 2ms.
>  |------------------ ... ----------------------->|
>                                                  | 8,192 pages merged a bio.
>                                                  | at this time, first bio submit is done.
>                                                  | 1 bio is split to 32 read request and issue.
>                                                  |--------------->
>                                                   |--------------->
>                                                    |--------------->
>                                                               ......
>                                                                    |--------------->
>                                                                     |--------------->|
>                           total 19ms elapsed to complete 32MB read done from device. |
> 
> If bio max size is limited with 1MB, behavior is changed below.
> 
>  | bio merge for 1MB. 256 pages are merged for each bio.
>  | total 32 bios will be made.
>  | total elapsed time is over 2ms; it's the same.
>  | but the first bio submit timing is fast: about 100us.
>  |--->|--->|--->|---> ... -->|--->|--->|--->|--->|
>       | 256 pages merged a bio.
>       | at this time, first bio submit is done.
>       | and 1 read request is issued for 1 bio.
>       |--------------->
>            |--------------->
>                 |--------------->
>                                       ......
>                                                  |--------------->
>                                                   |--------------->|
>         total 17ms elapsed to complete 32MB read done from device. |
> 
> As a result, the read request issue timing is faster if the bio max size
> is limited. Under the current kernel behavior with multi-page bvec, a
> super large bio can be created, and it delays the first I/O request
> issue.
> 
> Signed-off-by: Changheun Lee <nanich.lee@samsung.com>
> ---
>  block/bio.c            | 17 ++++++++++++++---
>  block/blk-settings.c   | 19 +++++++++++++++++++
>  include/linux/bio.h    |  4 +++-
>  include/linux/blkdev.h |  3 +++
>  4 files changed, 39 insertions(+), 4 deletions(-)
> 
> diff --git a/block/bio.c b/block/bio.c
> index 44205dfb6b60..73b673f1684e 100644
> --- a/block/bio.c
> +++ b/block/bio.c
> @@ -255,6 +255,13 @@ void bio_init(struct bio *bio, struct bio_vec *table,
>  }
>  EXPORT_SYMBOL(bio_init);
>  
> +unsigned int bio_max_bytes(struct bio *bio)
> +{
> +	struct block_device *bdev = bio->bi_bdev;
> +
> +	return bdev ? bdev->bd_disk->queue->limits.max_bio_bytes : UINT_MAX;
> +}

unsigned int bio_max_bytes(struct bio *bio)
{
	struct block_device *bdev = bio->bi_bdev;

	if (!bdev)
		return UINT_MAX;
	return bdev->bd_disk->queue->limits.max_bio_bytes;
}

is a lot more readable...
Also, I remember there were some problems with bd_disk possibly being
NULL. Was that fixed?
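
If that is still a concern, a defensive variant - just a sketch,
assuming a NULL check is the desired fix - would be:

unsigned int bio_max_bytes(struct bio *bio)
{
	struct block_device *bdev = bio->bi_bdev;

	/* Fall back to the old cap when no device is attached yet. */
	if (!bdev || !bdev->bd_disk)
		return UINT_MAX;
	return bdev->bd_disk->queue->limits.max_bio_bytes;
}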

> +
>  /**
>   * bio_reset - reinitialize a bio
>   * @bio:	bio to reset
> @@ -866,7 +873,7 @@ bool __bio_try_merge_page(struct bio *bio, struct page *page,
>  		struct bio_vec *bv = &bio->bi_io_vec[bio->bi_vcnt - 1];
>  
>  		if (page_is_mergeable(bv, page, len, off, same_page)) {
> -			if (bio->bi_iter.bi_size > UINT_MAX - len) {
> +			if (bio->bi_iter.bi_size > bio_max_bytes(bio) - len) {
>  				*same_page = false;
>  				return false;
>  			}
> @@ -995,6 +1002,7 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
>  {
>  	unsigned short nr_pages = bio->bi_max_vecs - bio->bi_vcnt;
>  	unsigned short entries_left = bio->bi_max_vecs - bio->bi_vcnt;
> +	unsigned int bytes_left = bio_max_bytes(bio) - bio->bi_iter.bi_size;
>  	struct bio_vec *bv = bio->bi_io_vec + bio->bi_vcnt;
>  	struct page **pages = (struct page **)bv;
>  	bool same_page = false;
> @@ -1010,7 +1018,8 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
>  	BUILD_BUG_ON(PAGE_PTRS_PER_BVEC < 2);
>  	pages += entries_left * (PAGE_PTRS_PER_BVEC - 1);
>  
> -	size = iov_iter_get_pages(iter, pages, LONG_MAX, nr_pages, &offset);
> +	size = iov_iter_get_pages(iter, pages, bytes_left, nr_pages,
> +				  &offset);
>  	if (unlikely(size <= 0))
>  		return size ? size : -EFAULT;
>  
> @@ -1038,6 +1047,7 @@ static int __bio_iov_append_get_pages(struct bio *bio, struct iov_iter *iter)
>  {
>  	unsigned short nr_pages = bio->bi_max_vecs - bio->bi_vcnt;
>  	unsigned short entries_left = bio->bi_max_vecs - bio->bi_vcnt;
> +	unsigned int bytes_left = bio_max_bytes(bio) - bio->bi_iter.bi_size;
>  	struct request_queue *q = bio->bi_bdev->bd_disk->queue;
>  	unsigned int max_append_sectors = queue_max_zone_append_sectors(q);
>  	struct bio_vec *bv = bio->bi_io_vec + bio->bi_vcnt;
> @@ -1058,7 +1068,8 @@ static int __bio_iov_append_get_pages(struct bio *bio, struct iov_iter *iter)
>  	BUILD_BUG_ON(PAGE_PTRS_PER_BVEC < 2);
>  	pages += entries_left * (PAGE_PTRS_PER_BVEC - 1);
>  
> -	size = iov_iter_get_pages(iter, pages, LONG_MAX, nr_pages, &offset);
> +	size = iov_iter_get_pages(iter, pages, bytes_left, nr_pages,
> +				  &offset);
>  	if (unlikely(size <= 0))
>  		return size ? size : -EFAULT;
>  
> diff --git a/block/blk-settings.c b/block/blk-settings.c
> index 902c40d67120..e270e31519a1 100644
> --- a/block/blk-settings.c
> +++ b/block/blk-settings.c
> @@ -32,6 +32,7 @@ EXPORT_SYMBOL_GPL(blk_queue_rq_timeout);
>   */
>  void blk_set_default_limits(struct queue_limits *lim)
>  {
> +	lim->max_bio_bytes = UINT_MAX;
>  	lim->max_segments = BLK_MAX_SEGMENTS;
>  	lim->max_discard_segments = 1;
>  	lim->max_integrity_segments = 0;
> @@ -100,6 +101,24 @@ void blk_queue_bounce_limit(struct request_queue *q, enum blk_bounce bounce)
>  }
>  EXPORT_SYMBOL(blk_queue_bounce_limit);
>  
> +/**
> + * blk_queue_max_bio_bytes - set bio max size for queue

blk_queue_max_bio_bytes - set max_bio_bytes queue limit

And then you can drop the not very useful description.
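
i.e. the trimmed version could be as short as:

/**
 * blk_queue_max_bio_bytes - set the max_bio_bytes queue limit
 * @q: the request queue for the device
 * @bytes: bio max bytes to be set
 */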

> + * @q: the request queue for the device
> + * @bytes : bio max bytes to be set
> + *
> + * Description:
> + *    Set proper bio max size to optimize queue operating.
> + **/
> +void blk_queue_max_bio_bytes(struct request_queue *q, unsigned int bytes)
> +{
> +	struct queue_limits *limits = &q->limits;
> +	unsigned int max_bio_bytes = round_up(bytes, PAGE_SIZE);
> +
> +	limits->max_bio_bytes = max_t(unsigned int, max_bio_bytes,
> +				      BIO_MAX_VECS * PAGE_SIZE);
> +}
> +EXPORT_SYMBOL(blk_queue_max_bio_bytes);

Setting of the stacked limits is still missing.

> +
>  /**
>   * blk_queue_max_hw_sectors - set max sectors for a request for this queue
>   * @q:  the request queue for the device
> diff --git a/include/linux/bio.h b/include/linux/bio.h
> index a0b4cfdf62a4..3959cc1a0652 100644
> --- a/include/linux/bio.h
> +++ b/include/linux/bio.h
> @@ -106,6 +106,8 @@ static inline void *bio_data(struct bio *bio)
>  	return NULL;
>  }
>  
> +extern unsigned int bio_max_bytes(struct bio *bio);
> +
>  /**
>   * bio_full - check if the bio is full
>   * @bio:	bio to check
> @@ -119,7 +121,7 @@ static inline bool bio_full(struct bio *bio, unsigned len)
>  	if (bio->bi_vcnt >= bio->bi_max_vecs)
>  		return true;
>  
> -	if (bio->bi_iter.bi_size > UINT_MAX - len)
> +	if (bio->bi_iter.bi_size > bio_max_bytes(bio) - len)
>  		return true;
>  
>  	return false;
> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
> index 1255823b2bc0..861888501fc0 100644
> --- a/include/linux/blkdev.h
> +++ b/include/linux/blkdev.h
> @@ -326,6 +326,8 @@ enum blk_bounce {
>  };
>  
>  struct queue_limits {
> +	unsigned int		max_bio_bytes;
> +
>  	enum blk_bounce		bounce;
>  	unsigned long		seg_boundary_mask;
>  	unsigned long		virt_boundary_mask;
> @@ -1132,6 +1134,7 @@ extern void blk_abort_request(struct request *);
>   * Access functions for manipulating queue properties
>   */
>  extern void blk_cleanup_queue(struct request_queue *);
> +extern void blk_queue_max_bio_bytes(struct request_queue *, unsigned int);
>  void blk_queue_bounce_limit(struct request_queue *q, enum blk_bounce limit);
>  extern void blk_queue_max_hw_sectors(struct request_queue *, unsigned int);
>  extern void blk_queue_chunk_sectors(struct request_queue *, unsigned int);
>
Changheun Lee June 4, 2021, 7:34 a.m. UTC | #2
> On 2021/06/04 14:22, Changheun Lee wrote:
> > bio size can grow up to 4GB after multi-page bvec has been enabled,
> > but a large bio can sometimes lead to inefficient behavior.
> > Controlling the bio max size helps to avoid this inefficiency.
> > 
> > Below is an example of the inefficient behavior.
> > In case of a large chunk direct I/O - a 32MB chunk read in user space -
> > all pages for the 32MB are merged into a single bio structure if the
> > pages' physical addresses are contiguous. This delays the submit until
> > the merge completes, so the bio max size should be limited to a proper
> > size.
> > 
> > When a 32MB chunk read with the direct I/O option comes from userspace,
> > the current kernel behavior in the do_direct_IO() loop is shown in the
> > timeline below.
> > 
> >  | bio merge for 32MB. total 8,192 pages are merged.
> >  | total elapsed time is over 2ms.
> >  |------------------ ... ----------------------->|
> >                                                  | 8,192 pages merged a bio.
> >                                                  | at this time, first bio submit is done.
> >                                                  | 1 bio is split to 32 read request and issue.
> >                                                  |--------------->
> >                                                   |--------------->
> >                                                    |--------------->
> >                                                               ......
> >                                                                    |--------------->
> >                                                                     |--------------->|
> >                           total 19ms elapsed to complete 32MB read done from device. |
> > 
> > If bio max size is limited with 1MB, behavior is changed below.
> > 
> >  | bio merge for 1MB. 256 pages are merged for each bio.
> >  | total 32 bios will be made.
> >  | total elapsed time is over 2ms; it's the same.
> >  | but the first bio submit timing is fast: about 100us.
> >  |--->|--->|--->|---> ... -->|--->|--->|--->|--->|
> >       | 256 pages merged a bio.
> >       | at this time, first bio submit is done.
> >       | and 1 read request is issued for 1 bio.
> >       |--------------->
> >            |--------------->
> >                 |--------------->
> >                                       ......
> >                                                  |--------------->
> >                                                   |--------------->|
> >         total 17ms elapsed to complete 32MB read done from device. |
> > 
> > As a result, the read request issue timing is faster if the bio max size
> > is limited. Under the current kernel behavior with multi-page bvec, a
> > super large bio can be created, and it delays the first I/O request
> > issue.
> > 
> > Signed-off-by: Changheun Lee <nanich.lee@samsung.com>
> > ---
> >  block/bio.c            | 17 ++++++++++++++---
> >  block/blk-settings.c   | 19 +++++++++++++++++++
> >  include/linux/bio.h    |  4 +++-
> >  include/linux/blkdev.h |  3 +++
> >  4 files changed, 39 insertions(+), 4 deletions(-)
> > 
> > diff --git a/block/bio.c b/block/bio.c
> > index 44205dfb6b60..73b673f1684e 100644
> > --- a/block/bio.c
> > +++ b/block/bio.c
> > @@ -255,6 +255,13 @@ void bio_init(struct bio *bio, struct bio_vec *table,
> >  }
> >  EXPORT_SYMBOL(bio_init);
> >  
> > +unsigned int bio_max_bytes(struct bio *bio)
> > +{
> > +	struct block_device *bdev = bio->bi_bdev;
> > +
> > +	return bdev ? bdev->bd_disk->queue->limits.max_bio_bytes : UINT_MAX;
> > +}
> 
> unsigned int bio_max_bytes(struct bio *bio)
> {
> 	struct block_device *bdev = bio->bi_bdev;
> 
> 	if (!bdev)
> 		return UINT_MAX;
> 	return bdev->bd_disk->queue->limits.max_bio_bytes;
> }
> 
> is a lot more readable...
> Also, I remember there were some problems with bd_disk possibly being
> NULL. Was that fixed?

Thank you for the review, but I'd like to keep the current style; I
think it's readable enough as it is. The NULL bd_disk was just a
suspicion - bd_disk is not NULL if bdev is not NULL.

> 
> > +
> >  /**
> >   * bio_reset - reinitialize a bio
> >   * @bio:	bio to reset
> > @@ -866,7 +873,7 @@ bool __bio_try_merge_page(struct bio *bio, struct page *page,
> >  		struct bio_vec *bv = &bio->bi_io_vec[bio->bi_vcnt - 1];
> >  
> >  		if (page_is_mergeable(bv, page, len, off, same_page)) {
> > -			if (bio->bi_iter.bi_size > UINT_MAX - len) {
> > +			if (bio->bi_iter.bi_size > bio_max_bytes(bio) - len) {
> >  				*same_page = false;
> >  				return false;
> >  			}
> > @@ -995,6 +1002,7 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
> >  {
> >  	unsigned short nr_pages = bio->bi_max_vecs - bio->bi_vcnt;
> >  	unsigned short entries_left = bio->bi_max_vecs - bio->bi_vcnt;
> > +	unsigned int bytes_left = bio_max_bytes(bio) - bio->bi_iter.bi_size;
> >  	struct bio_vec *bv = bio->bi_io_vec + bio->bi_vcnt;
> >  	struct page **pages = (struct page **)bv;
> >  	bool same_page = false;
> > @@ -1010,7 +1018,8 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
> >  	BUILD_BUG_ON(PAGE_PTRS_PER_BVEC < 2);
> >  	pages += entries_left * (PAGE_PTRS_PER_BVEC - 1);
> >  
> > -	size = iov_iter_get_pages(iter, pages, LONG_MAX, nr_pages, &offset);
> > +	size = iov_iter_get_pages(iter, pages, bytes_left, nr_pages,
> > +				  &offset);
> >  	if (unlikely(size <= 0))
> >  		return size ? size : -EFAULT;
> >  
> > @@ -1038,6 +1047,7 @@ static int __bio_iov_append_get_pages(struct bio *bio, struct iov_iter *iter)
> >  {
> >  	unsigned short nr_pages = bio->bi_max_vecs - bio->bi_vcnt;
> >  	unsigned short entries_left = bio->bi_max_vecs - bio->bi_vcnt;
> > +	unsigned int bytes_left = bio_max_bytes(bio) - bio->bi_iter.bi_size;
> >  	struct request_queue *q = bio->bi_bdev->bd_disk->queue;
> >  	unsigned int max_append_sectors = queue_max_zone_append_sectors(q);
> >  	struct bio_vec *bv = bio->bi_io_vec + bio->bi_vcnt;
> > @@ -1058,7 +1068,8 @@ static int __bio_iov_append_get_pages(struct bio *bio, struct iov_iter *iter)
> >  	BUILD_BUG_ON(PAGE_PTRS_PER_BVEC < 2);
> >  	pages += entries_left * (PAGE_PTRS_PER_BVEC - 1);
> >  
> > -	size = iov_iter_get_pages(iter, pages, LONG_MAX, nr_pages, &offset);
> > +	size = iov_iter_get_pages(iter, pages, bytes_left, nr_pages,
> > +				  &offset);
> >  	if (unlikely(size <= 0))
> >  		return size ? size : -EFAULT;
> >  
> > diff --git a/block/blk-settings.c b/block/blk-settings.c
> > index 902c40d67120..e270e31519a1 100644
> > --- a/block/blk-settings.c
> > +++ b/block/blk-settings.c
> > @@ -32,6 +32,7 @@ EXPORT_SYMBOL_GPL(blk_queue_rq_timeout);
> >   */
> >  void blk_set_default_limits(struct queue_limits *lim)
> >  {
> > +	lim->max_bio_bytes = UINT_MAX;
> >  	lim->max_segments = BLK_MAX_SEGMENTS;
> >  	lim->max_discard_segments = 1;
> >  	lim->max_integrity_segments = 0;
> > @@ -100,6 +101,24 @@ void blk_queue_bounce_limit(struct request_queue *q, enum blk_bounce bounce)
> >  }
> >  EXPORT_SYMBOL(blk_queue_bounce_limit);
> >  
> > +/**
> > + * blk_queue_max_bio_bytes - set bio max size for queue
> 
> blk_queue_max_bio_bytes - set max_bio_bytes queue limit
> 
> And then you can drop the not very useful description.

OK, I will.

> 
> > + * @q: the request queue for the device
> > + * @bytes : bio max bytes to be set
> > + *
> > + * Description:
> > + *    Set proper bio max size to optimize queue operating.
> > + **/
> > +void blk_queue_max_bio_bytes(struct request_queue *q, unsigned int bytes)
> > +{
> > +	struct queue_limits *limits = &q->limits;
> > +	unsigned int max_bio_bytes = round_up(bytes, PAGE_SIZE);
> > +
> > +	limits->max_bio_bytes = max_t(unsigned int, max_bio_bytes,
> > +				      BIO_MAX_VECS * PAGE_SIZE);
> > +}
> > +EXPORT_SYMBOL(blk_queue_max_bio_bytes);
> 
> Setting of the stacked limits is still missing.

max_bio_bytes for stacked devices is just the default (UINT_MAX) in this
patch, because blk_set_stacking_limits() calls blk_set_default_limits().
I'll continue working on stacked devices after this patchwork.
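
For reference, the stacking path goes through the same initializer -
roughly, as a sketch trimmed to the relevant part:

void blk_set_stacking_limits(struct queue_limits *lim)
{
	blk_set_default_limits(lim);	/* max_bio_bytes = UINT_MAX */

	/* ... stacking-specific overrides follow here ... */
}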

> 
> > +
> >  /**
> >   * blk_queue_max_hw_sectors - set max sectors for a request for this queue
> >   * @q:  the request queue for the device
> > diff --git a/include/linux/bio.h b/include/linux/bio.h
> > index a0b4cfdf62a4..3959cc1a0652 100644
> > --- a/include/linux/bio.h
> > +++ b/include/linux/bio.h
> > @@ -106,6 +106,8 @@ static inline void *bio_data(struct bio *bio)
> >  	return NULL;
> >  }
> >  
> > +extern unsigned int bio_max_bytes(struct bio *bio);
> > +
> >  /**
> >   * bio_full - check if the bio is full
> >   * @bio:	bio to check
> > @@ -119,7 +121,7 @@ static inline bool bio_full(struct bio *bio, unsigned len)
> >  	if (bio->bi_vcnt >= bio->bi_max_vecs)
> >  		return true;
> >  
> > -	if (bio->bi_iter.bi_size > UINT_MAX - len)
> > +	if (bio->bi_iter.bi_size > bio_max_bytes(bio) - len)
> >  		return true;
> >  
> >  	return false;
> > diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
> > index 1255823b2bc0..861888501fc0 100644
> > --- a/include/linux/blkdev.h
> > +++ b/include/linux/blkdev.h
> > @@ -326,6 +326,8 @@ enum blk_bounce {
> >  };
> >  
> >  struct queue_limits {
> > +	unsigned int		max_bio_bytes;
> > +
> >  	enum blk_bounce		bounce;
> >  	unsigned long		seg_boundary_mask;
> >  	unsigned long		virt_boundary_mask;
> > @@ -1132,6 +1134,7 @@ extern void blk_abort_request(struct request *);
> >   * Access functions for manipulating queue properties
> >   */
> >  extern void blk_cleanup_queue(struct request_queue *);
> > +extern void blk_queue_max_bio_bytes(struct request_queue *, unsigned int);
> >  void blk_queue_bounce_limit(struct request_queue *q, enum blk_bounce limit);
> >  extern void blk_queue_max_hw_sectors(struct request_queue *, unsigned int);
> >  extern void blk_queue_chunk_sectors(struct request_queue *, unsigned int);
> > 
> 
> 
> -- 
> Damien Le Moal
> Western Digital Research

Thank you,
Changheun Lee
Damien Le Moal June 4, 2021, 8:01 a.m. UTC | #3
On 2021/06/04 16:53, Changheun Lee wrote:
>> On 2021/06/04 14:22, Changheun Lee wrote:
>>> bio size can grow up to 4GB after multi-page bvec has been enabled,
>>> but a large bio can sometimes lead to inefficient behavior.
>>> Controlling the bio max size helps to avoid this inefficiency.
>>>
>>> Below is an example of the inefficient behavior.
>>> In case of a large chunk direct I/O - a 32MB chunk read in user space -
>>> all pages for the 32MB are merged into a single bio structure if the
>>> pages' physical addresses are contiguous. This delays the submit until
>>> the merge completes, so the bio max size should be limited to a proper
>>> size.
>>>
>>> When a 32MB chunk read with the direct I/O option comes from userspace,
>>> the current kernel behavior in the do_direct_IO() loop is shown in the
>>> timeline below.
>>>
>>>  | bio merge for 32MB. total 8,192 pages are merged.
>>>  | total elapsed time is over 2ms.
>>>  |------------------ ... ----------------------->|
>>>                                                  | 8,192 pages merged a bio.
>>>                                                  | at this time, first bio submit is done.
>>>                                                  | 1 bio is split to 32 read request and issue.
>>>                                                  |--------------->
>>>                                                   |--------------->
>>>                                                    |--------------->
>>>                                                               ......
>>>                                                                    |--------------->
>>>                                                                     |--------------->|
>>>                           total 19ms elapsed to complete 32MB read done from device. |
>>>
>>> If bio max size is limited with 1MB, behavior is changed below.
>>>
>>>  | bio merge for 1MB. 256 pages are merged for each bio.
>>>  | total 32 bios will be made.
>>>  | total elapsed time is over 2ms; it's the same.
>>>  | but the first bio submit timing is fast: about 100us.
>>>  |--->|--->|--->|---> ... -->|--->|--->|--->|--->|
>>>       | 256 pages merged a bio.
>>>       | at this time, first bio submit is done.
>>>       | and 1 read request is issued for 1 bio.
>>>       |--------------->
>>>            |--------------->
>>>                 |--------------->
>>>                                       ......
>>>                                                  |--------------->
>>>                                                   |--------------->|
>>>         total 17ms elapsed to complete 32MB read done from device. |
>>>
>>> As a result, the read request issue timing is faster if the bio max size
>>> is limited. Under the current kernel behavior with multi-page bvec, a
>>> super large bio can be created, and it delays the first I/O request
>>> issue.
>>>
>>> Signed-off-by: Changheun Lee <nanich.lee@samsung.com>
>>> ---
>>>  block/bio.c            | 17 ++++++++++++++---
>>>  block/blk-settings.c   | 19 +++++++++++++++++++
>>>  include/linux/bio.h    |  4 +++-
>>>  include/linux/blkdev.h |  3 +++
>>>  4 files changed, 39 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/block/bio.c b/block/bio.c
>>> index 44205dfb6b60..73b673f1684e 100644
>>> --- a/block/bio.c
>>> +++ b/block/bio.c
>>> @@ -255,6 +255,13 @@ void bio_init(struct bio *bio, struct bio_vec *table,
>>>  }
>>>  EXPORT_SYMBOL(bio_init);
>>>  
>>> +unsigned int bio_max_bytes(struct bio *bio)
>>> +{
>>> +	struct block_device *bdev = bio->bi_bdev;
>>> +
>>> +	return bdev ? bdev->bd_disk->queue->limits.max_bio_bytes : UINT_MAX;
>>> +}
>>
>> unsigned int bio_max_bytes(struct bio *bio)
>> {
>> 	struct block_device *bdev = bio->bi_bdev;
>>
>> 	if (!bdev)
>> 		return UINT_MAX;
>> 	return bdev->bd_disk->queue->limits.max_bio_bytes;
>> }
>>
>> is a lot more readable...
>> Also, I remember there were some problems with bd_disk possibly being
>> NULL. Was that fixed?
> 
> Thank you for the review, but I'd like to keep the current style; I
> think it's readable enough as it is. The NULL bd_disk was just a
> suspicion - bd_disk is not NULL if bdev is not NULL.
> 
>>
>>> +
>>>  /**
>>>   * bio_reset - reinitialize a bio
>>>   * @bio:	bio to reset
>>> @@ -866,7 +873,7 @@ bool __bio_try_merge_page(struct bio *bio, struct page *page,
>>>  		struct bio_vec *bv = &bio->bi_io_vec[bio->bi_vcnt - 1];
>>>  
>>>  		if (page_is_mergeable(bv, page, len, off, same_page)) {
>>> -			if (bio->bi_iter.bi_size > UINT_MAX - len) {
>>> +			if (bio->bi_iter.bi_size > bio_max_bytes(bio) - len) {
>>>  				*same_page = false;
>>>  				return false;
>>>  			}
>>> @@ -995,6 +1002,7 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
>>>  {
>>>  	unsigned short nr_pages = bio->bi_max_vecs - bio->bi_vcnt;
>>>  	unsigned short entries_left = bio->bi_max_vecs - bio->bi_vcnt;
>>> +	unsigned int bytes_left = bio_max_bytes(bio) - bio->bi_iter.bi_size;
>>>  	struct bio_vec *bv = bio->bi_io_vec + bio->bi_vcnt;
>>>  	struct page **pages = (struct page **)bv;
>>>  	bool same_page = false;
>>> @@ -1010,7 +1018,8 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
>>>  	BUILD_BUG_ON(PAGE_PTRS_PER_BVEC < 2);
>>>  	pages += entries_left * (PAGE_PTRS_PER_BVEC - 1);
>>>  
>>> -	size = iov_iter_get_pages(iter, pages, LONG_MAX, nr_pages, &offset);
>>> +	size = iov_iter_get_pages(iter, pages, bytes_left, nr_pages,
>>> +				  &offset);
>>>  	if (unlikely(size <= 0))
>>>  		return size ? size : -EFAULT;
>>>  
>>> @@ -1038,6 +1047,7 @@ static int __bio_iov_append_get_pages(struct bio *bio, struct iov_iter *iter)
>>>  {
>>>  	unsigned short nr_pages = bio->bi_max_vecs - bio->bi_vcnt;
>>>  	unsigned short entries_left = bio->bi_max_vecs - bio->bi_vcnt;
>>> +	unsigned int bytes_left = bio_max_bytes(bio) - bio->bi_iter.bi_size;
>>>  	struct request_queue *q = bio->bi_bdev->bd_disk->queue;
>>>  	unsigned int max_append_sectors = queue_max_zone_append_sectors(q);
>>>  	struct bio_vec *bv = bio->bi_io_vec + bio->bi_vcnt;
>>> @@ -1058,7 +1068,8 @@ static int __bio_iov_append_get_pages(struct bio *bio, struct iov_iter *iter)
>>>  	BUILD_BUG_ON(PAGE_PTRS_PER_BVEC < 2);
>>>  	pages += entries_left * (PAGE_PTRS_PER_BVEC - 1);
>>>  
>>> -	size = iov_iter_get_pages(iter, pages, LONG_MAX, nr_pages, &offset);
>>> +	size = iov_iter_get_pages(iter, pages, bytes_left, nr_pages,
>>> +				  &offset);
>>>  	if (unlikely(size <= 0))
>>>  		return size ? size : -EFAULT;
>>>  
>>> diff --git a/block/blk-settings.c b/block/blk-settings.c
>>> index 902c40d67120..e270e31519a1 100644
>>> --- a/block/blk-settings.c
>>> +++ b/block/blk-settings.c
>>> @@ -32,6 +32,7 @@ EXPORT_SYMBOL_GPL(blk_queue_rq_timeout);
>>>   */
>>>  void blk_set_default_limits(struct queue_limits *lim)
>>>  {
>>> +	lim->max_bio_bytes = UINT_MAX;
>>>  	lim->max_segments = BLK_MAX_SEGMENTS;
>>>  	lim->max_discard_segments = 1;
>>>  	lim->max_integrity_segments = 0;
>>> @@ -100,6 +101,24 @@ void blk_queue_bounce_limit(struct request_queue *q, enum blk_bounce bounce)
>>>  }
>>>  EXPORT_SYMBOL(blk_queue_bounce_limit);
>>>  
>>> +/**
>>> + * blk_queue_max_bio_bytes - set bio max size for queue
>>
>> blk_queue_max_bio_bytes - set max_bio_bytes queue limit
>>
>> And then you can drop the not very useful description.
> 
> OK, I will.
> 
>>
>>> + * @q: the request queue for the device
>>> + * @bytes : bio max bytes to be set
>>> + *
>>> + * Description:
>>> + *    Set proper bio max size to optimize queue operating.
>>> + **/
>>> +void blk_queue_max_bio_bytes(struct request_queue *q, unsigned int bytes)
>>> +{
>>> +	struct queue_limits *limits = &q->limits;
>>> +	unsigned int max_bio_bytes = round_up(bytes, PAGE_SIZE);
>>> +
>>> +	limits->max_bio_bytes = max_t(unsigned int, max_bio_bytes,
>>> +				      BIO_MAX_VECS * PAGE_SIZE);
>>> +}
>>> +EXPORT_SYMBOL(blk_queue_max_bio_bytes);
>>
>> Setting of the stacked limits is still missing.
> 
> max_bio_bytes for stacked devices is just the default (UINT_MAX) in this
> patch, because blk_set_stacking_limits() calls blk_set_default_limits().
> I'll continue working on stacked devices after this patchwork.

Why? Without that added now, anybody using this performance fix will see
no benefit if a device mapper is used. The stacking limit should be super
simple. In blk_stack_limits(), just add:

	t->max_bio_bytes = min(t->max_bio_bytes, b->max_bio_bytes);
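
As a worked example of the effect - assuming a dm device (t) stacked on
top of a member device (b) whose driver set a 1MB cap:

	/*
	 * t->max_bio_bytes = UINT_MAX (stacking default)
	 * b->max_bio_bytes = 1MB     (set by the bottom driver)
	 * => after stacking, t->max_bio_bytes = 1MB
	 */
	t->max_bio_bytes = min(t->max_bio_bytes, b->max_bio_bytes);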

> 
>>
>>> +
>>>  /**
>>>   * blk_queue_max_hw_sectors - set max sectors for a request for this queue
>>>   * @q:  the request queue for the device
>>> diff --git a/include/linux/bio.h b/include/linux/bio.h
>>> index a0b4cfdf62a4..3959cc1a0652 100644
>>> --- a/include/linux/bio.h
>>> +++ b/include/linux/bio.h
>>> @@ -106,6 +106,8 @@ static inline void *bio_data(struct bio *bio)
>>>  	return NULL;
>>>  }
>>>  
>>> +extern unsigned int bio_max_bytes(struct bio *bio);
>>> +
>>>  /**
>>>   * bio_full - check if the bio is full
>>>   * @bio:	bio to check
>>> @@ -119,7 +121,7 @@ static inline bool bio_full(struct bio *bio, unsigned len)
>>>  	if (bio->bi_vcnt >= bio->bi_max_vecs)
>>>  		return true;
>>>  
>>> -	if (bio->bi_iter.bi_size > UINT_MAX - len)
>>> +	if (bio->bi_iter.bi_size > bio_max_bytes(bio) - len)
>>>  		return true;
>>>  
>>>  	return false;
>>> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
>>> index 1255823b2bc0..861888501fc0 100644
>>> --- a/include/linux/blkdev.h
>>> +++ b/include/linux/blkdev.h
>>> @@ -326,6 +326,8 @@ enum blk_bounce {
>>>  };
>>>  
>>>  struct queue_limits {
>>> +	unsigned int		max_bio_bytes;
>>> +
>>>  	enum blk_bounce		bounce;
>>>  	unsigned long		seg_boundary_mask;
>>>  	unsigned long		virt_boundary_mask;
>>> @@ -1132,6 +1134,7 @@ extern void blk_abort_request(struct request *);
>>>   * Access functions for manipulating queue properties
>>>   */
>>>  extern void blk_cleanup_queue(struct request_queue *);
>>> +extern void blk_queue_max_bio_bytes(struct request_queue *, unsigned int);
>>>  void blk_queue_bounce_limit(struct request_queue *q, enum blk_bounce limit);
>>>  extern void blk_queue_max_hw_sectors(struct request_queue *, unsigned int);
>>>  extern void blk_queue_chunk_sectors(struct request_queue *, unsigned int);
>>>
>>
>>
>> -- 
>> Damien Le Moal
>> Western Digital Research
> 
> Thank you,
> Changheun Lee
>
Changheun Lee June 4, 2021, 9:23 a.m. UTC | #4
> On 2021/06/04 16:53, Changheun Lee wrote:
> >> On 2021/06/04 14:22, Changheun Lee wrote:
> >>> + * @q: the request queue for the device
> >>> + * @bytes : bio max bytes to be set
> >>> + *
> >>> + * Description:
> >>> + *    Set proper bio max size to optimize queue operating.
> >>> + **/
> >>> +void blk_queue_max_bio_bytes(struct request_queue *q, unsigned int bytes)
> >>> +{
> >>> +	struct queue_limits *limits = &q->limits;
> >>> +	unsigned int max_bio_bytes = round_up(bytes, PAGE_SIZE);
> >>> +
> >>> +	limits->max_bio_bytes = max_t(unsigned int, max_bio_bytes,
> >>> +				      BIO_MAX_VECS * PAGE_SIZE);
> >>> +}
> >>> +EXPORT_SYMBOL(blk_queue_max_bio_bytes);
> >>
> >> Setting of the stacked limits is still missing.
> > 
> > max_bio_bytes for stacked devices is just the default (UINT_MAX) in this
> > patch, because blk_set_stacking_limits() calls blk_set_default_limits().
> > I'll continue working on stacked devices after this patchwork.
> 
> Why? Without that added now, anybody using this performance fix will see
> no benefit if a device mapper is used. The stacking limit should be super
> simple. In blk_stack_limits(), just add:
> 
> t->max_bio_bytes = min(t->max_bio_bytes, b->max_bio_bytes);
> 
> 
> -- 
> Damien Le Moal
> Western Digital Research

I tried it as in your comment at first, but I got a lot of feedback that
applying it to all devices is not a good idea. So I'll try to control
the bio size in each stacked driver, like setting max_bio_bytes in the
LLD as you recommended. I'm trying to find some target stacked devices
to control the bio size, such as dm-crypt, dm-linear, etc. And I'll try
to find a method to control the bio max size, including using
blk_queue_max_bio_bytes().
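
A per-target hook could then look something like the sketch below - a
rough illustration only, where the callback shape follows the dm
->io_hints convention and the target name and the 1MB value are made up:

static void example_io_hints(struct dm_target *ti,
			     struct queue_limits *limits)
{
	/* Cap bios built for this (hypothetical) target at 1MB. */
	limits->max_bio_bytes = min_t(unsigned int,
				      limits->max_bio_bytes, SZ_1M);
}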

Thank you,
Changheun Lee
Bart Van Assche June 4, 2021, 2:52 p.m. UTC | #5
On 6/4/21 12:34 AM, Changheun Lee wrote:
>> On 2021/06/04 14:22, Changheun Lee wrote:
>>> +unsigned int bio_max_bytes(struct bio *bio)
>>> +{
>>> +	struct block_device *bdev = bio->bi_bdev;
>>> +
>>> +	return bdev ? bdev->bd_disk->queue->limits.max_bio_bytes : UINT_MAX;
>>> +}
>>
>> unsigned int bio_max_bytes(struct bio *bio)
>> {
>> 	struct block_device *bdev = bio->bi_bdev;
>>
>> 	if (!bdev)
>> 		return UINT_MAX;
>> 	return bdev->bd_disk->queue->limits.max_bio_bytes;
>> }
>>
>> is a lot more readable...
>> Also, I remember there were some problems with bd_disk possibly being
>> NULL. Was that fixed?
> 
> Thank you for the review, but I'd like to keep the current style; I
> think it's readable enough as it is. The NULL bd_disk was just a
> suspicion - bd_disk is not NULL if bdev is not NULL.
Damien is right. bd_disk can be NULL. From
https://lore.kernel.org/linux-block/20210425043020.30065-1-bvanassche@acm.org/:
"bio_max_size() may get called before device_add_disk() and hence needs to
check whether or not the block device pointer is NULL. [ ... ]
This patch prevents that bio_max_size() triggers the following kernel
crash during a SCSI LUN scan:
BUG: KASAN: null-ptr-deref in bio_add_hw_page+0xa6/0x310
Read of size 8 at addr 00000000000005a8 by task kworker/u16:0/7
Workqueue: events_unbound async_run_entry_fn
Call Trace:
 show_stack+0x52/0x58
 dump_stack+0x9d/0xcf
 kasan_report.cold+0x4b/0x50
 __asan_load8+0x69/0x90
 bio_add_hw_page+0xa6/0x310
 bio_add_pc_page+0xaa/0xe0
 bio_map_kern+0xdc/0x1a0
 blk_rq_map_kern+0xcd/0x2d0
 __scsi_execute+0x9a/0x290 [scsi_mod]
 scsi_probe_lun.constprop.0+0x17c/0x660 [scsi_mod]
 scsi_probe_and_add_lun+0x178/0x750 [scsi_mod]
 __scsi_add_device+0x18c/0x1a0 [scsi_mod]
 ata_scsi_scan_host+0xe5/0x260 [libata]
 async_port_probe+0x94/0xb0 [libata]
 async_run_entry_fn+0x7d/0x2d0
 process_one_work+0x582/0xac0
 worker_thread+0x8f/0x5a0
 kthread+0x222/0x250
 ret_from_fork+0x1f/0x30"

Bart.
Christoph Hellwig June 7, 2021, 6:32 a.m. UTC | #6
NAK.  As discussed countless times, we already have an actual limit,
and we can look at it as advisory before building ever bigger bios,
but the last thing we need is yet another confusing limit.
Christoph Hellwig June 7, 2021, 6:35 a.m. UTC | #7
On Fri, Jun 04, 2021 at 07:52:35AM -0700, Bart Van Assche wrote:
> Damien is right. bd_disk can be NULL. From

bd_disk is initialized in bdev_alloc, so it should never be NULL.
bi_bdev OTOH is only set after bio_add_page in various places, or not at
all in the case of passthrough bios.  Which is a bit of a mess and I
have plans to fix it.
Changheun Lee June 7, 2021, 10:42 a.m. UTC | #8
> NAK.  As discussed countless times, we already have an actual limit,
> and we can look at it as advisory before building ever bigger bios,
> but the last thing we need is yet another confusing limit.

Thank you for your review and feedback. I thought that proper bio size
control through a proper interface might be good, and could be helpful
to other modules beyond the reported performance problem. I still think
that providing a common interface to control the bio size in the block
layer would be useful, even though the current patch is not accepted.
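
For example, the advisory use of the existing limit that was suggested
might look like the sketch below - the helper name and the exact
threshold are hypothetical:

static bool bio_advisory_full(struct bio *bio, struct request_queue *q)
{
	/* Stop growing the bio once it covers one full-size request. */
	return bio->bi_iter.bi_size >=
	       queue_max_sectors(q) << SECTOR_SHIFT;
}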

Thanks for all,
Changheun Lee
Bart Van Assche June 7, 2021, 4:46 p.m. UTC | #9
On 6/6/21 11:35 PM, Christoph Hellwig wrote:
> On Fri, Jun 04, 2021 at 07:52:35AM -0700, Bart Van Assche wrote:
>> Damien is right. bd_disk can be NULL. From
> 
> bd_disk is initialized in bdev_alloc, so it should never be NULL.
> bi_bdev OTOH is only set after bio_add_page in various places, or not at
> all in the case of passthrough bios.  Which is a bit of a mess and I
> have plans to fix it.

Hi Christoph,

Thank you for sharing your plans for improving how bi_bdev is set.

In case you have not yet had the time to do this, please take a
look at the call trace available at
https://lore.kernel.org/linux-block/20210425043020.30065-1-bvanassche@acm.org/.
That call trace shows how bio_add_pc_page() is called by the SCSI core
before alloc_disk() is called. I think that sending a SCSI command
before alloc_disk() is called is fundamental in the SCSI core because
the SCSI INQUIRY command has to be sent before it is known whether or
not a SCSI LUN represents a disk.

Thanks,

Bart.
Christoph Hellwig June 8, 2021, 5:22 a.m. UTC | #10
On Mon, Jun 07, 2021 at 09:46:56AM -0700, Bart Van Assche wrote:
> In case you have not yet had the time to do this, please take a
> look at the call trace available at
> https://lore.kernel.org/linux-block/20210425043020.30065-1-bvanassche@acm.org/.
> That call trace shows how bio_add_pc_page() is called by the SCSI core
> before alloc_disk() is called. I think that sending a SCSI command
> before alloc_disk() is called is fundamental in the SCSI core because
> the SCSI INQUIRY command has to be sent before it is known whether or
> not a SCSI LUN represents a disk.

I have a plan for that as well, stay tuned :)

Patch

diff --git a/block/bio.c b/block/bio.c
index 44205dfb6b60..73b673f1684e 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -255,6 +255,13 @@ void bio_init(struct bio *bio, struct bio_vec *table,
 }
 EXPORT_SYMBOL(bio_init);
 
+unsigned int bio_max_bytes(struct bio *bio)
+{
+	struct block_device *bdev = bio->bi_bdev;
+
+	return bdev ? bdev->bd_disk->queue->limits.max_bio_bytes : UINT_MAX;
+}
+
 /**
  * bio_reset - reinitialize a bio
  * @bio:	bio to reset
@@ -866,7 +873,7 @@ bool __bio_try_merge_page(struct bio *bio, struct page *page,
 		struct bio_vec *bv = &bio->bi_io_vec[bio->bi_vcnt - 1];
 
 		if (page_is_mergeable(bv, page, len, off, same_page)) {
-			if (bio->bi_iter.bi_size > UINT_MAX - len) {
+			if (bio->bi_iter.bi_size > bio_max_bytes(bio) - len) {
 				*same_page = false;
 				return false;
 			}
@@ -995,6 +1002,7 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
 {
 	unsigned short nr_pages = bio->bi_max_vecs - bio->bi_vcnt;
 	unsigned short entries_left = bio->bi_max_vecs - bio->bi_vcnt;
+	unsigned int bytes_left = bio_max_bytes(bio) - bio->bi_iter.bi_size;
 	struct bio_vec *bv = bio->bi_io_vec + bio->bi_vcnt;
 	struct page **pages = (struct page **)bv;
 	bool same_page = false;
@@ -1010,7 +1018,8 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
 	BUILD_BUG_ON(PAGE_PTRS_PER_BVEC < 2);
 	pages += entries_left * (PAGE_PTRS_PER_BVEC - 1);
 
-	size = iov_iter_get_pages(iter, pages, LONG_MAX, nr_pages, &offset);
+	size = iov_iter_get_pages(iter, pages, bytes_left, nr_pages,
+				  &offset);
 	if (unlikely(size <= 0))
 		return size ? size : -EFAULT;
 
@@ -1038,6 +1047,7 @@ static int __bio_iov_append_get_pages(struct bio *bio, struct iov_iter *iter)
 {
 	unsigned short nr_pages = bio->bi_max_vecs - bio->bi_vcnt;
 	unsigned short entries_left = bio->bi_max_vecs - bio->bi_vcnt;
+	unsigned int bytes_left = bio_max_bytes(bio) - bio->bi_iter.bi_size;
 	struct request_queue *q = bio->bi_bdev->bd_disk->queue;
 	unsigned int max_append_sectors = queue_max_zone_append_sectors(q);
 	struct bio_vec *bv = bio->bi_io_vec + bio->bi_vcnt;
@@ -1058,7 +1068,8 @@ static int __bio_iov_append_get_pages(struct bio *bio, struct iov_iter *iter)
 	BUILD_BUG_ON(PAGE_PTRS_PER_BVEC < 2);
 	pages += entries_left * (PAGE_PTRS_PER_BVEC - 1);
 
-	size = iov_iter_get_pages(iter, pages, LONG_MAX, nr_pages, &offset);
+	size = iov_iter_get_pages(iter, pages, bytes_left, nr_pages,
+				  &offset);
 	if (unlikely(size <= 0))
 		return size ? size : -EFAULT;
 
diff --git a/block/blk-settings.c b/block/blk-settings.c
index 902c40d67120..e270e31519a1 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -32,6 +32,7 @@ EXPORT_SYMBOL_GPL(blk_queue_rq_timeout);
  */
 void blk_set_default_limits(struct queue_limits *lim)
 {
+	lim->max_bio_bytes = UINT_MAX;
 	lim->max_segments = BLK_MAX_SEGMENTS;
 	lim->max_discard_segments = 1;
 	lim->max_integrity_segments = 0;
@@ -100,6 +101,24 @@ void blk_queue_bounce_limit(struct request_queue *q, enum blk_bounce bounce)
 }
 EXPORT_SYMBOL(blk_queue_bounce_limit);
 
+/**
+ * blk_queue_max_bio_bytes - set bio max size for queue
+ * @q: the request queue for the device
+ * @bytes : bio max bytes to be set
+ *
+ * Description:
+ *    Set proper bio max size to optimize queue operating.
+ **/
+void blk_queue_max_bio_bytes(struct request_queue *q, unsigned int bytes)
+{
+	struct queue_limits *limits = &q->limits;
+	unsigned int max_bio_bytes = round_up(bytes, PAGE_SIZE);
+
+	limits->max_bio_bytes = max_t(unsigned int, max_bio_bytes,
+				      BIO_MAX_VECS * PAGE_SIZE);
+}
+EXPORT_SYMBOL(blk_queue_max_bio_bytes);
+
 /**
  * blk_queue_max_hw_sectors - set max sectors for a request for this queue
  * @q:  the request queue for the device
diff --git a/include/linux/bio.h b/include/linux/bio.h
index a0b4cfdf62a4..3959cc1a0652 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -106,6 +106,8 @@ static inline void *bio_data(struct bio *bio)
 	return NULL;
 }
 
+extern unsigned int bio_max_bytes(struct bio *bio);
+
 /**
  * bio_full - check if the bio is full
  * @bio:	bio to check
@@ -119,7 +121,7 @@ static inline bool bio_full(struct bio *bio, unsigned len)
 	if (bio->bi_vcnt >= bio->bi_max_vecs)
 		return true;
 
-	if (bio->bi_iter.bi_size > UINT_MAX - len)
+	if (bio->bi_iter.bi_size > bio_max_bytes(bio) - len)
 		return true;
 
 	return false;
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 1255823b2bc0..861888501fc0 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -326,6 +326,8 @@ enum blk_bounce {
 };
 
 struct queue_limits {
+	unsigned int		max_bio_bytes;
+
 	enum blk_bounce		bounce;
 	unsigned long		seg_boundary_mask;
 	unsigned long		virt_boundary_mask;
@@ -1132,6 +1134,7 @@ extern void blk_abort_request(struct request *);
  * Access functions for manipulating queue properties
  */
 extern void blk_cleanup_queue(struct request_queue *);
+extern void blk_queue_max_bio_bytes(struct request_queue *, unsigned int);
 void blk_queue_bounce_limit(struct request_queue *q, enum blk_bounce limit);
 extern void blk_queue_max_hw_sectors(struct request_queue *, unsigned int);
 extern void blk_queue_chunk_sectors(struct request_queue *, unsigned int);