
[v4,1/2] bio: limit bio max size

Message ID 20210129034909.18785-1-nanich.lee@samsung.com (mailing list archive)
State New, archived
Series [v4,1/2] bio: limit bio max size

Commit Message

Changheun Lee Jan. 29, 2021, 3:49 a.m. UTC
bio size can grow up to 4GB when multi-page bvec is enabled,
but sometimes this leads to inefficient behavior.
In the case of large-chunk direct I/O - say a 32MB chunk read in user
space - all pages for the 32MB are merged into a single bio structure
if the pages' physical addresses are contiguous. That delays submission
until the merge is complete. bio max size should be limited to a proper size.

When a 32MB chunk read with the direct I/O option comes from userspace,
current kernel behavior is as below. It's a timeline.

 | bio merge for 32MB. A total of 8,192 pages are merged.
 | Total elapsed time is over 2ms.
 |------------------ ... ----------------------->|
                                                 | 8,192 pages are merged into one bio.
                                                 | At this point, the first bio submit is done.
                                                 | The bio is split into 32 read requests and issued.
                                                 |--------------->
                                                  |--------------->
                                                   |--------------->
                                                              ......
                                                                   |--------------->
                                                                    |--------------->|
                            total 19ms elapsed to complete the 32MB read from device. |

If bio max size is limited to 1MB, the behavior changes as below.

 | bio merge for 1MB. 256 pages are merged into each bio.
 | A total of 32 bios will be made.
 | Total elapsed time is still over 2ms.
 | But the first bio is submitted much sooner, after about 100us.
 |--->|--->|--->|---> ... -->|--->|--->|--->|--->|
      | 256 pages are merged into one bio.
      | At this point, the first bio submit is done,
      | and one read request is issued for this bio.
      |--------------->
           |--------------->
                |--------------->
                                      ......
                                                 |--------------->
                                                  |--------------->|
         total 17ms elapsed to complete the 32MB read from device. |

As a result, read requests are issued sooner when bio max size is limited.
With current kernel behavior under multi-page bvec, a super large bio can be
created, and it delays the issue of the first I/O request.

Signed-off-by: Changheun Lee <nanich.lee@samsung.com>
---
 block/bio.c            | 13 ++++++++++++-
 include/linux/bio.h    |  2 +-
 include/linux/blkdev.h |  3 +++
 3 files changed, 16 insertions(+), 2 deletions(-)

Comments

Ming Lei Jan. 29, 2021, 7:23 a.m. UTC | #1
On Fri, Jan 29, 2021 at 12:49:08PM +0900, Changheun Lee wrote:
> [...]

Can you share whether enabling THP in your application avoids this issue? BTW,
you need to make the 32MB buffer aligned to the huge page size. IMO, THP fits
your case perfectly.
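
For example, a minimal sketch of a huge-page-aligned allocation on the
application side, assuming the usual 2MB THP size (the helper name and the
constants are illustrative, not from this thread):

#include <stdlib.h>
#include <sys/mman.h>

#define HPAGE_SIZE	(2UL << 20)	/* assumed 2MB THP size */
#define BUF_SIZE	(32UL << 20)	/* the 32MB read buffer */

static void *alloc_thp_aligned_buf(void)
{
	void *buf;

	/* align the buffer to the huge page size so THP can back it */
	if (posix_memalign(&buf, HPAGE_SIZE, BUF_SIZE))
		return NULL;

	/* hint the kernel to back this range with huge pages */
	madvise(buf, BUF_SIZE, MADV_HUGEPAGE);

	return buf;
}

With a PMD-aligned buffer, get_user_pages_fast() can walk one PMD per 2MB
instead of 512 PTEs, which is presumably why THP helps here.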


Thanks,
Ming
Changheun Lee Feb. 1, 2021, 2:52 a.m. UTC | #2
> On Fri, Jan 29, 2021 at 12:49:08PM +0900, Changheun Lee wrote:
> > [...]
> 
> Can you share whether enabling THP in your application avoids this issue? BTW,
> you need to make the 32MB buffer aligned to the huge page size. IMO, THP fits
> your case perfectly.
> 

THP is already enabled in my environment, as shown below. It has no effect.

cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never

This issue was reported by a performance benchmark application in the open
market, and I can't control how applications in the open market behave.
It's not only my own case; this issue may occur in many mobile environments.
At least, I have confirmed this problem on Exynos and Qualcomm chipsets.
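
For reference, the I/O pattern in question boils down to a plain O_DIRECT
read like the sketch below (the device path and the 4KB alignment are
placeholders; what alignment the real benchmark uses is unknown):

#define _GNU_SOURCE	/* for O_DIRECT */
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

#define CHUNK_SIZE	(32UL << 20)	/* 32MB chunk, as in the report */

int main(void)
{
	void *buf;
	int fd;

	/* O_DIRECT needs an aligned user buffer; 4KB is used here */
	if (posix_memalign(&buf, 4096, CHUNK_SIZE))
		return 1;

	fd = open("/dev/block/sda", O_RDONLY | O_DIRECT);	/* placeholder */
	if (fd < 0)
		return 1;

	/* one large read: the kernel merges up to 8,192 pages into a bio */
	if (read(fd, buf, CHUNK_SIZE) < 0)
		return 1;

	close(fd);
	free(buf);
	return 0;
}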

> 
> Thanks,
> Ming
> 
> 

---
Changheun Lee
Samsung Electronics.
Ming Lei Feb. 1, 2021, 7:14 a.m. UTC | #3
On Mon, Feb 01, 2021 at 11:52:48AM +0900, Changheun Lee wrote:
> [...]
> 
> THP is already enabled in my environment, as shown below. It has no effect.
> 
> cat /sys/kernel/mm/transparent_hugepage/enabled
> [always] madvise never

The 32MB user buffer needs to be huge page size aligned. If your system
supports bcc/bpftrace, it is quite easy to check if the buffer is
aligned.
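
For example, something like this one-liner could print the alignment (the
comm filter is a placeholder for the benchmark process name, and 2MB is the
assumed huge page size):

bpftrace -e 'tracepoint:syscalls:sys_enter_read /comm == "benchmark"/ {
	printf("buf=0x%llx 2MB-aligned=%d\n", (uint64)args->buf,
	       ((uint64)args->buf & ((2 << 20) - 1)) == 0);
}'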

> 
> This issue was reported by a performance benchmark application in the open
> market, and I can't control how applications in the open market behave.
> It's not only my own case; this issue may occur in many mobile environments.
> At least, I have confirmed this problem on Exynos and Qualcomm chipsets.

You just said it takes 2ms to build the 32MB bio, but you never investigated
the reason. I guess it is get_user_pages_fast(), but it may be something else.
Can you dig further into the reason? Maybe it is an arm64-specific issue.

BTW, bio_iov_iter_get_pages() takes just ~200us on one x86_64 VM with THP, as
observed via bcc/funclatency when running the following workload:

[root@ktest-01 test]# cat fio.job
[global]
bs=32768k
rw=randread
iodepth=1
ioengine=psync
direct=1
runtime=20
time_based

group_reporting=0
ramp_time=5

[diotest]
filename=/dev/sde


[root@ktest-01 func]# /usr/share/bcc/tools/funclatency bio_iov_iter_get_pages
Tracing 1 functions for "bio_iov_iter_get_pages"... Hit Ctrl-C to end.
^C
     nsecs               : count     distribution
         0 -> 1          : 0        |                                        |
         2 -> 3          : 0        |                                        |
         4 -> 7          : 0        |                                        |
         8 -> 15         : 0        |                                        |
        16 -> 31         : 0        |                                        |
        32 -> 63         : 0        |                                        |
        64 -> 127        : 0        |                                        |
       128 -> 255        : 0        |                                        |
       256 -> 511        : 0        |                                        |
       512 -> 1023       : 0        |                                        |
      1024 -> 2047       : 0        |                                        |
      2048 -> 4095       : 0        |                                        |
      4096 -> 8191       : 0        |                                        |
      8192 -> 16383      : 0        |                                        |
     16384 -> 32767      : 0        |                                        |
     32768 -> 65535      : 0        |                                        |
     65536 -> 131071     : 0        |                                        |
    131072 -> 262143     : 1842     |****************************************|
    262144 -> 524287     : 125      |**                                      |
    524288 -> 1048575    : 6        |                                        |
   1048576 -> 2097151    : 0        |                                        |
   2097152 -> 4194303    : 1        |                                        |
   4194304 -> 8388607    : 0        |                                        |
   8388608 -> 16777215   : 1        |                                        |
Detaching...
Changheun Lee Feb. 2, 2021, 4:12 a.m. UTC | #4
> On Mon, Feb 01, 2021 at 11:52:48AM +0900, Changheun Lee wrote:
> [...]
> 
> You just said it takes 2ms to build the 32MB bio, but you never investigated
> the reason. I guess it is get_user_pages_fast(), but it may be something else.
> Can you dig further into the reason? Maybe it is an arm64-specific issue.
> 
> BTW, bio_iov_iter_get_pages() takes just ~200us on one x86_64 VM with THP, as
> observed via bcc/funclatency when running the following workload:
> 

I think you focused on bio_iov_iter_get_pages() because I only mentioned the
page merge delay. Sorry about that; I left out details of this issue.
Actually, many operations run inside the while loop in do_direct_IO(), and the
page merge is just one of them. The page merge is invoked via
dio_send_cur_page() in that loop. Below is the call stack.

__bio_try_merge_page+0x4c/0x614
bio_add_page+0x40/0x12c
dio_send_cur_page+0x13c/0x374
submit_page_section+0xb4/0x304
do_direct_IO+0x3d4/0x854
do_blockdev_direct_IO+0x488/0xa18
__blockdev_direct_IO+0x30/0x3c
f2fs_direct_IO+0x6d0/0xb80
generic_file_read_iter+0x284/0x45c
f2fs_file_read_iter+0x3c/0xac
__vfs_read+0x19c/0x204
vfs_read+0xa4/0x144

The 2ms delay is not caused by the page merge operation alone; it includes
many other operations too. But all of those operations, page merge included,
have to execute more as the bio size grows.
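
One way to break the 2ms down, assuming the bcc tools run on the target and
these symbols are not inlined, is to compare the whole loop against the merge
path alone:

# wall time of the whole direct I/O loop, in microseconds
/usr/share/bcc/tools/funclatency -u do_direct_IO

# time spent in the page merge path alone
/usr/share/bcc/tools/funclatency -u __bio_try_merge_page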

> [...]

---
Changheun Lee
Samsung Electronics
Ming Lei Feb. 3, 2021, 3:40 a.m. UTC | #5
On Tue, Feb 02, 2021 at 01:12:04PM +0900, Changheun Lee wrote:
> [...]
> 
> I think you focused on bio_iov_iter_get_pages() because I only mentioned the
> page merge delay. Sorry about that; I left out details of this issue.
> Actually, many operations run inside the while loop in do_direct_IO(), and the
> page merge is just one of them. The page merge is invoked via
> dio_send_cur_page() in that loop. Below is the call stack.
> 
> __bio_try_merge_page+0x4c/0x614
> bio_add_page+0x40/0x12c
> dio_send_cur_page+0x13c/0x374
> submit_page_section+0xb4/0x304
> do_direct_IO+0x3d4/0x854
> do_blockdev_direct_IO+0x488/0xa18
> __blockdev_direct_IO+0x30/0x3c
> f2fs_direct_IO+0x6d0/0xb80
> generic_file_read_iter+0x284/0x45c
> f2fs_file_read_iter+0x3c/0xac
> __vfs_read+0x19c/0x204
> vfs_read+0xa4/0x144
> 
> The 2ms delay is not caused by the page merge operation alone; it includes
> many other operations too. But all of those operations, page merge included,
> have to execute more as the bio size grows.

OK, got it.

Then I think you can just limit bio size in dio_bio_add_page() instead of
doing it for all.
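
Roughly like the sketch below against fs/direct-io.c - the 1MB cap and its
placement are only illustrative:

static inline int dio_bio_add_page(struct dio_submit *sdio)
{
	/*
	 * Sketch: refuse the add once this bio reaches 1MB. The caller,
	 * dio_send_cur_page(), then submits this bio and allocates a
	 * fresh one, so the retried add succeeds.
	 */
	if (sdio->bio->bi_iter.bi_size + sdio->cur_page_len > SZ_1M)
		return 1;

	/* ... existing bio_add_page() logic unchanged ... */
}

That would keep the cap local to the direct I/O path instead of adding a
queue-wide flag.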
Changheun Lee Feb. 3, 2021, 5:30 a.m. UTC | #6
> On Tue, Feb 02, 2021 at 01:12:04PM +0900, Changheun Lee wrote:
> [...]
> 
> OK, got it.
> 
> Then I think you can just limit bio size in dio_bio_add_page() instead of
> doing it for all.

The root cause is the growth of bio size after multi-page bvec was applied,
so I think the basic solution is to limit bio size.
A similar problem might occur in various other scenarios too, because the bio
merge functions are called from many places in the kernel. And bio max size
was 1MB before multi-page bvec was applied. So I think limiting the bio is
reasonable, and this solution is not bad for everyone, because it is only
activated case by case via a queue flag (see the sketch below).
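
For illustration, the opt-in on the driver side is a single flag at queue
setup time; this is a sketch, since the actual driver hook presumably lands
in patch 2/2 of this series, which is not shown here:

/* e.g. in a SCSI LLD's device configuration hook */
blk_queue_flag_set(QUEUE_FLAG_LIMIT_BIO_SIZE, sdev->request_queue);

With the flag set, bio_max_size() in the patch below returns the queue's max
sectors in bytes instead of UINT_MAX; without it, behavior is unchanged.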

> 
> -- 
> Ming
> 
> 
---
Changheun Lee
Samsung Electronics

Patch

diff --git a/block/bio.c b/block/bio.c
index 1f2cc1fbe283..c528e1f944c7 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -287,6 +287,17 @@  void bio_init(struct bio *bio, struct bio_vec *table,
 }
 EXPORT_SYMBOL(bio_init);
 
+unsigned int bio_max_size(struct bio *bio)
+{
+	struct request_queue *q = bio->bi_disk->queue;
+
+	if (blk_queue_limit_bio_size(q))
+		return blk_queue_get_max_sectors(q, bio_op(bio))
+			<< SECTOR_SHIFT;
+
+	return UINT_MAX;
+}
+
 /**
  * bio_reset - reinitialize a bio
  * @bio:	bio to reset
@@ -877,7 +888,7 @@  bool __bio_try_merge_page(struct bio *bio, struct page *page,
 		struct bio_vec *bv = &bio->bi_io_vec[bio->bi_vcnt - 1];
 
 		if (page_is_mergeable(bv, page, len, off, same_page)) {
-			if (bio->bi_iter.bi_size > UINT_MAX - len) {
+			if (bio->bi_iter.bi_size > bio_max_size(bio) - len) {
 				*same_page = false;
 				return false;
 			}
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 1edda614f7ce..13b6f6562a5b 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -113,7 +113,7 @@  static inline bool bio_full(struct bio *bio, unsigned len)
 	if (bio->bi_vcnt >= bio->bi_max_vecs)
 		return true;
 
-	if (bio->bi_iter.bi_size > UINT_MAX - len)
+	if (bio->bi_iter.bi_size > bio_max_size(bio) - len)
 		return true;
 
 	return false;
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index f94ee3089e01..3aeab9e7e97b 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -621,6 +621,7 @@  struct request_queue {
 #define QUEUE_FLAG_RQ_ALLOC_TIME 27	/* record rq->alloc_time_ns */
 #define QUEUE_FLAG_HCTX_ACTIVE	28	/* at least one blk-mq hctx is active */
 #define QUEUE_FLAG_NOWAIT       29	/* device supports NOWAIT */
+#define QUEUE_FLAG_LIMIT_BIO_SIZE 30	/* limit bio size */
 
 #define QUEUE_FLAG_MQ_DEFAULT	((1 << QUEUE_FLAG_IO_STAT) |		\
 				 (1 << QUEUE_FLAG_SAME_COMP) |		\
@@ -667,6 +668,8 @@  bool blk_queue_flag_test_and_set(unsigned int flag, struct request_queue *q);
 #define blk_queue_fua(q)	test_bit(QUEUE_FLAG_FUA, &(q)->queue_flags)
 #define blk_queue_registered(q)	test_bit(QUEUE_FLAG_REGISTERED, &(q)->queue_flags)
 #define blk_queue_nowait(q)	test_bit(QUEUE_FLAG_NOWAIT, &(q)->queue_flags)
+#define blk_queue_limit_bio_size(q)	\
+	test_bit(QUEUE_FLAG_LIMIT_BIO_SIZE, &(q)->queue_flags)
 
 extern void blk_set_pm_only(struct request_queue *q);
 extern void blk_clear_pm_only(struct request_queue *q);