
[0/10] mm: Fix various readahead quirks

Message ID 20240625100859.15507-1-jack@suse.cz

Message

Jan Kara June 25, 2024, 10:18 a.m. UTC
Hello!

When we were internally testing the performance of recent kernels, we noticed
quite variable readahead performance arising from various quirks in the
readahead code. So I went on a cleaning spree there. This is the batch of
patches resulting from that. A quick test in my test VM with the following fio
job file:

[global]
direct=0
ioengine=sync
invalidate=1
blocksize=4k
size=10g
readwrite=read

[reader]
numjobs=1

shows that this patch series improves the throughput from a variable
310-340 MB/s to a rather stable 350 MB/s. As a side effect, these cleanups
also address the issue noticed by Bruz Zhang [1].
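
For reproduction, the job above can be fed to fio directly; a minimal sketch
of the invocation (the file name reader.fio is only an illustrative
assumption):

# save the job file above as e.g. reader.fio, then run:
fio reader.fio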

								Honza

[1] https://lore.kernel.org/all/20240618114941.5935-1-zhangpengpeng0808@gmail.com/

Comments

Josef Bacik June 25, 2024, 5:12 p.m. UTC | #1
On Tue, Jun 25, 2024 at 12:18:50PM +0200, Jan Kara wrote:
> Hello!
> 
> When we were internally testing the performance of recent kernels, we
> noticed quite variable readahead performance arising from various quirks in
> the readahead code. So I went on a cleaning spree there. This is the batch
> of patches resulting from that. A quick test in my test VM with the following fio
> job file:
> 
> [global]
> direct=0
> ioengine=sync
> invalidate=1
> blocksize=4k
> size=10g
> readwrite=read
> 
> [reader]
> numjobs=1
> 
> shows that this patch series improves the throughput from a variable
> 310-340 MB/s to a rather stable 350 MB/s. As a side effect, these cleanups
> also address the issue noticed by Bruz Zhang [1].
> 

Reviewed-by: Josef Bacik <josef@toxicpanda.com>

Thanks,

Josef
Zhang Peng June 27, 2024, 3:04 a.m. UTC | #2
I tested this batch of patches with fio, and it indeed gives a huge speedup
for sequential reads when the block size is 4KiB. The results are as follows;
for the async reads, iodepth is set to 128, and the other settings are
self-evident (a sketch of one such job file follows the table).

casename                 upstream    with fix    speedup
----------------------   ---------   ---------   ---------
randread-4k-sync             48991       47773    -2.4862%
seqread-4k-sync            1162758     1422955    22.3776%
seqread-1024k-sync         1460208     1452522    -0.5264%
randread-4k-libaio           47467       47309    -0.3329%
randread-4k-posixaio         49190       49512     0.6546%
seqread-4k-libaio          1085932     1234635    13.6936%
seqread-1024k-libaio       1423341     1402214    -1.4843%
seqread-4k-posixaio        1165084     1369613    17.5549%
seqread-1024k-posixaio     1435422     1408808    -1.8541%
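
For concreteness, a minimal sketch of what one of the async cases above
(seqread-4k-libaio) could look like as an fio job; ioengine, iodepth and
block size come from the description above, while the remaining settings are
assumptions carried over from Jan's job file:

[global]
direct=0
; libaio here, posixaio for the posixaio cases
ioengine=libaio
; queue depth as stated above for the async reads
iodepth=128
invalidate=1
; 4k here, 1024k for the 1024k cases
blocksize=4k
; assumed, mirroring Jan's job file
size=10g
; "randread" for the randread cases
readwrite=read

[reader]
numjobs=1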
zippermonkey June 27, 2024, 6:10 a.m. UTC | #3
Hi, Jan

This is my environment:
PCIe Gen 3 NVMe SSD,
XFS filesystem with the default config,
host with 96 cores.

BTW, your patchset also seems to fix the bug that I found [1], and more
completely than mine, so you can ignore my patch :)

Thanks,
-zp

Tested-by: Zhang Peng <bruzzhang@tencent.com>


[1] https://lore.kernel.org/linux-mm/20240625103653.uzabtus3yq2lo3o6@quack3/T/
Andrew Morton June 27, 2024, 9:13 p.m. UTC | #4
On Thu, 27 Jun 2024 14:10:48 +0800 zippermonkey <zzippermonkey@outlook.com> wrote:

> Tested-by: Zhang Peng <bruzzhang@tencent.com>

Thanks.  I added this to the changelogs and pasted your testing results
into the [0/N] description.