Message ID | 20240118180541.930783-1-axboe@kernel.dk (mailing list archive) |
---|---|
Series | mq-deadline scalability improvements |
On 1/18/24 11:04 AM, Jens Axboe wrote:
> With that in place, the same test case now does:
>
> Device          QD    Jobs    IOPS     Contention    Diff
> =============================================================
> null_blk        4     32      2250K    28%           +106%
> nvme0n1         4     32      2560K    23%           +112%

nvme0n1         4     32      2560K    23%           +139%

Apparently I can't math, this is a +139% improvement for the nvme case...
Just wanted to make it clear that the IOPS number was correct, it's just
the diff math that was wrong.
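For reference, the diff column is the usual relative improvement, (new - old) / old. The baseline (pre-patch) IOPS numbers are not quoted in this followup, so the baselines in the sketch below are only the values implied by the posted 2560K result and the two percentages; they are back-calculated illustrations, not figures from the actual test run.

```python
# Back-of-the-envelope check of the corrected diff column.
# The baselines are NOT quoted in the mail above; they are hypothetical
# values implied by the posted IOPS and the two percentages.

def improvement(new_iops: float, old_iops: float) -> float:
    """Relative improvement as a percentage: (new - old) / old * 100."""
    return (new_iops - old_iops) / old_iops * 100

new = 2_560_000                 # nvme0n1 IOPS with the patches, from the table

implied_old_139 = new / 2.39    # ~1071K: baseline implied by the corrected +139%
implied_old_112 = new / 2.12    # ~1208K: baseline implied by the original +112%

print(f"+{improvement(new, implied_old_139):.0f}%")  # prints ~ +139%
print(f"+{improvement(new, implied_old_112):.0f}%")  # prints ~ +112%
```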
On 1/18/24 12:29 PM, Jens Axboe wrote:
> On 1/18/24 11:04 AM, Jens Axboe wrote:
>> With that in place, the same test case now does:
>>
>> Device          QD    Jobs    IOPS     Contention    Diff
>> =============================================================
>> null_blk        4     32      2250K    28%           +106%
>> nvme0n1         4     32      2560K    23%           +112%
>
> nvme0n1         4     32      2560K    23%           +139%
>
> Apparently I can't math, this is a +139% improvement for the nvme case...
> Just wanted to make it clear that the IOPS number was correct, it's just
> the diff math that was wrong.

And a further followup, since I ran some quick testing on another box that
has a raid1 of more normal drives (SATA, 32 tags). Both pre and post the
patches, the performance is roughly the same. The bigger difference is that
the pre result uses 8% systime to do ~73K IOPS, while with the patches we're
using 1% systime to do the same work.

This should help answer the question "does this matter at all?". The answer
is definitely yes. It's not just about scalability, as is usually the case
with improvements like this, it's about efficiency as well. 8x the sys time
is ridiculous.
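To put the efficiency point in concrete terms, here is a rough per-IO cost comparison using only the numbers quoted above (~73K IOPS at 8% vs 1% sys time). The box's CPU count and the exact accounting method aren't stated in the mail, so treat this as a sketch: the absolute per-IO figures depend on those assumptions, but the 8x ratio between pre and post does not.

```python
# Rough sys-time-per-IO comparison from the numbers in the followup.
# The sys percentages are treated as a fraction of one CPU's worth of time;
# the real machine's CPU count isn't stated, but the 8x ratio is unaffected.

IOPS = 73_000            # ~73K IOPS, roughly the same before and after the patches

def sys_time_per_io_us(sys_fraction: float, iops: float) -> float:
    """Microseconds of system time consumed per completed IO."""
    return sys_fraction * 1_000_000 / iops

before = sys_time_per_io_us(0.08, IOPS)   # pre-patch: 8% sys time
after = sys_time_per_io_us(0.01, IOPS)    # post-patch: 1% sys time

print(f"pre : {before:.2f} us sys time per IO")
print(f"post: {after:.2f} us sys time per IO")
print(f"ratio: {before / after:.1f}x")    # ~8x: same work for 1/8th the CPU cost
```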