
[5/6] io_uring: enable use of bio alloc cache

Message ID 20210812154149.1061502-6-axboe@kernel.dk (mailing list archive)
State New, archived
Series Enable bio recycling for polled IO

Commit Message

Jens Axboe Aug. 12, 2021, 3:41 p.m. UTC
Mark polled IO as being safe for dipping into the bio allocation
cache, in case the targeted bio_set has it enabled.

This brings an IOPOLL gen2 Optane QD=128 workload from ~3.0M IOPS to
~3.3M IOPS.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io_uring.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Comments

Christoph Hellwig Aug. 12, 2021, 4:39 p.m. UTC | #1
On Thu, Aug 12, 2021 at 09:41:48AM -0600, Jens Axboe wrote:
> Mark polled IO as being safe for dipping into the bio allocation
> cache, in case the targeted bio_set has it enabled.
> 
> This brings an IOPOLL gen2 Optane QD=128 workload from ~3.0M IOPS to
> ~3.3M IOPS.

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>

Didn't the cover letter say 3.5M+ IOPS, though?
Jens Axboe Aug. 12, 2021, 4:46 p.m. UTC | #2
On 8/12/21 10:39 AM, Christoph Hellwig wrote:
> On Thu, Aug 12, 2021 at 09:41:48AM -0600, Jens Axboe wrote:
>> Mark polled IO as being safe for dipping into the bio allocation
>> cache, in case the targeted bio_set has it enabled.
>>
>> This brings an IOPOLL gen2 Optane QD=128 workload from ~3.0M IOPS to
>> ~3.3M IOPS.
> 
> Looks good,
> 
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> 
> Didn't the cover letter say 3.5M+ IOPS, though?

It does indeed; we've had some recent improvements, so the range is now
more like 3.2M -> 3.5M IOPS with the cache. I didn't update the cover
letter, as it's still roughly the same 10% bump.

Patch

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 6c65c90131cb..ea387b0741b8 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2736,7 +2736,7 @@  static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 		    !kiocb->ki_filp->f_op->iopoll)
 			return -EOPNOTSUPP;
 
-		kiocb->ki_flags |= IOCB_HIPRI;
+		kiocb->ki_flags |= IOCB_HIPRI | IOCB_ALLOC_CACHE;
 		kiocb->ki_complete = io_complete_rw_iopoll;
 		req->iopoll_completed = 0;
 	} else {