
[RFC] Enable bio cache for IRQ driven IO from io_uring

Message ID 3bff2a83-cab2-27b6-6e67-bdae04440458@kernel.dk (mailing list archive)

Commit Message

Jens Axboe Aug. 18, 2021, 4:54 p.m. UTC
We previously enabled this for O_DIRECT polled IO; however, io_uring
completes all IO from task context these days, so it can be enabled for
that path too. This requires moving the bio_put() out of IRQ context,
which can be accomplished by passing ownership of the bio back to the
issuer.

Use kiocb->private for that, which should be (as far as I can tell) free
once we get to the completion side of things. Add an IOCB_PUT_CACHE flag
to tell the issuer that ownership has been passed back, so that the
issuer can put the bio from a safe context.

As with polled IO, this is good for a 10% performance increase.

Signed-off-by: Jens Axboe <axboe@kernel.dk>

---

Just hacked this up and tested it; works for me. I would welcome input
on alternative methods here, if anyone has good suggestions.
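
For orientation, the handoff boils down to the following pattern,
condensed from the diff below (end_io_handoff() is a made-up name; the
real patch additionally skips the handoff for multi-bio and sync IO):

/*
 * Completion side, possibly IRQ context: instead of putting the bio
 * here, pass it to the issuer if the issuer opted in via
 * IOCB_ALLOC_CACHE and the IO is not polled (IOCB_HIPRI).
 */
static void end_io_handoff(struct bio *bio, struct kiocb *iocb, ssize_t ret)
{
	if ((iocb->ki_flags & (IOCB_ALLOC_CACHE | IOCB_HIPRI)) ==
	    IOCB_ALLOC_CACHE) {
		iocb->ki_flags |= IOCB_PUT_CACHE;
		iocb->private = bio;		/* ownership moves to issuer */
		iocb->ki_complete(iocb, ret, 0);
		return;
	}
	iocb->ki_complete(iocb, ret, 0);
	bio_put(bio);				/* old behavior: put from IRQ */
}

/* Issuer side, later, from task context: */
if (kiocb->ki_flags & IOCB_PUT_CACHE)
	bio_put(kiocb->private);		/* in_task(), cached put is safe */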

Comments

Christoph Hellwig Aug. 19, 2021, 9:01 a.m. UTC | #1
On Wed, Aug 18, 2021 at 10:54:45AM -0600, Jens Axboe wrote:
> We previously enabled this for O_DIRECT polled IO; however, io_uring
> completes all IO from task context these days, so it can be enabled for
> that path too. This requires moving the bio_put() out of IRQ context,
> which can be accomplished by passing ownership of the bio back to the
> issuer.
> 
> Use kiocb->private for that, which should be (as far as I can tell) free
> once we get to the completion side of things. Add an IOCB_PUT_CACHE flag
> to tell the issuer that ownership has been passed back, so that the
> issuer can put the bio from a safe context.
> 
> As with polled IO, this is good for a 10% performance increase.
> 
> Signed-off-by: Jens Axboe <axboe@kernel.dk>
> 
> ---
> 
> Just hacked this up and tested it; works for me. I would welcome input
> on alternative methods here, if anyone has good suggestions.

A 10% performance improvement looks really nice, but I don't think we
can just hardcode assumptions about bios in iocb->private. The easiest
would be to call back into the file systems for the freeing, but that
would add an indirect call.
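
For illustration only, that alternative could look something like the
below - ki_dio_free / complete_and_free are made-up names that exist in
no tree, and the point is the extra indirect call per completion:

/* The issuer would register a free routine up front... */
typedef void (*dio_free_fn)(struct kiocb *iocb);

/*
 * ...and the completion path would invoke it, instead of hardcoding
 * the assumption that ->private holds a bio.
 */
static void complete_and_free(struct kiocb *iocb, dio_free_fn free_fn,
			      ssize_t ret)
{
	iocb->ki_complete(iocb, ret, 0);
	if (free_fn)
		free_fn(iocb);		/* indirect call, e.g. does a bio_put() */
}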
Jens Axboe Aug. 19, 2021, 3:15 p.m. UTC | #2
On 8/19/21 3:01 AM, Christoph Hellwig wrote:
> On Wed, Aug 18, 2021 at 10:54:45AM -0600, Jens Axboe wrote:
>> We previously enabled this for O_DIRECT polled IO; however, io_uring
>> completes all IO from task context these days, so it can be enabled for
>> that path too. This requires moving the bio_put() out of IRQ context,
>> which can be accomplished by passing ownership of the bio back to the
>> issuer.
>>
>> Use kiocb->private for that, which should be (as far as I can tell) free
>> once we get to the completion side of things. Add an IOCB_PUT_CACHE flag
>> to tell the issuer that ownership has been passed back, so that the
>> issuer can put the bio from a safe context.
>>
>> As with polled IO, this is good for a 10% performance increase.
>>
>> Signed-off-by: Jens Axboe <axboe@kernel.dk>
>>
>> ---
>>
>> Just hacked this up and tested it; works for me. I would welcome input
>> on alternative methods here, if anyone has good suggestions.
> 
> A 10% performance improvement looks really nice, but I don't think we
> can just hardcode assumptions about bios in iocb->private. The easiest
> would be to call back into the file systems for the freeing, but that
> would add an indirect call.

That's why it's an RFC - while it's not the prettiest approach, the
->ki_complete assigner is also the one that sets IOCB_ALLOC_CACHE, and
hence it's not that hard to verify that it handles IOCB_PUT_CACHE
correctly too. That said, I would prefer a better way of passing the bio
back, as that would open up other optimizations as well. But I have no
good ideas on how to do the passing differently right now.
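
Concretely, the two halves of that contract sit in fs/io_uring.c in the
patch below; condensed:

/* io_prep_rw(): the IRQ driven (non-HIPRI) path opts in */
kiocb->ki_flags |= IOCB_ALLOC_CACHE;
kiocb->ki_complete = io_complete_rw;

/*
 * io_req_task_complete(): always runs in task context, so the
 * deferred put is safe here.
 */
if (kiocb->ki_flags & IOCB_PUT_CACHE)
	bio_put(kiocb->private);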

Patch

diff --git a/block/bio.c b/block/bio.c
index ae9085b97deb..3c838d5cea89 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -684,6 +684,7 @@ void bio_put(struct bio *bio)
 	if (bio_flagged(bio, BIO_PERCPU_CACHE)) {
 		struct bio_alloc_cache *cache;
 
+		WARN_ON_ONCE(!in_task());
 		bio_uninit(bio);
 		cache = per_cpu_ptr(bio->bi_pool->cache, get_cpu());
 		bio_list_add_head(&cache->free_list, bio);
diff --git a/fs/block_dev.c b/fs/block_dev.c
index 7b8deda57e74..f30cc8e21878 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -332,6 +332,7 @@ static void blkdev_bio_end_io(struct bio *bio)
 {
 	struct blkdev_dio *dio = bio->bi_private;
 	bool should_dirty = dio->should_dirty;
+	bool free_bio = true;
 
 	if (bio->bi_status && !dio->bio.bi_status)
 		dio->bio.bi_status = bio->bi_status;
@@ -347,7 +348,18 @@ static void blkdev_bio_end_io(struct bio *bio)
 			} else {
 				ret = blk_status_to_errno(dio->bio.bi_status);
 			}
-
+			/*
+			 * If IRQ driven and not using multi-bio, pass
+			 * ownership of bio to issuer for task-based free. Then
+			 * we can participate in the cached bio allocations.
+			 */
+			if (!dio->multi_bio &&
+			    (iocb->ki_flags & (IOCB_ALLOC_CACHE|IOCB_HIPRI)) ==
+						IOCB_ALLOC_CACHE) {
+				iocb->ki_flags |= IOCB_PUT_CACHE;
+				iocb->private = bio;
+				free_bio = false;
+			}
 			dio->iocb->ki_complete(iocb, ret, 0);
 			if (dio->multi_bio)
 				bio_put(&dio->bio);
@@ -363,7 +375,8 @@ static void blkdev_bio_end_io(struct bio *bio)
 		bio_check_pages_dirty(bio);
 	} else {
 		bio_release_pages(bio, false);
-		bio_put(bio);
+		if (free_bio)
+			bio_put(bio);
 	}
 }
 
diff --git a/fs/io_uring.c b/fs/io_uring.c
index f984cd1473aa..e5e69bd24d53 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2581,6 +2581,12 @@ static bool __io_complete_rw_common(struct io_kiocb *req, long res)
 
 static void io_req_task_complete(struct io_kiocb *req)
 {
+#ifdef CONFIG_BLOCK
+	struct kiocb *kiocb = &req->rw.kiocb;
+
+	if (kiocb->ki_flags & IOCB_PUT_CACHE)
+		bio_put(kiocb->private);
+#endif
 	__io_req_complete(req, 0, req->result, io_put_rw_kbuf(req));
 }
 
@@ -2786,6 +2792,13 @@ static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 	} else {
 		if (kiocb->ki_flags & IOCB_HIPRI)
 			return -EINVAL;
+		/*
+		 * IRQ driven IO can participate in the bio alloc cache, since
+		 * we don't complete from IRQ anymore. This requires the caller
+		 * to pass back ownership of the bio before calling ki_complete,
+		 * and then ki_complete will put it from a safe context.
+		 */
+		kiocb->ki_flags |= IOCB_ALLOC_CACHE;
 		kiocb->ki_complete = io_complete_rw;
 	}
 
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 96a0affa7b2d..27bfe25106ba 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -321,6 +321,8 @@ enum rw_hint {
 #define IOCB_NOIO		(1 << 20)
 /* can use bio alloc cache */
 #define IOCB_ALLOC_CACHE	(1 << 21)
+/* bio ownership (and put) passed back to caller */
+#define IOCB_PUT_CACHE		(1 << 22)
 
 struct kiocb {
 	struct file		*ki_filp;
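
For completeness, the path this patch optimizes is exercised by ordinary
IRQ driven O_DIRECT IO on a block device via io_uring. A minimal
liburing sketch (device path and sizes are placeholders, error handling
elided):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <liburing.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	void *buf;
	int fd;

	/* No IORING_SETUP_IOPOLL: we want IRQ driven completions, which
	 * with this patch can recycle bios through the per-cpu cache. */
	io_uring_queue_init(8, &ring, 0);
	fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
	posix_memalign(&buf, 4096, 4096);

	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_read(sqe, fd, buf, 4096, 0);
	io_uring_submit(&ring);

	io_uring_wait_cqe(&ring, &cqe);
	io_uring_cqe_seen(&ring, cqe);

	free(buf);
	io_uring_queue_exit(&ring);
	return 0;
}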